Test Report: KVM_Linux_crio 19264

9e9f0a1e532281828d0abd077e39f9c759354b34:2024-07-17:35371

Failed tests (34/320)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 153.82
41 TestAddons/parallel/MetricsServer 314.66
54 TestAddons/StoppedEnableDisable 154.4
106 TestFunctional/parallel/PersistentVolumeClaim 190.14
173 TestMultiControlPlane/serial/StopSecondaryNode 141.66
175 TestMultiControlPlane/serial/RestartSecondaryNode 52.6
177 TestMultiControlPlane/serial/RestartClusterKeepsNodes 366.92
180 TestMultiControlPlane/serial/StopCluster 141.69
240 TestMultiNode/serial/RestartKeepsNodes 328.08
242 TestMultiNode/serial/StopMultiNode 141.22
249 TestPreload 298.92
257 TestKubernetesUpgrade 350.1
294 TestPause/serial/SecondStartNoReconfiguration 37.94
328 TestStartStop/group/old-k8s-version/serial/FirstStart 288.82
348 TestStartStop/group/embed-certs/serial/Stop 139.07
352 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.17
354 TestStartStop/group/no-preload/serial/Stop 139.04
355 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
356 TestStartStop/group/old-k8s-version/serial/DeployApp 0.47
357 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 84.61
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
360 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
365 TestStartStop/group/old-k8s-version/serial/SecondStart 742.71
366 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.15
367 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.19
368 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.27
369 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.38
370 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 507.52
371 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 395.6
372 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 263.59
373 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 141.87
380 TestStartStop/group/newest-cni/serial/SecondStart 26.26
383 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.44
384 TestStartStop/group/newest-cni/serial/Pause 1.7
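Any entry above can normally be re-run in isolation from a minikube source checkout. The sketch below is an assumption based on the usual minikube integration-test layout (the go test package path and the -minikube-start-args flag are not recorded in this report); the driver and runtime flags mirror the ones that appear in the audit log further down.

	# hypothetical local re-run of a single failed test, assuming out/minikube-linux-amd64 is already built
	go test ./test/integration -run "TestAddons/parallel/Ingress" -v -timeout 60m \
	  -args --minikube-start-args="--driver=kvm2 --container-runtime=crio"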
TestAddons/parallel/Ingress (153.82s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-384227 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-384227 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-384227 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [30ead786-c960-40a8-a321-2f7f774d10f6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [30ead786-c960-40a8-a321-2f7f774d10f6] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004497993s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-384227 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-384227 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.175188021s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-384227 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-384227 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.177
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-384227 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-384227 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-384227 addons disable ingress --alsologtostderr -v=1: (7.676301505s)
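The "ssh: Process exited with status 28" above is curl's exit code 28 (operation timed out): the in-VM request to the ingress controller at 127.0.0.1 never returned before the SSH command gave up. A hypothetical manual spot-check along the same lines, reconstructed from the steps above rather than taken from this report:

	kubectl --context addons-384227 -n ingress-nginx get pods -l app.kubernetes.io/component=controller -o wide
	kubectl --context addons-384227 get ingress -A
	out/minikube-linux-amd64 -p addons-384227 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"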
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-384227 -n addons-384227
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-384227 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-384227 logs -n 25: (1.338080086s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	| delete  | -p download-only-703106                                                                     | download-only-703106 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	| delete  | -p download-only-962960                                                                     | download-only-962960 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	| delete  | -p download-only-030322                                                                     | download-only-030322 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	| delete  | -p download-only-703106                                                                     | download-only-703106 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-874768 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC |                     |
	|         | binary-mirror-874768                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:33007                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-874768                                                                     | binary-mirror-874768 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	| addons  | disable dashboard -p                                                                        | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC |                     |
	|         | addons-384227                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC |                     |
	|         | addons-384227                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-384227 --wait=true                                                                | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:27 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:28 UTC | 17 Jul 24 00:28 UTC |
	|         | -p addons-384227                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:28 UTC | 17 Jul 24 00:28 UTC |
	|         | addons-384227                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:28 UTC | 17 Jul 24 00:28 UTC |
	|         | -p addons-384227                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-384227 ip                                                                            | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:28 UTC | 17 Jul 24 00:28 UTC |
	| addons  | addons-384227 addons disable                                                                | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:28 UTC | 17 Jul 24 00:28 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-384227 addons disable                                                                | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:28 UTC | 17 Jul 24 00:28 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-384227 ssh cat                                                                       | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:28 UTC | 17 Jul 24 00:28 UTC |
	|         | /opt/local-path-provisioner/pvc-d8a1bc13-63c9-4ac2-b2eb-d06e01a50e0a_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-384227 addons disable                                                                | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:28 UTC | 17 Jul 24 00:28 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:28 UTC | 17 Jul 24 00:28 UTC |
	|         | addons-384227                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-384227 ssh curl -s                                                                   | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:28 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-384227 addons                                                                        | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:29 UTC | 17 Jul 24 00:29 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-384227 addons                                                                        | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:29 UTC | 17 Jul 24 00:29 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-384227 ip                                                                            | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:30 UTC | 17 Jul 24 00:30 UTC |
	| addons  | addons-384227 addons disable                                                                | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:30 UTC | 17 Jul 24 00:30 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-384227 addons disable                                                                | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:30 UTC | 17 Jul 24 00:31 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:24:27
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:24:27.074484   13048 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:24:27.074623   13048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:24:27.074633   13048 out.go:304] Setting ErrFile to fd 2...
	I0717 00:24:27.074637   13048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:24:27.074794   13048 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 00:24:27.075353   13048 out.go:298] Setting JSON to false
	I0717 00:24:27.076131   13048 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":409,"bootTime":1721175458,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:24:27.076184   13048 start.go:139] virtualization: kvm guest
	I0717 00:24:27.078476   13048 out.go:177] * [addons-384227] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 00:24:27.080506   13048 notify.go:220] Checking for updates...
	I0717 00:24:27.080528   13048 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 00:24:27.082078   13048 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:24:27.083578   13048 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 00:24:27.085073   13048 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 00:24:27.086486   13048 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 00:24:27.087949   13048 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:24:27.089576   13048 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:24:27.121502   13048 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 00:24:27.123031   13048 start.go:297] selected driver: kvm2
	I0717 00:24:27.123054   13048 start.go:901] validating driver "kvm2" against <nil>
	I0717 00:24:27.123065   13048 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:24:27.123715   13048 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:24:27.123790   13048 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19264-3908/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 00:24:27.138046   13048 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 00:24:27.138086   13048 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 00:24:27.138285   13048 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:24:27.138339   13048 cni.go:84] Creating CNI manager for ""
	I0717 00:24:27.138350   13048 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 00:24:27.138361   13048 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 00:24:27.138405   13048 start.go:340] cluster config:
	{Name:addons-384227 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-384227 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:24:27.138505   13048 iso.go:125] acquiring lock: {Name:mk1e382ea32906eee794c9b90164432d2562b456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:24:27.140516   13048 out.go:177] * Starting "addons-384227" primary control-plane node in "addons-384227" cluster
	I0717 00:24:27.142091   13048 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:24:27.142136   13048 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 00:24:27.142147   13048 cache.go:56] Caching tarball of preloaded images
	I0717 00:24:27.142214   13048 preload.go:172] Found /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 00:24:27.142224   13048 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:24:27.142525   13048 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/config.json ...
	I0717 00:24:27.142547   13048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/config.json: {Name:mk37e22c86742f6eea9622c68c2e24dce23ebd10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:27.142718   13048 start.go:360] acquireMachinesLock for addons-384227: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 00:24:27.142759   13048 start.go:364] duration metric: took 27.969µs to acquireMachinesLock for "addons-384227"
	I0717 00:24:27.142775   13048 start.go:93] Provisioning new machine with config: &{Name:addons-384227 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-384227 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:24:27.142828   13048 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 00:24:27.144468   13048 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0717 00:24:27.144597   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:24:27.144630   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:24:27.158336   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41541
	I0717 00:24:27.158874   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:24:27.159430   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:24:27.159453   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:24:27.159862   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:24:27.160038   13048 main.go:141] libmachine: (addons-384227) Calling .GetMachineName
	I0717 00:24:27.160218   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:24:27.160358   13048 start.go:159] libmachine.API.Create for "addons-384227" (driver="kvm2")
	I0717 00:24:27.160387   13048 client.go:168] LocalClient.Create starting
	I0717 00:24:27.160436   13048 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem
	I0717 00:24:27.275918   13048 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem
	I0717 00:24:27.438418   13048 main.go:141] libmachine: Running pre-create checks...
	I0717 00:24:27.438439   13048 main.go:141] libmachine: (addons-384227) Calling .PreCreateCheck
	I0717 00:24:27.438994   13048 main.go:141] libmachine: (addons-384227) Calling .GetConfigRaw
	I0717 00:24:27.439408   13048 main.go:141] libmachine: Creating machine...
	I0717 00:24:27.439423   13048 main.go:141] libmachine: (addons-384227) Calling .Create
	I0717 00:24:27.439597   13048 main.go:141] libmachine: (addons-384227) Creating KVM machine...
	I0717 00:24:27.440792   13048 main.go:141] libmachine: (addons-384227) DBG | found existing default KVM network
	I0717 00:24:27.441584   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:27.441454   13070 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002091f0}
	I0717 00:24:27.441626   13048 main.go:141] libmachine: (addons-384227) DBG | created network xml: 
	I0717 00:24:27.441648   13048 main.go:141] libmachine: (addons-384227) DBG | <network>
	I0717 00:24:27.441656   13048 main.go:141] libmachine: (addons-384227) DBG |   <name>mk-addons-384227</name>
	I0717 00:24:27.441687   13048 main.go:141] libmachine: (addons-384227) DBG |   <dns enable='no'/>
	I0717 00:24:27.441701   13048 main.go:141] libmachine: (addons-384227) DBG |   
	I0717 00:24:27.441710   13048 main.go:141] libmachine: (addons-384227) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0717 00:24:27.441719   13048 main.go:141] libmachine: (addons-384227) DBG |     <dhcp>
	I0717 00:24:27.441728   13048 main.go:141] libmachine: (addons-384227) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0717 00:24:27.441734   13048 main.go:141] libmachine: (addons-384227) DBG |     </dhcp>
	I0717 00:24:27.441749   13048 main.go:141] libmachine: (addons-384227) DBG |   </ip>
	I0717 00:24:27.441760   13048 main.go:141] libmachine: (addons-384227) DBG |   
	I0717 00:24:27.441770   13048 main.go:141] libmachine: (addons-384227) DBG | </network>
	I0717 00:24:27.441782   13048 main.go:141] libmachine: (addons-384227) DBG | 
	I0717 00:24:27.446926   13048 main.go:141] libmachine: (addons-384227) DBG | trying to create private KVM network mk-addons-384227 192.168.39.0/24...
	I0717 00:24:27.509857   13048 main.go:141] libmachine: (addons-384227) DBG | private KVM network mk-addons-384227 192.168.39.0/24 created
	I0717 00:24:27.509905   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:27.509829   13070 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 00:24:27.509933   13048 main.go:141] libmachine: (addons-384227) Setting up store path in /home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227 ...
	I0717 00:24:27.509967   13048 main.go:141] libmachine: (addons-384227) Building disk image from file:///home/jenkins/minikube-integration/19264-3908/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 00:24:27.509986   13048 main.go:141] libmachine: (addons-384227) Downloading /home/jenkins/minikube-integration/19264-3908/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19264-3908/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 00:24:27.749828   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:27.749712   13070 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa...
	I0717 00:24:27.995097   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:27.994934   13070 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/addons-384227.rawdisk...
	I0717 00:24:27.995131   13048 main.go:141] libmachine: (addons-384227) DBG | Writing magic tar header
	I0717 00:24:27.995146   13048 main.go:141] libmachine: (addons-384227) DBG | Writing SSH key tar header
	I0717 00:24:27.995160   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:27.995041   13070 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227 ...
	I0717 00:24:27.995173   13048 main.go:141] libmachine: (addons-384227) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227 (perms=drwx------)
	I0717 00:24:27.995192   13048 main.go:141] libmachine: (addons-384227) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227
	I0717 00:24:27.995204   13048 main.go:141] libmachine: (addons-384227) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube/machines
	I0717 00:24:27.995213   13048 main.go:141] libmachine: (addons-384227) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 00:24:27.995223   13048 main.go:141] libmachine: (addons-384227) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908
	I0717 00:24:27.995232   13048 main.go:141] libmachine: (addons-384227) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 00:24:27.995243   13048 main.go:141] libmachine: (addons-384227) DBG | Checking permissions on dir: /home/jenkins
	I0717 00:24:27.995254   13048 main.go:141] libmachine: (addons-384227) DBG | Checking permissions on dir: /home
	I0717 00:24:27.995265   13048 main.go:141] libmachine: (addons-384227) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube/machines (perms=drwxr-xr-x)
	I0717 00:24:27.995280   13048 main.go:141] libmachine: (addons-384227) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube (perms=drwxr-xr-x)
	I0717 00:24:27.995289   13048 main.go:141] libmachine: (addons-384227) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908 (perms=drwxrwxr-x)
	I0717 00:24:27.995297   13048 main.go:141] libmachine: (addons-384227) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 00:24:27.995304   13048 main.go:141] libmachine: (addons-384227) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 00:24:27.995311   13048 main.go:141] libmachine: (addons-384227) Creating domain...
	I0717 00:24:27.995319   13048 main.go:141] libmachine: (addons-384227) DBG | Skipping /home - not owner
	I0717 00:24:27.996216   13048 main.go:141] libmachine: (addons-384227) define libvirt domain using xml: 
	I0717 00:24:27.996240   13048 main.go:141] libmachine: (addons-384227) <domain type='kvm'>
	I0717 00:24:27.996250   13048 main.go:141] libmachine: (addons-384227)   <name>addons-384227</name>
	I0717 00:24:27.996255   13048 main.go:141] libmachine: (addons-384227)   <memory unit='MiB'>4000</memory>
	I0717 00:24:27.996260   13048 main.go:141] libmachine: (addons-384227)   <vcpu>2</vcpu>
	I0717 00:24:27.996271   13048 main.go:141] libmachine: (addons-384227)   <features>
	I0717 00:24:27.996279   13048 main.go:141] libmachine: (addons-384227)     <acpi/>
	I0717 00:24:27.996283   13048 main.go:141] libmachine: (addons-384227)     <apic/>
	I0717 00:24:27.996289   13048 main.go:141] libmachine: (addons-384227)     <pae/>
	I0717 00:24:27.996293   13048 main.go:141] libmachine: (addons-384227)     
	I0717 00:24:27.996298   13048 main.go:141] libmachine: (addons-384227)   </features>
	I0717 00:24:27.996303   13048 main.go:141] libmachine: (addons-384227)   <cpu mode='host-passthrough'>
	I0717 00:24:27.996308   13048 main.go:141] libmachine: (addons-384227)   
	I0717 00:24:27.996317   13048 main.go:141] libmachine: (addons-384227)   </cpu>
	I0717 00:24:27.996347   13048 main.go:141] libmachine: (addons-384227)   <os>
	I0717 00:24:27.996372   13048 main.go:141] libmachine: (addons-384227)     <type>hvm</type>
	I0717 00:24:27.996383   13048 main.go:141] libmachine: (addons-384227)     <boot dev='cdrom'/>
	I0717 00:24:27.996395   13048 main.go:141] libmachine: (addons-384227)     <boot dev='hd'/>
	I0717 00:24:27.996406   13048 main.go:141] libmachine: (addons-384227)     <bootmenu enable='no'/>
	I0717 00:24:27.996416   13048 main.go:141] libmachine: (addons-384227)   </os>
	I0717 00:24:27.996428   13048 main.go:141] libmachine: (addons-384227)   <devices>
	I0717 00:24:27.996438   13048 main.go:141] libmachine: (addons-384227)     <disk type='file' device='cdrom'>
	I0717 00:24:27.996492   13048 main.go:141] libmachine: (addons-384227)       <source file='/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/boot2docker.iso'/>
	I0717 00:24:27.996512   13048 main.go:141] libmachine: (addons-384227)       <target dev='hdc' bus='scsi'/>
	I0717 00:24:27.996518   13048 main.go:141] libmachine: (addons-384227)       <readonly/>
	I0717 00:24:27.996523   13048 main.go:141] libmachine: (addons-384227)     </disk>
	I0717 00:24:27.996534   13048 main.go:141] libmachine: (addons-384227)     <disk type='file' device='disk'>
	I0717 00:24:27.996542   13048 main.go:141] libmachine: (addons-384227)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 00:24:27.996549   13048 main.go:141] libmachine: (addons-384227)       <source file='/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/addons-384227.rawdisk'/>
	I0717 00:24:27.996559   13048 main.go:141] libmachine: (addons-384227)       <target dev='hda' bus='virtio'/>
	I0717 00:24:27.996564   13048 main.go:141] libmachine: (addons-384227)     </disk>
	I0717 00:24:27.996571   13048 main.go:141] libmachine: (addons-384227)     <interface type='network'>
	I0717 00:24:27.996577   13048 main.go:141] libmachine: (addons-384227)       <source network='mk-addons-384227'/>
	I0717 00:24:27.996583   13048 main.go:141] libmachine: (addons-384227)       <model type='virtio'/>
	I0717 00:24:27.996588   13048 main.go:141] libmachine: (addons-384227)     </interface>
	I0717 00:24:27.996597   13048 main.go:141] libmachine: (addons-384227)     <interface type='network'>
	I0717 00:24:27.996622   13048 main.go:141] libmachine: (addons-384227)       <source network='default'/>
	I0717 00:24:27.996638   13048 main.go:141] libmachine: (addons-384227)       <model type='virtio'/>
	I0717 00:24:27.996646   13048 main.go:141] libmachine: (addons-384227)     </interface>
	I0717 00:24:27.996651   13048 main.go:141] libmachine: (addons-384227)     <serial type='pty'>
	I0717 00:24:27.996670   13048 main.go:141] libmachine: (addons-384227)       <target port='0'/>
	I0717 00:24:27.996677   13048 main.go:141] libmachine: (addons-384227)     </serial>
	I0717 00:24:27.996683   13048 main.go:141] libmachine: (addons-384227)     <console type='pty'>
	I0717 00:24:27.996690   13048 main.go:141] libmachine: (addons-384227)       <target type='serial' port='0'/>
	I0717 00:24:27.996694   13048 main.go:141] libmachine: (addons-384227)     </console>
	I0717 00:24:27.996699   13048 main.go:141] libmachine: (addons-384227)     <rng model='virtio'>
	I0717 00:24:27.996705   13048 main.go:141] libmachine: (addons-384227)       <backend model='random'>/dev/random</backend>
	I0717 00:24:27.996714   13048 main.go:141] libmachine: (addons-384227)     </rng>
	I0717 00:24:27.996719   13048 main.go:141] libmachine: (addons-384227)     
	I0717 00:24:27.996728   13048 main.go:141] libmachine: (addons-384227)     
	I0717 00:24:27.996733   13048 main.go:141] libmachine: (addons-384227)   </devices>
	I0717 00:24:27.996742   13048 main.go:141] libmachine: (addons-384227) </domain>
	I0717 00:24:27.996768   13048 main.go:141] libmachine: (addons-384227) 
	I0717 00:24:28.002420   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:b7:99:98 in network default
	I0717 00:24:28.003003   13048 main.go:141] libmachine: (addons-384227) Ensuring networks are active...
	I0717 00:24:28.003032   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:28.003657   13048 main.go:141] libmachine: (addons-384227) Ensuring network default is active
	I0717 00:24:28.004034   13048 main.go:141] libmachine: (addons-384227) Ensuring network mk-addons-384227 is active
	I0717 00:24:28.004417   13048 main.go:141] libmachine: (addons-384227) Getting domain xml...
	I0717 00:24:28.004980   13048 main.go:141] libmachine: (addons-384227) Creating domain...
	I0717 00:24:29.389775   13048 main.go:141] libmachine: (addons-384227) Waiting to get IP...
	I0717 00:24:29.390561   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:29.391065   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find current IP address of domain addons-384227 in network mk-addons-384227
	I0717 00:24:29.391118   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:29.391050   13070 retry.go:31] will retry after 246.233745ms: waiting for machine to come up
	I0717 00:24:29.638672   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:29.639131   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find current IP address of domain addons-384227 in network mk-addons-384227
	I0717 00:24:29.639158   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:29.639093   13070 retry.go:31] will retry after 350.230795ms: waiting for machine to come up
	I0717 00:24:29.990458   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:29.991013   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find current IP address of domain addons-384227 in network mk-addons-384227
	I0717 00:24:29.991042   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:29.990942   13070 retry.go:31] will retry after 464.494549ms: waiting for machine to come up
	I0717 00:24:30.456415   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:30.456893   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find current IP address of domain addons-384227 in network mk-addons-384227
	I0717 00:24:30.456921   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:30.456845   13070 retry.go:31] will retry after 483.712506ms: waiting for machine to come up
	I0717 00:24:30.942564   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:30.942961   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find current IP address of domain addons-384227 in network mk-addons-384227
	I0717 00:24:30.942993   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:30.942937   13070 retry.go:31] will retry after 746.760134ms: waiting for machine to come up
	I0717 00:24:31.691082   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:31.691522   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find current IP address of domain addons-384227 in network mk-addons-384227
	I0717 00:24:31.691551   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:31.691473   13070 retry.go:31] will retry after 656.464877ms: waiting for machine to come up
	I0717 00:24:32.349740   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:32.350212   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find current IP address of domain addons-384227 in network mk-addons-384227
	I0717 00:24:32.350238   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:32.350143   13070 retry.go:31] will retry after 719.273391ms: waiting for machine to come up
	I0717 00:24:33.070976   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:33.071423   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find current IP address of domain addons-384227 in network mk-addons-384227
	I0717 00:24:33.071445   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:33.071382   13070 retry.go:31] will retry after 1.002819649s: waiting for machine to come up
	I0717 00:24:34.075655   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:34.076036   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find current IP address of domain addons-384227 in network mk-addons-384227
	I0717 00:24:34.076077   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:34.076003   13070 retry.go:31] will retry after 1.361490363s: waiting for machine to come up
	I0717 00:24:35.439381   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:35.439871   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find current IP address of domain addons-384227 in network mk-addons-384227
	I0717 00:24:35.439892   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:35.439830   13070 retry.go:31] will retry after 1.488511708s: waiting for machine to come up
	I0717 00:24:36.930494   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:36.930990   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find current IP address of domain addons-384227 in network mk-addons-384227
	I0717 00:24:36.931019   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:36.930923   13070 retry.go:31] will retry after 2.689620809s: waiting for machine to come up
	I0717 00:24:39.623559   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:39.624033   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find current IP address of domain addons-384227 in network mk-addons-384227
	I0717 00:24:39.624062   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:39.623976   13070 retry.go:31] will retry after 3.048939201s: waiting for machine to come up
	I0717 00:24:42.674622   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:42.675028   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find current IP address of domain addons-384227 in network mk-addons-384227
	I0717 00:24:42.675052   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:42.674938   13070 retry.go:31] will retry after 3.06125912s: waiting for machine to come up
	I0717 00:24:45.739956   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:45.740374   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find current IP address of domain addons-384227 in network mk-addons-384227
	I0717 00:24:45.740395   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:45.740329   13070 retry.go:31] will retry after 3.704664568s: waiting for machine to come up
	I0717 00:24:49.447678   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:49.448178   13048 main.go:141] libmachine: (addons-384227) Found IP for machine: 192.168.39.177
	I0717 00:24:49.448194   13048 main.go:141] libmachine: (addons-384227) Reserving static IP address...
	I0717 00:24:49.448202   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has current primary IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:49.448622   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find host DHCP lease matching {name: "addons-384227", mac: "52:54:00:88:64:cd", ip: "192.168.39.177"} in network mk-addons-384227
	I0717 00:24:49.519355   13048 main.go:141] libmachine: (addons-384227) DBG | Getting to WaitForSSH function...
	I0717 00:24:49.519436   13048 main.go:141] libmachine: (addons-384227) Reserved static IP address: 192.168.39.177
	I0717 00:24:49.519487   13048 main.go:141] libmachine: (addons-384227) Waiting for SSH to be available...
	I0717 00:24:49.521718   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:49.522182   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:minikube Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:49.522213   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:49.522309   13048 main.go:141] libmachine: (addons-384227) DBG | Using SSH client type: external
	I0717 00:24:49.522335   13048 main.go:141] libmachine: (addons-384227) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa (-rw-------)
	I0717 00:24:49.522379   13048 main.go:141] libmachine: (addons-384227) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.177 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 00:24:49.522400   13048 main.go:141] libmachine: (addons-384227) DBG | About to run SSH command:
	I0717 00:24:49.522411   13048 main.go:141] libmachine: (addons-384227) DBG | exit 0
	I0717 00:24:49.650725   13048 main.go:141] libmachine: (addons-384227) DBG | SSH cmd err, output: <nil>: 
	I0717 00:24:49.650974   13048 main.go:141] libmachine: (addons-384227) KVM machine creation complete!
	I0717 00:24:49.651284   13048 main.go:141] libmachine: (addons-384227) Calling .GetConfigRaw
	I0717 00:24:49.651805   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:24:49.651997   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:24:49.652159   13048 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 00:24:49.652174   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:24:49.653307   13048 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 00:24:49.653321   13048 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 00:24:49.653326   13048 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 00:24:49.653331   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:24:49.655423   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:49.655740   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:49.655777   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:49.655869   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:24:49.656033   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:49.656178   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:49.656284   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:24:49.656443   13048 main.go:141] libmachine: Using SSH client type: native
	I0717 00:24:49.656628   13048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0717 00:24:49.656639   13048 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 00:24:49.753698   13048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:24:49.753716   13048 main.go:141] libmachine: Detecting the provisioner...
	I0717 00:24:49.753724   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:24:49.756257   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:49.756672   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:49.756695   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:49.756868   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:24:49.757058   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:49.757212   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:49.757326   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:24:49.757527   13048 main.go:141] libmachine: Using SSH client type: native
	I0717 00:24:49.757691   13048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0717 00:24:49.757700   13048 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 00:24:49.859259   13048 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 00:24:49.859361   13048 main.go:141] libmachine: found compatible host: buildroot
	I0717 00:24:49.859379   13048 main.go:141] libmachine: Provisioning with buildroot...
	I0717 00:24:49.859391   13048 main.go:141] libmachine: (addons-384227) Calling .GetMachineName
	I0717 00:24:49.859626   13048 buildroot.go:166] provisioning hostname "addons-384227"
	I0717 00:24:49.859651   13048 main.go:141] libmachine: (addons-384227) Calling .GetMachineName
	I0717 00:24:49.859802   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:24:49.862892   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:49.863299   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:49.863323   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:49.863481   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:24:49.863672   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:49.863801   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:49.863922   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:24:49.864083   13048 main.go:141] libmachine: Using SSH client type: native
	I0717 00:24:49.864301   13048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0717 00:24:49.864319   13048 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-384227 && echo "addons-384227" | sudo tee /etc/hostname
	I0717 00:24:49.976533   13048 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-384227
	
	I0717 00:24:49.976554   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:24:49.979356   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:49.979659   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:49.979685   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:49.979838   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:24:49.980027   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:49.980210   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:49.980319   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:24:49.980478   13048 main.go:141] libmachine: Using SSH client type: native
	I0717 00:24:49.980626   13048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0717 00:24:49.980641   13048 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-384227' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-384227/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-384227' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 00:24:50.087029   13048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:24:50.087062   13048 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 00:24:50.087081   13048 buildroot.go:174] setting up certificates
	I0717 00:24:50.087101   13048 provision.go:84] configureAuth start
	I0717 00:24:50.087112   13048 main.go:141] libmachine: (addons-384227) Calling .GetMachineName
	I0717 00:24:50.087355   13048 main.go:141] libmachine: (addons-384227) Calling .GetIP
	I0717 00:24:50.090271   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.090619   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:50.090645   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.090775   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:24:50.092710   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.093092   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:50.093120   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.093222   13048 provision.go:143] copyHostCerts
	I0717 00:24:50.093306   13048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 00:24:50.093444   13048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 00:24:50.093512   13048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 00:24:50.093569   13048 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.addons-384227 san=[127.0.0.1 192.168.39.177 addons-384227 localhost minikube]
	I0717 00:24:50.245507   13048 provision.go:177] copyRemoteCerts
	I0717 00:24:50.245576   13048 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:24:50.245604   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:24:50.248299   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.248595   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:50.248618   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.248802   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:24:50.248980   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:50.249124   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:24:50.249255   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:24:50.329304   13048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 00:24:50.353218   13048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 00:24:50.375853   13048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 00:24:50.398480   13048 provision.go:87] duration metric: took 311.36337ms to configureAuth
	I0717 00:24:50.398514   13048 buildroot.go:189] setting minikube options for container-runtime
	I0717 00:24:50.398719   13048 config.go:182] Loaded profile config "addons-384227": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:24:50.398799   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:24:50.401391   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.401699   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:50.401721   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.402060   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:24:50.402245   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:50.402435   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:50.402587   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:24:50.402737   13048 main.go:141] libmachine: Using SSH client type: native
	I0717 00:24:50.402890   13048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0717 00:24:50.402904   13048 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 00:24:50.657068   13048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 00:24:50.657100   13048 main.go:141] libmachine: Checking connection to Docker...
	I0717 00:24:50.657110   13048 main.go:141] libmachine: (addons-384227) Calling .GetURL
	I0717 00:24:50.658487   13048 main.go:141] libmachine: (addons-384227) DBG | Using libvirt version 6000000
	I0717 00:24:50.660679   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.660935   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:50.660964   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.661150   13048 main.go:141] libmachine: Docker is up and running!
	I0717 00:24:50.661166   13048 main.go:141] libmachine: Reticulating splines...
	I0717 00:24:50.661172   13048 client.go:171] duration metric: took 23.500775223s to LocalClient.Create
	I0717 00:24:50.661194   13048 start.go:167] duration metric: took 23.500838094s to libmachine.API.Create "addons-384227"
	I0717 00:24:50.661212   13048 start.go:293] postStartSetup for "addons-384227" (driver="kvm2")
	I0717 00:24:50.661223   13048 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 00:24:50.661245   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:24:50.661478   13048 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 00:24:50.661500   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:24:50.663584   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.663952   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:50.663983   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.664123   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:24:50.664293   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:50.664440   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:24:50.664575   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:24:50.745266   13048 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 00:24:50.749501   13048 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 00:24:50.749526   13048 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 00:24:50.749591   13048 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 00:24:50.749612   13048 start.go:296] duration metric: took 88.394917ms for postStartSetup
	I0717 00:24:50.749641   13048 main.go:141] libmachine: (addons-384227) Calling .GetConfigRaw
	I0717 00:24:50.750313   13048 main.go:141] libmachine: (addons-384227) Calling .GetIP
	I0717 00:24:50.752448   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.752927   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:50.752954   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.753237   13048 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/config.json ...
	I0717 00:24:50.753417   13048 start.go:128] duration metric: took 23.610580206s to createHost
	I0717 00:24:50.753438   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:24:50.755334   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.755581   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:50.755606   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.755731   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:24:50.755908   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:50.756053   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:50.756169   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:24:50.756303   13048 main.go:141] libmachine: Using SSH client type: native
	I0717 00:24:50.756507   13048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0717 00:24:50.756520   13048 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 00:24:50.855130   13048 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721175890.830052569
	
	I0717 00:24:50.855157   13048 fix.go:216] guest clock: 1721175890.830052569
	I0717 00:24:50.855164   13048 fix.go:229] Guest: 2024-07-17 00:24:50.830052569 +0000 UTC Remote: 2024-07-17 00:24:50.753429482 +0000 UTC m=+23.711520667 (delta=76.623087ms)
	I0717 00:24:50.855200   13048 fix.go:200] guest clock delta is within tolerance: 76.623087ms
	I0717 00:24:50.855206   13048 start.go:83] releasing machines lock for "addons-384227", held for 23.71243843s
	I0717 00:24:50.855226   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:24:50.855470   13048 main.go:141] libmachine: (addons-384227) Calling .GetIP
	I0717 00:24:50.857887   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.858179   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:50.858203   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.858307   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:24:50.858804   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:24:50.858968   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:24:50.859055   13048 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 00:24:50.859100   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:24:50.859133   13048 ssh_runner.go:195] Run: cat /version.json
	I0717 00:24:50.859153   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:24:50.861628   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.861864   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.862042   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:50.862068   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.862196   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:50.862207   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:24:50.862219   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.862338   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:24:50.862421   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:50.862508   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:50.862595   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:24:50.862664   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:24:50.862743   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:24:50.862766   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:24:50.962290   13048 ssh_runner.go:195] Run: systemctl --version
	I0717 00:24:50.968296   13048 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 00:24:51.126060   13048 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 00:24:51.132798   13048 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 00:24:51.132862   13048 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:24:51.148988   13048 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 00:24:51.149013   13048 start.go:495] detecting cgroup driver to use...
	I0717 00:24:51.149072   13048 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 00:24:51.165585   13048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 00:24:51.178934   13048 docker.go:217] disabling cri-docker service (if available) ...
	I0717 00:24:51.179047   13048 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 00:24:51.193373   13048 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 00:24:51.207755   13048 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 00:24:51.325012   13048 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 00:24:51.477331   13048 docker.go:233] disabling docker service ...
	I0717 00:24:51.477390   13048 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 00:24:51.491571   13048 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 00:24:51.504024   13048 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 00:24:51.615884   13048 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 00:24:51.727827   13048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 00:24:51.741500   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 00:24:51.759822   13048 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 00:24:51.759883   13048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:24:51.769890   13048 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 00:24:51.769959   13048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:24:51.779866   13048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:24:51.789639   13048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:24:51.799615   13048 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 00:24:51.809757   13048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:24:51.819292   13048 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:24:51.836347   13048 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
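	Taken together, the sed/grep edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following drop-in values (a sketch assembled from the commands in this log; the TOML section headers are assumed from CRI-O's stock config layout and do not appear in the output itself):
	
	# section names assumed; values taken from the sed commands logged above
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"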
	I0717 00:24:51.846423   13048 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 00:24:51.855588   13048 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 00:24:51.855639   13048 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 00:24:51.869161   13048 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 00:24:51.879221   13048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:24:52.004837   13048 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 00:24:52.145394   13048 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 00:24:52.145489   13048 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 00:24:52.150717   13048 start.go:563] Will wait 60s for crictl version
	I0717 00:24:52.150783   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:24:52.154425   13048 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 00:24:52.192719   13048 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 00:24:52.192911   13048 ssh_runner.go:195] Run: crio --version
	I0717 00:24:52.221078   13048 ssh_runner.go:195] Run: crio --version
	I0717 00:24:52.251518   13048 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 00:24:52.252872   13048 main.go:141] libmachine: (addons-384227) Calling .GetIP
	I0717 00:24:52.255559   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:52.255913   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:52.255944   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:52.256189   13048 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 00:24:52.260455   13048 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:24:52.273283   13048 kubeadm.go:883] updating cluster {Name:addons-384227 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:addons-384227 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 00:24:52.273390   13048 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:24:52.273430   13048 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:24:52.312412   13048 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 00:24:52.312475   13048 ssh_runner.go:195] Run: which lz4
	I0717 00:24:52.316511   13048 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 00:24:52.320888   13048 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 00:24:52.320913   13048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 00:24:53.634306   13048 crio.go:462] duration metric: took 1.317846548s to copy over tarball
	I0717 00:24:53.634376   13048 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 00:24:55.850447   13048 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.216037589s)
	I0717 00:24:55.850478   13048 crio.go:469] duration metric: took 2.216140314s to extract the tarball
	I0717 00:24:55.850486   13048 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 00:24:55.887433   13048 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:24:55.930501   13048 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 00:24:55.930529   13048 cache_images.go:84] Images are preloaded, skipping loading
	I0717 00:24:55.930538   13048 kubeadm.go:934] updating node { 192.168.39.177 8443 v1.30.2 crio true true} ...
	I0717 00:24:55.930658   13048 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-384227 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.177
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:addons-384227 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 00:24:55.930723   13048 ssh_runner.go:195] Run: crio config
	I0717 00:24:55.979197   13048 cni.go:84] Creating CNI manager for ""
	I0717 00:24:55.979216   13048 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 00:24:55.979225   13048 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 00:24:55.979246   13048 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.177 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-384227 NodeName:addons-384227 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.177"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.177 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 00:24:55.979393   13048 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.177
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-384227"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.177
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.177"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 00:24:55.979456   13048 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 00:24:55.989837   13048 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 00:24:55.989927   13048 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 00:24:55.999930   13048 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0717 00:24:56.016561   13048 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 00:24:56.033358   13048 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0717 00:24:56.051114   13048 ssh_runner.go:195] Run: grep 192.168.39.177	control-plane.minikube.internal$ /etc/hosts
	I0717 00:24:56.055034   13048 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.177	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
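	(The bash one-liner above rewrites /etc/hosts in place: it filters out any existing control-plane.minikube.internal entry, appends the new "192.168.39.177	control-plane.minikube.internal" mapping to a temp file, and then copies that file back over /etc/hosts with sudo.)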
	I0717 00:24:56.067791   13048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:24:56.174091   13048 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:24:56.190746   13048 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227 for IP: 192.168.39.177
	I0717 00:24:56.190775   13048 certs.go:194] generating shared ca certs ...
	I0717 00:24:56.190795   13048 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:56.190955   13048 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 00:24:56.326933   13048 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt ...
	I0717 00:24:56.326958   13048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt: {Name:mk258a46a5713f26153e605f2d884d6e7ef80003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:56.327105   13048 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key ...
	I0717 00:24:56.327116   13048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key: {Name:mk9083a7e0fe98917431b3190905867364dd8b60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:56.327182   13048 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 00:24:56.473376   13048 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt ...
	I0717 00:24:56.473416   13048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt: {Name:mka28ea6d0f65a1c140504565547138f6126280c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:56.473594   13048 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key ...
	I0717 00:24:56.473606   13048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key: {Name:mkddc4a44c93a52e6572635130020cbccf1d61b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:56.473690   13048 certs.go:256] generating profile certs ...
	I0717 00:24:56.473746   13048 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.key
	I0717 00:24:56.473760   13048 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt with IP's: []
	I0717 00:24:56.660112   13048 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt ...
	I0717 00:24:56.660142   13048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: {Name:mk6b65975ff55efb4753dd731d23404a51ffe89a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:56.660302   13048 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.key ...
	I0717 00:24:56.660314   13048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.key: {Name:mk258e4fb88472f01219677da00429ea5fea7295 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:56.660402   13048 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/apiserver.key.b2f88573
	I0717 00:24:56.660422   13048 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/apiserver.crt.b2f88573 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.177]
	I0717 00:24:56.843116   13048 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/apiserver.crt.b2f88573 ...
	I0717 00:24:56.843153   13048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/apiserver.crt.b2f88573: {Name:mk59264558e76f88ee226559537379da65256757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:56.843329   13048 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/apiserver.key.b2f88573 ...
	I0717 00:24:56.843349   13048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/apiserver.key.b2f88573: {Name:mkb79f9f557ee7bdd6e95f63f8999c69aee180ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:56.843443   13048 certs.go:381] copying /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/apiserver.crt.b2f88573 -> /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/apiserver.crt
	I0717 00:24:56.843528   13048 certs.go:385] copying /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/apiserver.key.b2f88573 -> /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/apiserver.key
	I0717 00:24:56.843594   13048 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/proxy-client.key
	I0717 00:24:56.843620   13048 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/proxy-client.crt with IP's: []
	I0717 00:24:57.081780   13048 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/proxy-client.crt ...
	I0717 00:24:57.081810   13048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/proxy-client.crt: {Name:mkf5b9bb5210d2ce6aac943985403366d774267a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:57.081976   13048 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/proxy-client.key ...
	I0717 00:24:57.081986   13048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/proxy-client.key: {Name:mk525dd28bed580f969ad9baa95ea678f3eb2f38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:57.082138   13048 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 00:24:57.082169   13048 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 00:24:57.082193   13048 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 00:24:57.082219   13048 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 00:24:57.082801   13048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 00:24:57.108369   13048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 00:24:57.133136   13048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 00:24:57.156947   13048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 00:24:57.180048   13048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0717 00:24:57.204784   13048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 00:24:57.228054   13048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 00:24:57.251721   13048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 00:24:57.274628   13048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 00:24:57.297347   13048 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 00:24:57.313941   13048 ssh_runner.go:195] Run: openssl version
	I0717 00:24:57.319886   13048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 00:24:57.330817   13048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:24:57.335303   13048 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:24:57.335345   13048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:24:57.341016   13048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 00:24:57.351908   13048 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 00:24:57.356053   13048 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 00:24:57.356106   13048 kubeadm.go:392] StartCluster: {Name:addons-384227 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 C
lusterName:addons-384227 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:24:57.356184   13048 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 00:24:57.356220   13048 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 00:24:57.392335   13048 cri.go:89] found id: ""
	I0717 00:24:57.392408   13048 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 00:24:57.402614   13048 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 00:24:57.412405   13048 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 00:24:57.422095   13048 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 00:24:57.422116   13048 kubeadm.go:157] found existing configuration files:
	
	I0717 00:24:57.422159   13048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 00:24:57.431446   13048 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 00:24:57.431519   13048 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 00:24:57.441548   13048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 00:24:57.450694   13048 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 00:24:57.450747   13048 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 00:24:57.460084   13048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 00:24:57.469608   13048 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 00:24:57.469664   13048 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 00:24:57.481080   13048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 00:24:57.490303   13048 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 00:24:57.490360   13048 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 00:24:57.500244   13048 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 00:24:57.556577   13048 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 00:24:57.556638   13048 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 00:24:57.701465   13048 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 00:24:57.701628   13048 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 00:24:57.701770   13048 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0717 00:24:57.946856   13048 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 00:24:58.078249   13048 out.go:204]   - Generating certificates and keys ...
	I0717 00:24:58.078372   13048 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 00:24:58.078464   13048 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 00:24:58.078566   13048 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 00:24:58.156168   13048 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 00:24:58.441296   13048 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 00:24:58.557821   13048 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 00:24:58.810280   13048 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 00:24:58.810427   13048 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-384227 localhost] and IPs [192.168.39.177 127.0.0.1 ::1]
	I0717 00:24:59.009271   13048 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 00:24:59.009417   13048 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-384227 localhost] and IPs [192.168.39.177 127.0.0.1 ::1]
	I0717 00:24:59.082328   13048 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 00:24:59.230252   13048 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 00:24:59.332311   13048 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 00:24:59.332849   13048 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 00:24:59.976606   13048 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 00:25:00.196447   13048 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 00:25:00.287327   13048 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 00:25:00.455814   13048 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 00:25:00.541348   13048 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 00:25:00.542004   13048 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 00:25:00.544302   13048 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 00:25:00.546723   13048 out.go:204]   - Booting up control plane ...
	I0717 00:25:00.546804   13048 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 00:25:00.546870   13048 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 00:25:00.546927   13048 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 00:25:00.561864   13048 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 00:25:00.562092   13048 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 00:25:00.562139   13048 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 00:25:00.683809   13048 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 00:25:00.683900   13048 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 00:25:01.185217   13048 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.080076ms
	I0717 00:25:01.185301   13048 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 00:25:06.186102   13048 kubeadm.go:310] [api-check] The API server is healthy after 5.001614333s
	I0717 00:25:06.201565   13048 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 00:25:06.216098   13048 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 00:25:06.240498   13048 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 00:25:06.240662   13048 kubeadm.go:310] [mark-control-plane] Marking the node addons-384227 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 00:25:06.253199   13048 kubeadm.go:310] [bootstrap-token] Using token: 28ri84.7ntcu425oc9olq2s
	I0717 00:25:06.254546   13048 out.go:204]   - Configuring RBAC rules ...
	I0717 00:25:06.254665   13048 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 00:25:06.259866   13048 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 00:25:06.274669   13048 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 00:25:06.279051   13048 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 00:25:06.283669   13048 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 00:25:06.288523   13048 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 00:25:06.594335   13048 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 00:25:07.032027   13048 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 00:25:07.594254   13048 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 00:25:07.595354   13048 kubeadm.go:310] 
	I0717 00:25:07.595428   13048 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 00:25:07.595438   13048 kubeadm.go:310] 
	I0717 00:25:07.595515   13048 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 00:25:07.595524   13048 kubeadm.go:310] 
	I0717 00:25:07.595574   13048 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 00:25:07.595638   13048 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 00:25:07.595709   13048 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 00:25:07.595724   13048 kubeadm.go:310] 
	I0717 00:25:07.595772   13048 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 00:25:07.595782   13048 kubeadm.go:310] 
	I0717 00:25:07.595821   13048 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 00:25:07.595832   13048 kubeadm.go:310] 
	I0717 00:25:07.595875   13048 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 00:25:07.595934   13048 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 00:25:07.596009   13048 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 00:25:07.596020   13048 kubeadm.go:310] 
	I0717 00:25:07.596119   13048 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 00:25:07.596215   13048 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 00:25:07.596222   13048 kubeadm.go:310] 
	I0717 00:25:07.596291   13048 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 28ri84.7ntcu425oc9olq2s \
	I0717 00:25:07.596370   13048 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c106fa53fe39228066a4d6d0a3d1523262a277fcc4b4de2aef480ed92843f134 \
	I0717 00:25:07.596388   13048 kubeadm.go:310] 	--control-plane 
	I0717 00:25:07.596404   13048 kubeadm.go:310] 
	I0717 00:25:07.596509   13048 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 00:25:07.596518   13048 kubeadm.go:310] 
	I0717 00:25:07.596623   13048 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 28ri84.7ntcu425oc9olq2s \
	I0717 00:25:07.596734   13048 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c106fa53fe39228066a4d6d0a3d1523262a277fcc4b4de2aef480ed92843f134 
	I0717 00:25:07.597395   13048 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 00:25:07.597529   13048 cni.go:84] Creating CNI manager for ""
	I0717 00:25:07.597548   13048 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 00:25:07.599503   13048 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 00:25:07.600867   13048 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 00:25:07.611291   13048 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
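	The 496-byte file copied above is the bridge CNI config minikube generates for the crio runtime. This test never inspects it, but after a failure on this profile it could be viewed on the node with commands along these lines; the profile name comes from the log, and the commands themselves are an illustrative sketch rather than anything the test executed:
	
	# Illustrative only: view the bridge CNI config minikube wrote to the node
	minikube -p addons-384227 ssh -- sudo ls /etc/cni/net.d
	minikube -p addons-384227 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist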
	I0717 00:25:07.630143   13048 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 00:25:07.630236   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:07.630277   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-384227 minikube.k8s.io/updated_at=2024_07_17T00_25_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185 minikube.k8s.io/name=addons-384227 minikube.k8s.io/primary=true
	I0717 00:25:07.650778   13048 ops.go:34] apiserver oom_adj: -16
	I0717 00:25:07.768304   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:08.268738   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:08.768568   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:09.269088   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:09.769123   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:10.268728   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:10.768660   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:11.269078   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:11.769225   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:12.268470   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:12.768544   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:13.268891   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:13.769236   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:14.268636   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:14.768980   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:15.268523   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:15.769312   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:16.269217   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:16.768519   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:17.268824   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:17.769201   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:18.269301   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:18.768466   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:19.268404   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:19.768635   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:20.268990   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:20.351678   13048 kubeadm.go:1113] duration metric: took 12.721491008s to wait for elevateKubeSystemPrivileges
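	The burst of "kubectl get sa default" calls above is a readiness poll: minikube retries until the default ServiceAccount exists, which is what the 12.7s "wait for elevateKubeSystemPrivileges" metric measures, after the minikube-rbac ClusterRoleBinding was created at 00:25:07. A rough manual equivalent, reusing the binary and kubeconfig paths shown in the log (illustrative sketch only, not part of the test run):
	
	# Illustrative only: check the default ServiceAccount and the minikube-rbac binding
	sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default
	sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac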
	I0717 00:25:20.351718   13048 kubeadm.go:394] duration metric: took 22.995616848s to StartCluster
	I0717 00:25:20.351739   13048 settings.go:142] acquiring lock: {Name:mk284e98550e98148329d8d8958a44beebf5635c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:25:20.351864   13048 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 00:25:20.352239   13048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:25:20.352409   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 00:25:20.352434   13048 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.177 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:25:20.352492   13048 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0717 00:25:20.352590   13048 addons.go:69] Setting yakd=true in profile "addons-384227"
	I0717 00:25:20.352619   13048 addons.go:234] Setting addon yakd=true in "addons-384227"
	I0717 00:25:20.352621   13048 addons.go:69] Setting inspektor-gadget=true in profile "addons-384227"
	I0717 00:25:20.352630   13048 addons.go:69] Setting gcp-auth=true in profile "addons-384227"
	I0717 00:25:20.352645   13048 addons.go:69] Setting storage-provisioner=true in profile "addons-384227"
	I0717 00:25:20.352658   13048 mustload.go:65] Loading cluster: addons-384227
	I0717 00:25:20.352660   13048 addons.go:234] Setting addon inspektor-gadget=true in "addons-384227"
	I0717 00:25:20.352671   13048 addons.go:234] Setting addon storage-provisioner=true in "addons-384227"
	I0717 00:25:20.352681   13048 addons.go:69] Setting ingress=true in profile "addons-384227"
	I0717 00:25:20.352691   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.352696   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.352631   13048 config.go:182] Loaded profile config "addons-384227": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:25:20.352688   13048 addons.go:69] Setting helm-tiller=true in profile "addons-384227"
	I0717 00:25:20.352709   13048 addons.go:234] Setting addon ingress=true in "addons-384227"
	I0717 00:25:20.352727   13048 addons.go:234] Setting addon helm-tiller=true in "addons-384227"
	I0717 00:25:20.352740   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.352757   13048 addons.go:69] Setting ingress-dns=true in profile "addons-384227"
	I0717 00:25:20.352769   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.352777   13048 addons.go:234] Setting addon ingress-dns=true in "addons-384227"
	I0717 00:25:20.352798   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.352853   13048 config.go:182] Loaded profile config "addons-384227": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:25:20.353113   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.353119   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.353125   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.353131   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.353150   13048 addons.go:69] Setting metrics-server=true in profile "addons-384227"
	I0717 00:25:20.353151   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.353160   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.353159   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.353169   13048 addons.go:234] Setting addon metrics-server=true in "addons-384227"
	I0717 00:25:20.353189   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.353193   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.353203   13048 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-384227"
	I0717 00:25:20.353152   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.353223   13048 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-384227"
	I0717 00:25:20.353228   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.353239   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.353191   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.353247   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.353257   13048 addons.go:69] Setting volcano=true in profile "addons-384227"
	I0717 00:25:20.353277   13048 addons.go:234] Setting addon volcano=true in "addons-384227"
	I0717 00:25:20.353298   13048 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-384227"
	I0717 00:25:20.353314   13048 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-384227"
	I0717 00:25:20.353399   13048 addons.go:69] Setting volumesnapshots=true in profile "addons-384227"
	I0717 00:25:20.353428   13048 addons.go:234] Setting addon volumesnapshots=true in "addons-384227"
	I0717 00:25:20.353447   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.353481   13048 addons.go:69] Setting registry=true in profile "addons-384227"
	I0717 00:25:20.353503   13048 addons.go:234] Setting addon registry=true in "addons-384227"
	I0717 00:25:20.353539   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.353543   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.353563   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.353566   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.353579   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.352696   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.353625   13048 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-384227"
	I0717 00:25:20.353636   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.353640   13048 addons.go:69] Setting default-storageclass=true in profile "addons-384227"
	I0717 00:25:20.353653   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.353659   13048 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-384227"
	I0717 00:25:20.353662   13048 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-384227"
	I0717 00:25:20.353669   13048 addons.go:69] Setting cloud-spanner=true in profile "addons-384227"
	I0717 00:25:20.353684   13048 addons.go:234] Setting addon cloud-spanner=true in "addons-384227"
	I0717 00:25:20.353893   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.353902   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.353917   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.353925   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.353924   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.353903   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.353978   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.354012   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.354046   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.354019   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.354238   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.354256   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.354261   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.354468   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.354485   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.354615   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.354635   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.354833   13048 out.go:177] * Verifying Kubernetes components...
	I0717 00:25:20.365249   13048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:25:20.380791   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36713
	I0717 00:25:20.380948   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45613
	I0717 00:25:20.381021   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36001
	I0717 00:25:20.381085   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33701
	I0717 00:25:20.381428   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.381554   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.382087   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.382112   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.382250   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.382272   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.382336   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.382413   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.382661   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.382838   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.382855   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.382990   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.382990   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.383043   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.383181   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.383243   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.383265   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.383436   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.383568   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.383607   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.384973   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42355
	I0717 00:25:20.385296   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.385621   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.385852   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.386188   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.386218   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.389096   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.389118   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.389351   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.389394   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.389557   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.390053   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.390087   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.407535   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36057
	I0717 00:25:20.408107   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.408707   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.408727   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.409122   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.409677   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.409718   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.412899   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42889
	I0717 00:25:20.413385   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.413953   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.413972   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.414362   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.414925   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.414969   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.415738   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33239
	I0717 00:25:20.416186   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.416640   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.416665   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.417035   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.417639   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.417705   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.419965   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33031
	I0717 00:25:20.420451   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.421057   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.421084   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.421472   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.421692   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.424198   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38537
	I0717 00:25:20.424738   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.425293   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46757
	I0717 00:25:20.425588   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.425606   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.426080   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.426318   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.426570   13048 addons.go:234] Setting addon default-storageclass=true in "addons-384227"
	I0717 00:25:20.426614   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.427922   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.427961   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.429751   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.430954   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.430973   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.431474   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.431803   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.434542   13048 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-384227"
	I0717 00:25:20.434608   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.434985   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.435046   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.436490   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39023
	I0717 00:25:20.436617   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43427
	I0717 00:25:20.437000   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44137
	I0717 00:25:20.437327   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.437407   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.437855   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.437877   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.437965   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.438358   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.438375   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.438422   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.439077   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.439111   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.439428   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.439442   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.439514   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38663
	I0717 00:25:20.439949   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.440383   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43429
	I0717 00:25:20.440480   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.440488   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.440510   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.441149   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.441165   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.441224   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.441288   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45205
	I0717 00:25:20.441727   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.441846   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.441857   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.442055   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.442686   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.443194   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.443255   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.444545   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.445197   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36181
	I0717 00:25:20.445562   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.445858   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.446292   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38545
	I0717 00:25:20.446579   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.446594   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.446668   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.446760   13048 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 00:25:20.446951   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.446973   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.447045   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.447536   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.447578   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.447790   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.447952   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.447971   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.448285   13048 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:25:20.448305   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 00:25:20.448324   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:20.448431   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46075
	I0717 00:25:20.448522   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.448550   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.448770   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.448826   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.448878   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.449193   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.451076   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.452051   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.452748   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.453016   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.453559   13048 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0717 00:25:20.453611   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:20.453632   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.453815   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:20.453959   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:20.454112   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:20.454260   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:20.454501   13048 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0717 00:25:20.454696   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.454712   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.455530   13048 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 00:25:20.455635   13048 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 00:25:20.455650   13048 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 00:25:20.455668   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:20.456158   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.457649   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.457862   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.457907   13048 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 00:25:20.459223   13048 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 00:25:20.459243   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0717 00:25:20.459261   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:20.459594   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.461815   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33925
	I0717 00:25:20.462242   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:20.462264   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.462279   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40679
	I0717 00:25:20.462738   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:20.466083   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.466103   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:20.466108   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.466117   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:20.466135   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.466089   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:20.466273   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:20.466322   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:20.466456   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:20.466512   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:20.466670   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.466683   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:20.466774   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.466788   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.467265   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.467288   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.467657   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.467661   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.467853   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.468274   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.468298   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.470618   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.471095   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37479
	I0717 00:25:20.472654   13048 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0717 00:25:20.474037   13048 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 00:25:20.474048   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.474058   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0717 00:25:20.474078   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:20.475268   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.475291   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.476067   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.476425   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.477839   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40503
	I0717 00:25:20.477895   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.478224   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.478421   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:20.478442   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.478798   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:20.479011   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.479026   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.479089   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:20.479250   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:20.479373   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.479435   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:20.479858   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.480245   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.480314   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39249
	I0717 00:25:20.480803   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.481544   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.481568   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.481924   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.482078   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.482715   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41203
	I0717 00:25:20.483109   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.483336   13048 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0717 00:25:20.483471   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.483590   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.483604   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.483981   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.484116   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.484781   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.484935   13048 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 00:25:20.484955   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0717 00:25:20.484972   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:20.486398   13048 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0717 00:25:20.486463   13048 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0717 00:25:20.487380   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.487670   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:20.487689   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:20.487725   13048 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0717 00:25:20.487745   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0717 00:25:20.487769   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:20.487845   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:20.487859   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:20.487867   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:20.487897   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:20.489048   13048 out.go:177]   - Using image docker.io/registry:2.8.3
	I0717 00:25:20.490114   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:20.490132   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:20.490245   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	W0717 00:25:20.490298   13048 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0717 00:25:20.490304   13048 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0717 00:25:20.490319   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0717 00:25:20.490344   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:20.491475   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.491935   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:20.491973   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.492527   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:20.492907   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.492940   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:20.493725   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:20.493685   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.493895   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:20.494213   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:20.494230   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.494262   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:20.494312   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:20.494325   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.494353   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37447
	I0717 00:25:20.494504   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33719
	I0717 00:25:20.494628   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:20.494789   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.494803   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:20.494837   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.495135   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:20.495187   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:20.495227   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.495241   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.495304   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39575
	I0717 00:25:20.495322   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.495334   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.495558   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:20.495800   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:20.495861   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.495875   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.495919   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:20.496005   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.496299   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.496314   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.496636   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.496657   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.496921   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.497640   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.497667   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.499124   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.499134   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40045
	I0717 00:25:20.499127   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46743
	I0717 00:25:20.499484   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.499634   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.499775   13048 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0717 00:25:20.499794   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.499944   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.499959   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.500128   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.500261   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.500446   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.500599   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.501012   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.501588   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.501614   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.501741   13048 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0717 00:25:20.501765   13048 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0717 00:25:20.501773   13048 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0717 00:25:20.501749   13048 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0717 00:25:20.501865   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:20.501858   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.503361   13048 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0717 00:25:20.503378   13048 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0717 00:25:20.503386   13048 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0717 00:25:20.503396   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:20.504504   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.504888   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:20.504907   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.505070   13048 out.go:177]   - Using image docker.io/busybox:stable
	I0717 00:25:20.505123   13048 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0717 00:25:20.505140   13048 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0717 00:25:20.505155   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:20.505163   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:20.505343   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:20.505489   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:20.505619   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:20.506402   13048 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 00:25:20.506417   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0717 00:25:20.506431   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:20.506925   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.507413   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:20.507447   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.507670   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:20.507858   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:20.508039   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:20.508203   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:20.510071   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.510135   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46339
	I0717 00:25:20.510469   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:20.510511   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.510713   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.510761   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:20.510919   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:20.510953   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.511128   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:20.511241   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:20.511492   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.511502   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.511522   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:20.511547   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.511790   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.511880   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:20.511920   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.511979   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:20.512447   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:20.512587   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:20.513374   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	W0717 00:25:20.513546   13048 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34920->192.168.39.177:22: read: connection reset by peer
	I0717 00:25:20.513574   13048 retry.go:31] will retry after 305.964808ms: ssh: handshake failed: read tcp 192.168.39.1:34920->192.168.39.177:22: read: connection reset by peer
	I0717 00:25:20.515164   13048 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0717 00:25:20.516412   13048 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0717 00:25:20.517491   13048 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0717 00:25:20.518484   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33995
	I0717 00:25:20.518900   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.519273   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.519288   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.519591   13048 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0717 00:25:20.519608   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.519773   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.521338   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.521682   13048 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0717 00:25:20.522945   13048 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0717 00:25:20.523123   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45795
	I0717 00:25:20.523492   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.523957   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.523980   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.524031   13048 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0717 00:25:20.524175   13048 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0717 00:25:20.524187   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0717 00:25:20.524202   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:20.524323   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.524505   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.526344   13048 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0717 00:25:20.527401   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.527449   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.527621   13048 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 00:25:20.527634   13048 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 00:25:20.527648   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:20.527867   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:20.527971   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.528149   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:20.528325   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:20.528516   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:20.528715   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:20.529116   13048 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	W0717 00:25:20.529718   13048 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0717 00:25:20.529742   13048 retry.go:31] will retry after 172.735909ms: ssh: handshake failed: EOF
	I0717 00:25:20.530610   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.530628   13048 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0717 00:25:20.530652   13048 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0717 00:25:20.530669   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:20.531031   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:20.531059   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.531333   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:20.531513   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:20.532319   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:20.532492   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:20.533295   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.533679   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:20.533713   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	W0717 00:25:20.533844   13048 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34936->192.168.39.177:22: read: connection reset by peer
	I0717 00:25:20.533863   13048 retry.go:31] will retry after 352.184484ms: ssh: handshake failed: read tcp 192.168.39.1:34936->192.168.39.177:22: read: connection reset by peer
	I0717 00:25:20.533898   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:20.534085   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:20.534191   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:20.534306   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:20.822902   13048 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 00:25:20.822927   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0717 00:25:20.894827   13048 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0717 00:25:20.894857   13048 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0717 00:25:20.910829   13048 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0717 00:25:20.910849   13048 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0717 00:25:20.934980   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:25:20.956411   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 00:25:20.958647   13048 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:25:20.958864   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 00:25:20.965990   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 00:25:20.975282   13048 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0717 00:25:20.975306   13048 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0717 00:25:20.986021   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 00:25:20.991970   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0717 00:25:21.010172   13048 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 00:25:21.010200   13048 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 00:25:21.038232   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 00:25:21.042110   13048 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0717 00:25:21.042129   13048 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0717 00:25:21.101887   13048 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0717 00:25:21.101910   13048 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0717 00:25:21.112758   13048 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0717 00:25:21.112776   13048 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0717 00:25:21.113860   13048 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0717 00:25:21.113878   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0717 00:25:21.216093   13048 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0717 00:25:21.216120   13048 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0717 00:25:21.245690   13048 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 00:25:21.245711   13048 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 00:25:21.251563   13048 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0717 00:25:21.251583   13048 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0717 00:25:21.343508   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0717 00:25:21.343711   13048 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 00:25:21.343726   13048 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0717 00:25:21.349046   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 00:25:21.389693   13048 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0717 00:25:21.389722   13048 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0717 00:25:21.394478   13048 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0717 00:25:21.394501   13048 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0717 00:25:21.419267   13048 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0717 00:25:21.419292   13048 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0717 00:25:21.422806   13048 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0717 00:25:21.422834   13048 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0717 00:25:21.568449   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 00:25:21.596920   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 00:25:21.707949   13048 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0717 00:25:21.707971   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0717 00:25:21.714315   13048 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0717 00:25:21.714333   13048 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0717 00:25:21.719616   13048 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0717 00:25:21.719638   13048 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0717 00:25:21.735036   13048 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0717 00:25:21.735063   13048 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0717 00:25:21.938630   13048 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 00:25:21.938653   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0717 00:25:21.986398   13048 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0717 00:25:21.986420   13048 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0717 00:25:22.001496   13048 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0717 00:25:22.001514   13048 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0717 00:25:22.025502   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0717 00:25:22.132980   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 00:25:22.313768   13048 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0717 00:25:22.313790   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0717 00:25:22.313975   13048 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0717 00:25:22.313992   13048 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0717 00:25:22.617590   13048 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0717 00:25:22.617628   13048 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0717 00:25:22.859803   13048 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0717 00:25:22.859839   13048 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0717 00:25:22.919923   13048 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0717 00:25:22.919951   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0717 00:25:23.093162   13048 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0717 00:25:23.093190   13048 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0717 00:25:23.184769   13048 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0717 00:25:23.184799   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0717 00:25:23.374589   13048 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 00:25:23.374617   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0717 00:25:23.395197   13048 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 00:25:23.395221   13048 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0717 00:25:23.653547   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 00:25:23.724231   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 00:25:24.783165   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.848150661s)
	I0717 00:25:24.783212   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:24.783224   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:24.783477   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:24.783540   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:24.783554   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:24.783564   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:24.783560   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:24.783825   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:24.783842   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:24.783840   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:27.462197   13048 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0717 00:25:27.462238   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:27.465555   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:27.466058   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:27.466079   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:27.466254   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:27.466493   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:27.466662   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:27.466829   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:27.764532   13048 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0717 00:25:27.869507   13048 addons.go:234] Setting addon gcp-auth=true in "addons-384227"
	I0717 00:25:27.869565   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:27.869954   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:27.869987   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:27.899063   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34837
	I0717 00:25:27.899495   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:27.899964   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:27.899980   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:27.900309   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:27.900917   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:27.900954   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:27.916188   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38763
	I0717 00:25:27.916565   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:27.917017   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:27.917042   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:27.917357   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:27.917535   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:27.919148   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:27.919371   13048 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0717 00:25:27.919398   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:27.922169   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:27.922542   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:27.922579   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:27.922732   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:27.922929   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:27.923107   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:27.923245   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:28.988944   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.032496463s)
	I0717 00:25:28.988972   13048 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.030301382s)
	I0717 00:25:28.989008   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.989021   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.989055   13048 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.030166847s)
	I0717 00:25:28.989082   13048 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0717 00:25:28.989123   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.023101578s)
	I0717 00:25:28.989187   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.989210   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.989232   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.997243468s)
	I0717 00:25:28.989266   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.989283   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.989317   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.989332   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.989352   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.989366   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.989375   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.951120705s)
	I0717 00:25:28.989189   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.003144332s)
	I0717 00:25:28.989389   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.989416   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.989395   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.989459   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.989520   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.64598383s)
	I0717 00:25:28.989539   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.989546   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.989561   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.989377   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.989677   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.640598022s)
	I0717 00:25:28.989696   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.989703   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.989783   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.421307521s)
	I0717 00:25:28.989800   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.989808   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.989854   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.392908157s)
	I0717 00:25:28.989866   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.989873   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.989933   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.96440525s)
	I0717 00:25:28.989945   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.989953   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.989951   13048 node_ready.go:35] waiting up to 6m0s for node "addons-384227" to be "Ready" ...
	I0717 00:25:28.990070   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.990088   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.990090   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.990103   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.990112   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.990086   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.857072963s)
	I0717 00:25:28.990133   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.990145   13048 main.go:141] libmachine: Successfully made call to close driver server
	W0717 00:25:28.990148   13048 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 00:25:28.990155   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.990155   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.990164   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.990164   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.990165   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.336582633s)
	I0717 00:25:28.990172   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.990174   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.990181   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.990185   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.990194   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.990223   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.990112   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.990231   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.990237   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.990239   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.990270   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.990120   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.990298   13048 addons.go:475] Verifying addon ingress=true in "addons-384227"
	I0717 00:25:28.990168   13048 retry.go:31] will retry after 284.00132ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 00:25:28.991208   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.991209   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.991234   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.991245   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.991253   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.991255   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.991260   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.991264   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.991274   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.991282   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.991316   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.991337   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.991341   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.991347   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.991355   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.991358   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.991364   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.991365   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.991372   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.991373   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.991380   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.991414   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.991424   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.991567   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.991588   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.991595   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.993767   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.993774   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.993779   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.993793   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.993814   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.993821   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.993828   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.993835   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.993906   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.993913   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.994613   13048 out.go:177] * Verifying ingress addon...
	I0717 00:25:28.995007   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.995033   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.995040   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.995048   13048 addons.go:475] Verifying addon metrics-server=true in "addons-384227"
	I0717 00:25:28.995086   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.995106   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.995113   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.995119   13048 addons.go:475] Verifying addon registry=true in "addons-384227"
	I0717 00:25:28.995355   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.995383   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.995391   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.995592   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.995596   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.995650   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.995669   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.995685   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.995615   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.995743   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.995654   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.995804   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.995634   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.996488   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.996501   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.996812   13048 out.go:177] * Verifying registry addon...
	I0717 00:25:28.997475   13048 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0717 00:25:28.997861   13048 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-384227 service yakd-dashboard -n yakd-dashboard
	
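	Before running the printed service command, one way to confirm the YAKD pod is actually up is a kubectl wait against its namespace. This is only a sketch: the label selector below is an assumption about how the yakd-dashboard pod is labelled, not something taken from this run.

		# Hypothetical readiness check for the YAKD pod (label selector assumed)
		kubectl --context addons-384227 -n yakd-dashboard wait pod \
		  --selector=app.kubernetes.io/name=yakd-dashboard \
		  --for=condition=ready --timeout=90s
		# Then open the dashboard with the command minikube printed above
		minikube -p addons-384227 service yakd-dashboard -n yakd-dashboard
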
	I0717 00:25:28.999391   13048 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0717 00:25:28.999995   13048 node_ready.go:49] node "addons-384227" has status "Ready":"True"
	I0717 00:25:29.000016   13048 node_ready.go:38] duration metric: took 10.05001ms for node "addons-384227" to be "Ready" ...
	I0717 00:25:29.000028   13048 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
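	The harness now polls the system-critical pods carrying the labels listed above. Roughly the same check can be reproduced by hand with kubectl; the commands below are a manual sketch of an equivalent check, not output from this run.

		# Inspect the same system-critical pods the test waits on
		kubectl --context addons-384227 -n kube-system get pods -l k8s-app=kube-dns
		kubectl --context addons-384227 -n kube-system get pods -l component=kube-apiserver
		# ...repeat for component=etcd, component=kube-controller-manager,
		#    k8s-app=kube-proxy and component=kube-scheduler
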
	I0717 00:25:29.051219   13048 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bpp2w" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:29.051907   13048 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 00:25:29.051932   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:29.052372   13048 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0717 00:25:29.052388   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:29.060454   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:29.060471   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:29.060767   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:29.060784   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	W0717 00:25:29.060862   13048 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
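	The warning above is a conflict while marking the local-path StorageClass as default: the update raced with another write and was rejected, so the addon callback gave up. If the default had to be set by hand afterwards, retrying the same annotation patch would be one option; the command below is a generic kubectl workaround sketch, not something the test executed.

		# Manual workaround sketch: mark local-path as the default StorageClass
		kubectl --context addons-384227 patch storageclass local-path \
		  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
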
	I0717 00:25:29.063035   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:29.063053   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:29.063356   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:29.063382   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:29.063385   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:29.064591   13048 pod_ready.go:92] pod "coredns-7db6d8ff4d-bpp2w" in "kube-system" namespace has status "Ready":"True"
	I0717 00:25:29.064610   13048 pod_ready.go:81] duration metric: took 13.364416ms for pod "coredns-7db6d8ff4d-bpp2w" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:29.064635   13048 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fh4r2" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:29.094696   13048 pod_ready.go:92] pod "coredns-7db6d8ff4d-fh4r2" in "kube-system" namespace has status "Ready":"True"
	I0717 00:25:29.094725   13048 pod_ready.go:81] duration metric: took 30.081212ms for pod "coredns-7db6d8ff4d-fh4r2" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:29.094740   13048 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-384227" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:29.132079   13048 pod_ready.go:92] pod "etcd-addons-384227" in "kube-system" namespace has status "Ready":"True"
	I0717 00:25:29.132100   13048 pod_ready.go:81] duration metric: took 37.35365ms for pod "etcd-addons-384227" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:29.132111   13048 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-384227" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:29.170006   13048 pod_ready.go:92] pod "kube-apiserver-addons-384227" in "kube-system" namespace has status "Ready":"True"
	I0717 00:25:29.170033   13048 pod_ready.go:81] duration metric: took 37.915847ms for pod "kube-apiserver-addons-384227" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:29.170047   13048 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-384227" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:29.276206   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 00:25:29.393627   13048 pod_ready.go:92] pod "kube-controller-manager-addons-384227" in "kube-system" namespace has status "Ready":"True"
	I0717 00:25:29.393658   13048 pod_ready.go:81] duration metric: took 223.602495ms for pod "kube-controller-manager-addons-384227" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:29.393678   13048 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9j492" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:29.494299   13048 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-384227" context rescaled to 1 replicas
	I0717 00:25:29.505855   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:29.515997   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:29.794162   13048 pod_ready.go:92] pod "kube-proxy-9j492" in "kube-system" namespace has status "Ready":"True"
	I0717 00:25:29.794191   13048 pod_ready.go:81] duration metric: took 400.504239ms for pod "kube-proxy-9j492" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:29.794204   13048 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-384227" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:30.041554   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:30.041647   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:30.097596   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.37331077s)
	I0717 00:25:30.097650   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:30.097667   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:30.097686   13048 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.17829151s)
	I0717 00:25:30.098045   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:30.098082   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:30.098090   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:30.098104   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:30.098113   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:30.099783   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:30.099785   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:30.099811   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:30.099821   13048 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-384227"
	I0717 00:25:30.099964   13048 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 00:25:30.100982   13048 out.go:177] * Verifying csi-hostpath-driver addon...
	I0717 00:25:30.102577   13048 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0717 00:25:30.103269   13048 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0717 00:25:30.103968   13048 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0717 00:25:30.103989   13048 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0717 00:25:30.120474   13048 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 00:25:30.120503   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:30.194198   13048 pod_ready.go:92] pod "kube-scheduler-addons-384227" in "kube-system" namespace has status "Ready":"True"
	I0717 00:25:30.194223   13048 pod_ready.go:81] duration metric: took 400.011648ms for pod "kube-scheduler-addons-384227" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:30.194238   13048 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:30.248705   13048 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0717 00:25:30.248732   13048 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0717 00:25:30.385027   13048 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 00:25:30.385053   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0717 00:25:30.456954   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 00:25:30.512041   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:30.512688   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:30.608626   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:31.000932   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:31.004739   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:31.108452   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:31.138568   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.862300475s)
	I0717 00:25:31.138624   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:31.138636   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:31.138908   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:31.138935   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:31.138943   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:31.138949   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:31.138951   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:31.139302   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:31.139315   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:31.502820   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:31.504577   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:31.617772   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:31.803119   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.34612339s)
	I0717 00:25:31.803182   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:31.803200   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:31.803446   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:31.803494   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:31.803508   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:31.803518   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:31.803527   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:31.803743   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:31.803786   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:31.803808   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:31.805691   13048 addons.go:475] Verifying addon gcp-auth=true in "addons-384227"
	I0717 00:25:31.807348   13048 out.go:177] * Verifying gcp-auth addon...
	I0717 00:25:31.809582   13048 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0717 00:25:31.837497   13048 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0717 00:25:31.837516   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:32.008940   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:32.010127   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:32.109228   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:32.200194   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:32.313045   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:32.501784   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:32.505532   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:32.610314   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:32.815001   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:33.003200   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:33.005730   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:33.110205   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:33.313858   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:33.501681   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:33.506188   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:33.608669   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:33.814351   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:34.002783   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:34.004103   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:34.109156   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:34.200719   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:34.313972   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:34.501605   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:34.504187   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:34.608916   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:34.813171   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:35.003147   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:35.005125   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:35.109055   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:35.313021   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:35.501852   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:35.504604   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:35.607790   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:35.814267   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:36.002482   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:36.004306   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:36.118271   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:36.202598   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:36.313740   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:36.501132   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:36.503581   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:36.610151   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:36.813282   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:37.002358   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:37.005136   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:37.109587   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:37.313923   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:37.501185   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:37.503295   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:37.608815   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:37.812925   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:38.001971   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:38.004461   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:38.109466   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:38.312852   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:38.501638   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:38.504052   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:38.608969   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:38.699323   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:38.814471   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:39.002477   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:39.003926   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:39.108410   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:39.313455   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:39.502266   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:39.503860   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:39.608032   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:39.812907   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:40.001962   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:40.004388   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:40.108817   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:40.313038   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:40.504355   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:40.504775   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:40.609689   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:40.700617   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:40.813553   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:41.001160   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:41.003768   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:41.108422   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:41.313382   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:41.502493   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:41.503970   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:41.608348   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:41.814019   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:42.003073   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:42.004978   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:42.108501   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:42.313690   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:42.505409   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:42.505771   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:42.609367   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:42.700801   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:42.812864   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:43.002183   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:43.005524   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:43.109692   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:43.312922   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:43.502686   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:43.505927   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:43.608870   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:43.813326   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:44.002224   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:44.004929   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:44.108226   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:44.314175   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:44.502050   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:44.509161   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:44.608670   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:44.813340   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:45.006168   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:45.006403   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:45.109087   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:45.200394   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:45.313038   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:45.502183   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:45.507604   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:45.609327   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:45.813300   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:46.002802   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:46.004805   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:46.110225   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:46.314186   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:46.501904   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:46.504141   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:46.609168   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:46.812662   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:47.001627   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:47.003747   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:47.108343   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:47.313278   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:47.503622   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:47.505716   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:47.608644   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:47.700182   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:47.813305   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:48.002776   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:48.004477   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:48.108882   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:48.313667   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:48.502012   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:48.505717   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:48.610370   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:48.812942   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:49.002813   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:49.004549   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:49.109276   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:49.313104   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:49.501896   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:49.505601   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:49.608122   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:49.815889   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:50.003886   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:50.004670   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:50.108344   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:50.203941   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:50.313323   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:50.503953   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:50.504298   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:50.608673   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:50.813428   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:51.246246   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:51.247551   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:51.251370   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:51.314173   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:51.501983   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:51.504263   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:51.611129   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:51.813447   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:52.002406   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:52.003514   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:52.114192   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:52.204796   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:52.313062   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:52.501952   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:52.504107   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:52.608634   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:52.813531   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:53.002926   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:53.004322   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:53.109233   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:53.313696   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:53.501053   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:53.503540   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:53.609184   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:53.813071   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:54.004065   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:54.004456   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:54.109025   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:54.312930   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:54.501501   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:54.510252   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:54.608363   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:54.699965   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:54.813010   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:55.002530   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:55.004481   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:55.109074   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:55.313657   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:55.503830   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:55.504802   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:55.608729   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:55.814089   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:56.000988   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:56.003333   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:56.108809   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:56.313153   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:56.502121   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:56.503269   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:56.608630   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:56.700972   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:56.814245   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:57.002815   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:57.004023   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:57.108669   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:57.313417   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:57.507676   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:57.516459   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:57.608760   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:57.813180   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:58.002576   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:58.005051   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:58.109186   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:58.313772   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:58.501578   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:58.505381   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:58.609777   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:58.813306   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:59.002578   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:59.004122   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:59.108793   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:59.200386   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:59.316082   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:59.566538   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:59.571682   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:59.609940   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:59.812589   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:00.003302   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:00.005269   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:00.108698   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:00.312778   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:00.504739   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:00.504969   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:00.608229   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:00.813076   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:01.002087   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:01.004620   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:01.108918   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:01.200590   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:01.313568   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:01.501913   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:01.503393   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:01.608463   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:01.813325   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:02.002271   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:02.003179   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:02.108688   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:02.313094   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:02.502168   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:02.503706   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:02.611320   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:02.814656   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:03.001686   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:03.004270   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:03.110028   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:03.314129   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:03.501689   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:03.508277   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:03.609007   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:03.699021   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:03.812983   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:04.001624   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:04.005136   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:04.108305   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:04.312640   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:04.501493   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:04.503947   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:04.608487   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:04.813900   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:05.001798   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:05.004766   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:05.109265   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:05.313536   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:05.502406   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:05.504062   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:05.609328   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:05.700305   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:05.813665   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:06.480764   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:06.486896   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:06.490689   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:06.494591   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:06.508147   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:06.509693   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:06.608471   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:06.812639   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:07.001791   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:07.004158   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:07.114659   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:07.313811   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:07.505106   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:07.512884   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:07.608489   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:07.702796   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:07.813642   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:08.001820   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:08.003913   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:08.108580   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:08.313596   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:08.501625   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:08.503904   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:08.611496   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:08.812754   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:09.001448   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:09.003855   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:09.108796   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:09.313826   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:09.501418   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:09.503580   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:09.608020   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:09.813189   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:10.002288   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:10.004841   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:10.108276   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:10.200259   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:10.313199   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:10.505186   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:10.505970   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:10.609423   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:10.813692   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:11.002361   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:11.004790   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:11.108849   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:11.313116   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:11.501822   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:11.504561   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:11.608682   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:11.812941   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:12.002246   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:12.003932   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:12.108867   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:12.314111   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:12.505233   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:12.511722   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:12.609768   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:12.700860   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:12.813954   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:13.001339   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:13.006450   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:13.109870   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:13.313355   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:13.502034   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:13.503895   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:13.608455   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:13.813061   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:14.005056   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:14.006006   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:14.110613   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:14.313519   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:14.502406   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:14.504382   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:14.609789   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:14.703060   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:14.812887   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:15.001751   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:15.005890   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:15.110241   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:15.312693   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:15.514250   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:15.514378   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:15.609765   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:15.813230   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:16.002609   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:16.005026   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:16.108621   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:16.313074   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:16.504561   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:16.504803   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:16.609257   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:16.813211   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:17.005116   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:17.010955   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:17.109103   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:17.200017   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:17.317342   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:17.502423   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:17.503570   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:17.609098   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:17.813148   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:18.002479   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:18.005477   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:18.109412   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:18.312802   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:18.501321   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:18.503526   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:18.609593   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:18.813762   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:19.001731   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:19.006460   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:19.108496   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:19.200195   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:19.313286   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:19.504327   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:19.512233   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:19.608880   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:19.813038   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:20.001531   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:20.003968   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:20.108049   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:20.313795   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:20.501639   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:20.504038   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:20.608844   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:20.813978   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:21.003177   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:21.007310   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:21.109491   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:21.201027   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:21.313241   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:21.502791   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:21.504941   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:21.608445   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:21.813033   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:22.002297   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:22.008912   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:22.108244   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:22.314177   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:22.504970   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:22.508032   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:22.608580   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:22.812834   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:23.002241   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:23.004208   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:23.110541   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:23.313133   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:23.503336   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:23.505160   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:23.608780   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:23.700378   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:23.813897   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:24.002060   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:24.004192   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:24.109342   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:24.313016   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:24.501987   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:24.510648   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:24.608375   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:24.813955   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:25.002900   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:25.004620   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:25.110513   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:25.313393   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:25.502449   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:25.504918   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:25.608423   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:25.702362   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:25.813211   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:26.312430   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:26.315351   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:26.316419   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:26.316835   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:26.501528   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:26.506065   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:26.610536   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:26.814100   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:27.002289   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:27.005516   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:27.108714   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:27.313066   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:27.503349   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:27.506698   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:27.608376   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:27.813311   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:28.003591   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:28.005005   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:28.109461   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:28.200700   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:28.312809   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:28.501517   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:28.503584   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:28.611086   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:28.813085   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:29.001972   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:29.004434   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:29.109443   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:29.314498   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:29.503892   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:29.507133   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:29.608769   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:29.813530   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:30.001367   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:30.003853   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:30.108094   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:30.314044   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:30.501645   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:30.504394   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:30.609803   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:30.700713   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:30.813677   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:31.001434   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:31.003656   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:31.108075   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:31.313641   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:31.506589   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:31.506914   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:31.609311   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:31.813389   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:32.002378   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:32.003890   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:32.108625   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:32.313924   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:32.501602   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:32.505921   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:32.608833   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:32.813711   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:33.001662   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:33.005985   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:33.109075   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:33.201293   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:33.313974   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:33.503107   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:33.506545   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:33.611201   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:33.813450   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:34.007509   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:34.007567   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:34.109199   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:34.313334   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:34.504341   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:34.505228   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:34.609041   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:34.813333   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:35.002207   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:35.003818   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:35.109058   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:35.313340   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:35.502668   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:35.505712   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:35.612366   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:35.700516   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:35.813722   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:36.006104   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:36.007338   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:36.108608   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:36.313682   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:36.503737   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:36.503850   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:36.609017   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:36.813528   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:37.004095   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:37.004142   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:37.111519   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:37.313188   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:37.513000   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:37.513195   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:37.608196   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:37.813218   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:38.001803   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:38.004210   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:38.108764   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:38.201229   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:38.312844   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:38.502215   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:38.504315   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:38.609199   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:38.814223   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:39.002403   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:39.003946   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:39.108234   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:39.312839   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:39.503172   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:39.504571   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:39.608575   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:39.813350   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:40.002270   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:40.004322   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:40.110451   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:40.314078   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:40.501839   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:40.505320   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:40.608868   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:40.700711   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:40.813822   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:41.001638   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:41.003695   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:41.108403   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:41.313458   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:41.502477   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:41.504193   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:41.618244   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:41.814097   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:42.004508   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:42.008093   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:42.108500   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:42.313485   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:42.510729   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:42.511417   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:42.608634   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:42.813661   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:43.001650   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:43.004172   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:43.109484   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:43.203787   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:43.314198   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:43.502078   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:43.506303   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:43.612456   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:43.814437   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:44.004044   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:44.008262   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:44.108736   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:44.314290   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:44.506631   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:44.507571   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:44.612587   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:44.813174   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:45.002588   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:45.004175   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:45.108719   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:45.317832   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:45.506025   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:45.511194   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:45.609231   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:45.701919   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:45.813213   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:46.002084   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:46.005373   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:46.108872   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:46.313626   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:46.502121   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:46.505180   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:46.609724   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:46.813536   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:47.003133   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:47.005597   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:47.109517   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:47.314320   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:47.501494   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:47.506460   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:47.608722   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:47.813034   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:48.002197   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:48.004469   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:48.109289   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:48.199668   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:48.314220   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:48.502934   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:48.508397   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:48.609133   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:48.813751   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:49.001514   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:49.003849   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:49.110866   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:49.313711   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:49.503817   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:49.505285   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:49.608294   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:49.813433   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:50.002848   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:50.005698   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:50.109218   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:50.209287   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:50.314096   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:50.506660   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:50.510577   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:50.610505   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:50.813712   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:51.001467   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:51.004258   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:51.108516   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:51.313136   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:51.501903   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:51.506779   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:51.934186   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:51.935173   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:52.003729   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:52.004824   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:52.108498   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:52.313415   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:52.504145   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:52.507819   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:52.608318   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:52.702044   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:52.813166   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:53.005680   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:53.008381   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:53.108800   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:53.317041   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:53.502997   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:53.509693   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:53.610165   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:53.813641   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:54.360658   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:54.369252   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:54.369475   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:54.369876   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:54.503317   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:54.503834   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:54.608287   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:54.813121   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:55.002794   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:55.004351   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:55.108678   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:55.201054   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:55.313060   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:55.507217   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:55.508311   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:55.609076   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:55.812854   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:56.001385   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:56.004093   13048 kapi.go:107] duration metric: took 1m27.004700692s to wait for kubernetes.io/minikube-addons=registry ...
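The kapi.go:107 line above marks the end of one of these polling loops: minikube has been listing pods by label selector (registry, ingress-nginx, csi-hostpath-driver, gcp-auth) roughly twice per second and logging kapi.go:96 until every matching pod reports Ready, and the registry selector finished here after 1m27s. Below is a minimal sketch of an equivalent label-based wait using client-go; it is not minikube's kapi.go implementation, and the kubeconfig path, namespace, poll interval, and timeout are illustrative assumptions only.

// Hedged sketch (not minikube's code): poll pods matching a label selector
// until every matching pod reports the Ready condition, mirroring what the
// "kapi.go:96 waiting for pod ..." lines above are doing.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel lists pods matching selector in ns until all are Ready or the timeout expires.
func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 && allReady(pods.Items) {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval; the log above shows a similar cadence
	}
	return fmt.Errorf("timed out waiting for pods with label %q", selector)
}

// allReady reports whether every pod carries the PodReady=True condition.
func allReady(pods []corev1.Pod) bool {
	for _, p := range pods {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false
		}
	}
	return true
}

func main() {
	// Assumed kubeconfig location; the test run above talks to the minikube profile's cluster instead.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Namespace and timeout are assumptions; the registry addon pods run in kube-system by default.
	err = waitForLabel(context.Background(), cs, "kube-system",
		"kubernetes.io/minikube-addons=registry", 6*time.Minute)
	fmt.Println("wait result:", err)
}

The remaining selectors (ingress-nginx, csi-hostpath-driver, gcp-auth) continue to be polled in the log below until each completes or its own timeout is reached.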
	I0717 00:26:56.109635   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:56.313446   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:56.503735   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:56.608943   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:56.813734   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:57.001220   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:57.109659   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:57.313425   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:57.502838   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:57.607795   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:57.701276   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:57.813835   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:58.001869   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:58.109519   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:58.312454   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:58.503846   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:58.609273   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:58.813351   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:59.004392   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:59.108498   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:59.313719   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:59.502022   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:59.619427   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:59.706189   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:59.813423   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:00.002409   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:00.111164   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:00.313425   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:00.839715   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:00.845379   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:00.845940   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:01.001909   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:01.110484   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:01.313210   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:01.501750   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:01.610055   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:01.816208   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:02.001293   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:02.109558   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:02.204455   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:27:02.313740   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:02.503414   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:02.608734   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:02.815104   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:03.001898   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:03.108680   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:03.313715   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:03.502106   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:03.608460   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:03.813452   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:04.002128   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:04.108270   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:04.314837   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:04.501369   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:04.608779   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:04.701921   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:27:04.812908   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:05.001788   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:05.109219   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:05.313418   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:05.560811   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:05.611969   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:05.812872   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:06.005048   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:06.113797   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:06.317145   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:06.504362   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:06.610069   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:06.813145   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:07.001810   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:07.108641   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:07.199796   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:27:07.312999   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:07.502515   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:07.613598   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:07.813266   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:08.006976   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:08.109557   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:08.313266   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:08.506313   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:08.608517   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:08.814871   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:09.001842   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:09.109239   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:09.202464   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:27:09.313358   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:09.502477   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:09.609588   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:09.813059   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:10.001968   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:10.108720   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:10.314103   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:10.504059   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:10.615646   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:10.813464   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:11.242433   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:11.246207   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:11.247362   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:27:11.313429   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:11.502794   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:11.613386   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:11.813804   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:12.001733   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:12.108317   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:12.316827   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:12.507232   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:12.612314   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:12.813470   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:13.002794   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:13.109616   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:13.313114   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:13.507474   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:13.609350   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:13.700103   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:27:13.813567   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:14.003135   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:14.108966   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:14.312921   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:14.502136   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:14.609119   13048 kapi.go:107] duration metric: took 1m44.505846689s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0717 00:27:14.813517   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:15.003127   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:15.314726   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:15.505452   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:15.700183   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:27:15.813841   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:16.001311   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:16.313305   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:16.503273   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:16.702912   13048 pod_ready.go:92] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"True"
	I0717 00:27:16.702935   13048 pod_ready.go:81] duration metric: took 1m46.508690063s for pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace to be "Ready" ...
	I0717 00:27:16.702945   13048 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-v6tmh" in "kube-system" namespace to be "Ready" ...
	I0717 00:27:16.718913   13048 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-v6tmh" in "kube-system" namespace has status "Ready":"True"
	I0717 00:27:16.718954   13048 pod_ready.go:81] duration metric: took 16.001721ms for pod "nvidia-device-plugin-daemonset-v6tmh" in "kube-system" namespace to be "Ready" ...
	I0717 00:27:16.718982   13048 pod_ready.go:38] duration metric: took 1m47.718938458s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 00:27:16.719001   13048 api_server.go:52] waiting for apiserver process to appear ...
	I0717 00:27:16.719034   13048 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 00:27:16.719095   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 00:27:16.808144   13048 cri.go:89] found id: "da60884c96d8871a1838a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7"
	I0717 00:27:16.808170   13048 cri.go:89] found id: ""
	I0717 00:27:16.808178   13048 logs.go:276] 1 containers: [da60884c96d8871a1838a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7]
	I0717 00:27:16.808233   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:16.812888   13048 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 00:27:16.812940   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 00:27:16.817065   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:16.865201   13048 cri.go:89] found id: "b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791"
	I0717 00:27:16.865223   13048 cri.go:89] found id: ""
	I0717 00:27:16.865231   13048 logs.go:276] 1 containers: [b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791]
	I0717 00:27:16.865274   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:16.869768   13048 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 00:27:16.869818   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 00:27:16.911800   13048 cri.go:89] found id: "0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e"
	I0717 00:27:16.911819   13048 cri.go:89] found id: ""
	I0717 00:27:16.911825   13048 logs.go:276] 1 containers: [0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e]
	I0717 00:27:16.911865   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:16.915970   13048 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 00:27:16.916029   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 00:27:16.982747   13048 cri.go:89] found id: "69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3"
	I0717 00:27:16.982771   13048 cri.go:89] found id: ""
	I0717 00:27:16.982780   13048 logs.go:276] 1 containers: [69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3]
	I0717 00:27:16.982828   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:16.987111   13048 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 00:27:16.987172   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 00:27:17.001798   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:17.035343   13048 cri.go:89] found id: "a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf"
	I0717 00:27:17.035367   13048 cri.go:89] found id: ""
	I0717 00:27:17.035376   13048 logs.go:276] 1 containers: [a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf]
	I0717 00:27:17.035420   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:17.049403   13048 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 00:27:17.049469   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 00:27:17.093275   13048 cri.go:89] found id: "229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f"
	I0717 00:27:17.093296   13048 cri.go:89] found id: ""
	I0717 00:27:17.093305   13048 logs.go:276] 1 containers: [229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f]
	I0717 00:27:17.093361   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:17.097446   13048 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 00:27:17.097498   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 00:27:17.143956   13048 cri.go:89] found id: ""
	I0717 00:27:17.143982   13048 logs.go:276] 0 containers: []
	W0717 00:27:17.143996   13048 logs.go:278] No container was found matching "kindnet"
	I0717 00:27:17.144004   13048 logs.go:123] Gathering logs for kube-scheduler [69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3] ...
	I0717 00:27:17.144017   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3"
	I0717 00:27:17.189135   13048 logs.go:123] Gathering logs for kube-controller-manager [229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f] ...
	I0717 00:27:17.189162   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f"
	I0717 00:27:17.246761   13048 logs.go:123] Gathering logs for container status ...
	I0717 00:27:17.246799   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 00:27:17.307150   13048 logs.go:123] Gathering logs for kubelet ...
	I0717 00:27:17.307178   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 00:27:17.313502   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:17.389166   13048 logs.go:123] Gathering logs for dmesg ...
	I0717 00:27:17.389198   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 00:27:17.404438   13048 logs.go:123] Gathering logs for describe nodes ...
	I0717 00:27:17.404463   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 00:27:17.504053   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:17.538326   13048 logs.go:123] Gathering logs for kube-apiserver [da60884c96d8871a1838a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7] ...
	I0717 00:27:17.538352   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da60884c96d8871a1838a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7"
	I0717 00:27:17.587390   13048 logs.go:123] Gathering logs for coredns [0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e] ...
	I0717 00:27:17.587415   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e"
	I0717 00:27:17.624869   13048 logs.go:123] Gathering logs for etcd [b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791] ...
	I0717 00:27:17.624899   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791"
	I0717 00:27:17.684476   13048 logs.go:123] Gathering logs for kube-proxy [a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf] ...
	I0717 00:27:17.684511   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf"
	I0717 00:27:17.724136   13048 logs.go:123] Gathering logs for CRI-O ...
	I0717 00:27:17.724165   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 00:27:17.814045   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:18.002187   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:18.314741   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:18.507329   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:18.814205   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:19.002459   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:19.313337   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:19.502194   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:19.813731   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:20.001740   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:20.313604   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:20.501412   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:20.728241   13048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:27:20.748426   13048 api_server.go:72] duration metric: took 2m0.395955489s to wait for apiserver process to appear ...
	I0717 00:27:20.748452   13048 api_server.go:88] waiting for apiserver healthz status ...
	I0717 00:27:20.748478   13048 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 00:27:20.748525   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 00:27:20.790496   13048 cri.go:89] found id: "da60884c96d8871a1838a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7"
	I0717 00:27:20.790519   13048 cri.go:89] found id: ""
	I0717 00:27:20.790526   13048 logs.go:276] 1 containers: [da60884c96d8871a1838a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7]
	I0717 00:27:20.790590   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:20.796402   13048 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 00:27:20.796469   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 00:27:20.813630   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:20.841409   13048 cri.go:89] found id: "b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791"
	I0717 00:27:20.841436   13048 cri.go:89] found id: ""
	I0717 00:27:20.841445   13048 logs.go:276] 1 containers: [b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791]
	I0717 00:27:20.841498   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:20.845709   13048 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 00:27:20.845760   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 00:27:20.895540   13048 cri.go:89] found id: "0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e"
	I0717 00:27:20.895569   13048 cri.go:89] found id: ""
	I0717 00:27:20.895578   13048 logs.go:276] 1 containers: [0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e]
	I0717 00:27:20.895632   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:20.899816   13048 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 00:27:20.899881   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 00:27:20.941300   13048 cri.go:89] found id: "69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3"
	I0717 00:27:20.941328   13048 cri.go:89] found id: ""
	I0717 00:27:20.941336   13048 logs.go:276] 1 containers: [69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3]
	I0717 00:27:20.941386   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:20.947312   13048 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 00:27:20.947361   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 00:27:20.989117   13048 cri.go:89] found id: "a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf"
	I0717 00:27:20.989136   13048 cri.go:89] found id: ""
	I0717 00:27:20.989143   13048 logs.go:276] 1 containers: [a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf]
	I0717 00:27:20.989192   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:20.993916   13048 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 00:27:20.993969   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 00:27:21.003383   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:21.032585   13048 cri.go:89] found id: "229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f"
	I0717 00:27:21.032604   13048 cri.go:89] found id: ""
	I0717 00:27:21.032611   13048 logs.go:276] 1 containers: [229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f]
	I0717 00:27:21.032665   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:21.036603   13048 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 00:27:21.036673   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 00:27:21.088582   13048 cri.go:89] found id: ""
	I0717 00:27:21.088608   13048 logs.go:276] 0 containers: []
	W0717 00:27:21.088616   13048 logs.go:278] No container was found matching "kindnet"
	I0717 00:27:21.088624   13048 logs.go:123] Gathering logs for dmesg ...
	I0717 00:27:21.088635   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 00:27:21.104040   13048 logs.go:123] Gathering logs for describe nodes ...
	I0717 00:27:21.104072   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 00:27:21.227134   13048 logs.go:123] Gathering logs for etcd [b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791] ...
	I0717 00:27:21.227159   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791"
	I0717 00:27:21.282641   13048 logs.go:123] Gathering logs for kube-controller-manager [229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f] ...
	I0717 00:27:21.282672   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f"
	I0717 00:27:21.315375   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:21.363085   13048 logs.go:123] Gathering logs for CRI-O ...
	I0717 00:27:21.363120   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 00:27:21.502082   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:21.814045   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:22.003770   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:22.130168   13048 logs.go:123] Gathering logs for container status ...
	I0717 00:27:22.130205   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 00:27:22.181487   13048 logs.go:123] Gathering logs for kubelet ...
	I0717 00:27:22.181514   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 00:27:22.268749   13048 logs.go:123] Gathering logs for kube-apiserver [da60884c96d8871a1838a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7] ...
	I0717 00:27:22.268798   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da60884c96d8871a1838a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7"
	I0717 00:27:22.314131   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:22.318512   13048 logs.go:123] Gathering logs for coredns [0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e] ...
	I0717 00:27:22.318557   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e"
	I0717 00:27:22.360534   13048 logs.go:123] Gathering logs for kube-scheduler [69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3] ...
	I0717 00:27:22.360562   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3"
	I0717 00:27:22.409369   13048 logs.go:123] Gathering logs for kube-proxy [a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf] ...
	I0717 00:27:22.409404   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf"
	I0717 00:27:22.503731   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:22.813904   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:23.003350   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:23.314169   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:23.503278   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:23.813397   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:24.003358   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:24.313632   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:24.503073   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:24.812756   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:24.946681   13048 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I0717 00:27:24.951380   13048 api_server.go:279] https://192.168.39.177:8443/healthz returned 200:
	ok
	I0717 00:27:24.952227   13048 api_server.go:141] control plane version: v1.30.2
	I0717 00:27:24.952248   13048 api_server.go:131] duration metric: took 4.203791958s to wait for apiserver health ...
	I0717 00:27:24.952255   13048 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 00:27:24.952274   13048 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 00:27:24.952314   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 00:27:24.990327   13048 cri.go:89] found id: "da60884c96d8871a1838a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7"
	I0717 00:27:24.990347   13048 cri.go:89] found id: ""
	I0717 00:27:24.990356   13048 logs.go:276] 1 containers: [da60884c96d8871a1838a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7]
	I0717 00:27:24.990415   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:24.994421   13048 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 00:27:24.994477   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 00:27:25.005545   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:25.051652   13048 cri.go:89] found id: "b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791"
	I0717 00:27:25.051685   13048 cri.go:89] found id: ""
	I0717 00:27:25.051695   13048 logs.go:276] 1 containers: [b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791]
	I0717 00:27:25.051752   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:25.056135   13048 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 00:27:25.056198   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 00:27:25.104485   13048 cri.go:89] found id: "0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e"
	I0717 00:27:25.104512   13048 cri.go:89] found id: ""
	I0717 00:27:25.104534   13048 logs.go:276] 1 containers: [0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e]
	I0717 00:27:25.104590   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:25.108935   13048 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 00:27:25.109007   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 00:27:25.154746   13048 cri.go:89] found id: "69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3"
	I0717 00:27:25.154766   13048 cri.go:89] found id: ""
	I0717 00:27:25.154775   13048 logs.go:276] 1 containers: [69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3]
	I0717 00:27:25.154829   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:25.159320   13048 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 00:27:25.159370   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 00:27:25.198181   13048 cri.go:89] found id: "a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf"
	I0717 00:27:25.198210   13048 cri.go:89] found id: ""
	I0717 00:27:25.198218   13048 logs.go:276] 1 containers: [a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf]
	I0717 00:27:25.198266   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:25.202773   13048 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 00:27:25.202840   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 00:27:25.254005   13048 cri.go:89] found id: "229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f"
	I0717 00:27:25.254030   13048 cri.go:89] found id: ""
	I0717 00:27:25.254039   13048 logs.go:276] 1 containers: [229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f]
	I0717 00:27:25.254095   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:25.258436   13048 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 00:27:25.258496   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 00:27:25.297168   13048 cri.go:89] found id: ""
	I0717 00:27:25.297195   13048 logs.go:276] 0 containers: []
	W0717 00:27:25.297203   13048 logs.go:278] No container was found matching "kindnet"
	I0717 00:27:25.297211   13048 logs.go:123] Gathering logs for kubelet ...
	I0717 00:27:25.297221   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 00:27:25.313798   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:25.382471   13048 logs.go:123] Gathering logs for kube-apiserver [da60884c96d8871a1838a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7] ...
	I0717 00:27:25.382506   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da60884c96d8871a1838a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7"
	I0717 00:27:25.433245   13048 logs.go:123] Gathering logs for kube-scheduler [69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3] ...
	I0717 00:27:25.433274   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3"
	I0717 00:27:25.490753   13048 logs.go:123] Gathering logs for container status ...
	I0717 00:27:25.490786   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 00:27:25.503230   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:25.549680   13048 logs.go:123] Gathering logs for dmesg ...
	I0717 00:27:25.549709   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 00:27:25.564895   13048 logs.go:123] Gathering logs for describe nodes ...
	I0717 00:27:25.564926   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 00:27:25.681789   13048 logs.go:123] Gathering logs for etcd [b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791] ...
	I0717 00:27:25.681834   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791"
	I0717 00:27:25.738851   13048 logs.go:123] Gathering logs for coredns [0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e] ...
	I0717 00:27:25.738888   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e"
	I0717 00:27:25.781125   13048 logs.go:123] Gathering logs for kube-proxy [a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf] ...
	I0717 00:27:25.781159   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf"
	I0717 00:27:25.813835   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:25.823555   13048 logs.go:123] Gathering logs for kube-controller-manager [229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f] ...
	I0717 00:27:25.823577   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f"
	I0717 00:27:25.890737   13048 logs.go:123] Gathering logs for CRI-O ...
	I0717 00:27:25.890770   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 00:27:26.003708   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:26.313918   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:26.510458   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:26.812914   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:27.001918   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:27.313800   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:27.502420   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:27.813354   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:28.002092   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:28.313131   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:28.502420   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:28.813383   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:29.002699   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:29.252761   13048 system_pods.go:59] 18 kube-system pods found
	I0717 00:27:29.252791   13048 system_pods.go:61] "coredns-7db6d8ff4d-bpp2w" [0d4f8b36-6961-478d-bbe7-5aded14a13ea] Running
	I0717 00:27:29.252795   13048 system_pods.go:61] "csi-hostpath-attacher-0" [8956758d-a3be-46a1-82c1-768f90c29424] Running
	I0717 00:27:29.252798   13048 system_pods.go:61] "csi-hostpath-resizer-0" [b89df6f6-48a1-4afd-b393-33932498e6e7] Running
	I0717 00:27:29.252801   13048 system_pods.go:61] "csi-hostpathplugin-96mlp" [8f0f8500-9872-4d20-9442-c719eae3b46b] Running
	I0717 00:27:29.252806   13048 system_pods.go:61] "etcd-addons-384227" [7803d027-ae67-4808-90d9-34d25a1f869b] Running
	I0717 00:27:29.252809   13048 system_pods.go:61] "kube-apiserver-addons-384227" [c8f18b31-b600-4d33-a43d-bf96e700fbda] Running
	I0717 00:27:29.252812   13048 system_pods.go:61] "kube-controller-manager-addons-384227" [f13bcf7c-6e34-4d97-97a6-90958791cb01] Running
	I0717 00:27:29.252816   13048 system_pods.go:61] "kube-ingress-dns-minikube" [959e53f2-7e3f-452f-b7ce-9f9134926b56] Running
	I0717 00:27:29.252818   13048 system_pods.go:61] "kube-proxy-9j492" [74949344-2223-4f8d-bc35-737de5d7f6e9] Running
	I0717 00:27:29.252821   13048 system_pods.go:61] "kube-scheduler-addons-384227" [13d1c064-225b-41db-bbbf-8e140311aaf0] Running
	I0717 00:27:29.252825   13048 system_pods.go:61] "metrics-server-c59844bb4-ptnnk" [3c732a54-ac1f-4d2b-8090-29a97aac2ca5] Running
	I0717 00:27:29.252828   13048 system_pods.go:61] "nvidia-device-plugin-daemonset-v6tmh" [cbb5bf86-4332-4b45-b6cf-4c77245158ed] Running
	I0717 00:27:29.252830   13048 system_pods.go:61] "registry-proxy-n2f8j" [b4af5a32-5f55-4f42-8506-d84f33c037ee] Running
	I0717 00:27:29.252833   13048 system_pods.go:61] "registry-wjhgl" [3387114c-1fe0-4740-98da-750978da9284] Running
	I0717 00:27:29.252835   13048 system_pods.go:61] "snapshot-controller-745499f584-d8fzs" [789ce441-6886-4b58-a02d-299ab7eb6f17] Running
	I0717 00:27:29.252838   13048 system_pods.go:61] "snapshot-controller-745499f584-hz4l5" [d27abf24-4a54-4c80-a3ea-04e54e66e0cb] Running
	I0717 00:27:29.252840   13048 system_pods.go:61] "storage-provisioner" [076c6e29-09df-469d-ae38-fe3a33503a57] Running
	I0717 00:27:29.252843   13048 system_pods.go:61] "tiller-deploy-6677d64bcd-h842v" [39eb0880-886d-42e4-b134-ac0f48c445e8] Running
	I0717 00:27:29.252848   13048 system_pods.go:74] duration metric: took 4.300588879s to wait for pod list to return data ...
	I0717 00:27:29.252854   13048 default_sa.go:34] waiting for default service account to be created ...
	I0717 00:27:29.255249   13048 default_sa.go:45] found service account: "default"
	I0717 00:27:29.255267   13048 default_sa.go:55] duration metric: took 2.407731ms for default service account to be created ...
	I0717 00:27:29.255275   13048 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 00:27:29.262808   13048 system_pods.go:86] 18 kube-system pods found
	I0717 00:27:29.262829   13048 system_pods.go:89] "coredns-7db6d8ff4d-bpp2w" [0d4f8b36-6961-478d-bbe7-5aded14a13ea] Running
	I0717 00:27:29.262834   13048 system_pods.go:89] "csi-hostpath-attacher-0" [8956758d-a3be-46a1-82c1-768f90c29424] Running
	I0717 00:27:29.262839   13048 system_pods.go:89] "csi-hostpath-resizer-0" [b89df6f6-48a1-4afd-b393-33932498e6e7] Running
	I0717 00:27:29.262842   13048 system_pods.go:89] "csi-hostpathplugin-96mlp" [8f0f8500-9872-4d20-9442-c719eae3b46b] Running
	I0717 00:27:29.262846   13048 system_pods.go:89] "etcd-addons-384227" [7803d027-ae67-4808-90d9-34d25a1f869b] Running
	I0717 00:27:29.262850   13048 system_pods.go:89] "kube-apiserver-addons-384227" [c8f18b31-b600-4d33-a43d-bf96e700fbda] Running
	I0717 00:27:29.262854   13048 system_pods.go:89] "kube-controller-manager-addons-384227" [f13bcf7c-6e34-4d97-97a6-90958791cb01] Running
	I0717 00:27:29.262858   13048 system_pods.go:89] "kube-ingress-dns-minikube" [959e53f2-7e3f-452f-b7ce-9f9134926b56] Running
	I0717 00:27:29.262862   13048 system_pods.go:89] "kube-proxy-9j492" [74949344-2223-4f8d-bc35-737de5d7f6e9] Running
	I0717 00:27:29.262865   13048 system_pods.go:89] "kube-scheduler-addons-384227" [13d1c064-225b-41db-bbbf-8e140311aaf0] Running
	I0717 00:27:29.262869   13048 system_pods.go:89] "metrics-server-c59844bb4-ptnnk" [3c732a54-ac1f-4d2b-8090-29a97aac2ca5] Running
	I0717 00:27:29.262875   13048 system_pods.go:89] "nvidia-device-plugin-daemonset-v6tmh" [cbb5bf86-4332-4b45-b6cf-4c77245158ed] Running
	I0717 00:27:29.262881   13048 system_pods.go:89] "registry-proxy-n2f8j" [b4af5a32-5f55-4f42-8506-d84f33c037ee] Running
	I0717 00:27:29.262886   13048 system_pods.go:89] "registry-wjhgl" [3387114c-1fe0-4740-98da-750978da9284] Running
	I0717 00:27:29.262890   13048 system_pods.go:89] "snapshot-controller-745499f584-d8fzs" [789ce441-6886-4b58-a02d-299ab7eb6f17] Running
	I0717 00:27:29.262894   13048 system_pods.go:89] "snapshot-controller-745499f584-hz4l5" [d27abf24-4a54-4c80-a3ea-04e54e66e0cb] Running
	I0717 00:27:29.262902   13048 system_pods.go:89] "storage-provisioner" [076c6e29-09df-469d-ae38-fe3a33503a57] Running
	I0717 00:27:29.262908   13048 system_pods.go:89] "tiller-deploy-6677d64bcd-h842v" [39eb0880-886d-42e4-b134-ac0f48c445e8] Running
	I0717 00:27:29.262914   13048 system_pods.go:126] duration metric: took 7.633601ms to wait for k8s-apps to be running ...
	I0717 00:27:29.262920   13048 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 00:27:29.262960   13048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:27:29.279262   13048 system_svc.go:56] duration metric: took 16.334722ms WaitForService to wait for kubelet
	I0717 00:27:29.279291   13048 kubeadm.go:582] duration metric: took 2m8.926823077s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:27:29.279319   13048 node_conditions.go:102] verifying NodePressure condition ...
	I0717 00:27:29.281909   13048 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 00:27:29.281934   13048 node_conditions.go:123] node cpu capacity is 2
	I0717 00:27:29.281946   13048 node_conditions.go:105] duration metric: took 2.621134ms to run NodePressure ...
	I0717 00:27:29.281956   13048 start.go:241] waiting for startup goroutines ...
	I0717 00:27:29.313504   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:29.504173   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:29.813157   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:30.002741   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:30.313584   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:30.502018   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:30.813135   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:31.002225   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:31.314033   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:31.501751   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:31.813776   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:32.007705   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:32.313327   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:32.505930   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:32.813806   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:33.004586   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:33.314080   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:33.502213   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:33.813846   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:34.002922   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:34.313186   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:34.505180   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:34.813708   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:35.002391   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:35.313037   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:35.501771   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:35.813780   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:36.002854   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:36.314078   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:36.502236   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:36.813509   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:37.002316   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:37.313338   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:37.502567   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:37.813895   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:38.002008   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:38.313618   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:38.501611   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:38.813649   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:39.002265   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:39.312364   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:39.505028   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:39.813558   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:40.002588   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:40.312831   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:40.503876   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:40.813732   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:41.001811   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:41.313941   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:41.503554   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:41.813677   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:42.002360   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:42.313296   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:42.504186   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:42.812955   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:43.002374   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:43.314242   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:43.503604   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:43.813361   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:44.002683   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:44.313042   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:44.508105   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:44.812922   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:45.001804   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:45.313811   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:45.501651   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:45.813604   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:46.004321   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:46.312561   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:46.504676   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:46.813309   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:47.001710   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:47.314867   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:47.501909   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:47.814508   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:48.004379   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:48.314270   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:48.502366   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:48.812694   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:49.003874   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:49.314431   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:49.502390   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:50.145862   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:50.147648   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:50.313691   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:50.504708   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:50.814126   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:51.002513   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:51.313854   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:51.501973   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:51.813613   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:52.003174   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:52.315534   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:52.509961   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:52.813659   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:53.002541   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:53.313750   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:53.501967   13048 kapi.go:107] duration metric: took 2m24.504491423s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0717 00:27:53.813523   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:54.312746   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:54.812934   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:55.313484   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:55.813497   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:56.313517   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:56.813283   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:57.471258   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:57.813828   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:58.313114   13048 kapi.go:107] duration metric: took 2m26.503531732s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0717 00:27:58.314824   13048 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-384227 cluster.
	I0717 00:27:58.316226   13048 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0717 00:27:58.317275   13048 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0717 00:27:58.318382   13048 out.go:177] * Enabled addons: storage-provisioner, nvidia-device-plugin, cloud-spanner, helm-tiller, metrics-server, inspektor-gadget, ingress-dns, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0717 00:27:58.319617   13048 addons.go:510] duration metric: took 2m37.967124214s for enable addons: enabled=[storage-provisioner nvidia-device-plugin cloud-spanner helm-tiller metrics-server inspektor-gadget ingress-dns yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0717 00:27:58.319657   13048 start.go:246] waiting for cluster config update ...
	I0717 00:27:58.319689   13048 start.go:255] writing updated cluster config ...
	I0717 00:27:58.319950   13048 ssh_runner.go:195] Run: rm -f paused
	I0717 00:27:58.368668   13048 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 00:27:58.370598   13048 out.go:177] * Done! kubectl is now configured to use "addons-384227" cluster and "default" namespace by default
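	
	For context on the gcp-auth advisory above: pods carrying the `gcp-auth-skip-secret` label are skipped by the gcp-auth webhook, so no credential secret is mounted into them. A minimal illustrative manifest is sketched below; the pod name, container, image, and the label value "true" are assumptions for the example and are not taken from this test run (only the label key appears in the log above).
	
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: example-no-gcp-auth        # hypothetical name, for illustration only
	      labels:
	        gcp-auth-skip-secret: "true"   # label key from the log above; value assumed
	    spec:
	      containers:
	      - name: app                      # hypothetical container
	        image: registry.k8s.io/pause:3.9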
	
	
	==> CRI-O <==
	Jul 17 00:31:03 addons-384227 crio[684]: time="2024-07-17 00:31:03.119023410Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721176263118953764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580553,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e754145-b22b-4789-a790-8fd6fc1fd489 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:31:03 addons-384227 crio[684]: time="2024-07-17 00:31:03.119498367Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=715b5a54-f50d-44f2-bc22-477b41d62e8e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:31:03 addons-384227 crio[684]: time="2024-07-17 00:31:03.119570954Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=715b5a54-f50d-44f2-bc22-477b41d62e8e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:31:03 addons-384227 crio[684]: time="2024-07-17 00:31:03.120048093Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e72d83e9b5c9d82b9f6b3d597d215164e0beebd5b195cf258344c9306c869b26,PodSandboxId:79c01f50e1dd893e76ebb15ed5073bb62f5e021c6ed2c8f574d6629cf7446d6a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721176257409503441,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-7dd2l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 83fa3d37-3105-471c-845f-7da9033760e7,},Annotations:map[string]string{io.kubernetes.container.hash: efd60ad4,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d04a55af0f41650383d558f5ead4f195d48bc0df1c6870b8e3d67fbdd8d7b2,PodSandboxId:2cee3fd0c167104c8b363d045beed2e397e7327f82428cf5917ee6176cc245cb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721176116599378585,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30ead786-c960-40a8-a321-2f7f774d10f6,},Annotations:map[string]string{io.kubernet
es.container.hash: 8b8e0d1c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2f231f824b96088f6892f30b23977945ec76b3f3f15ea1d13e33facbf2f421c,PodSandboxId:fa53b749e3d22bca71a68cac7665772627c0c77fac88f24097f8ae136bbcdbbd,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721176097247719519,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-7xwpc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: f011f7f1-0d18-4279-850b-076dcfcd6908,},Annotations:map[string]string{io.kubernetes.container.hash: f82954c0,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94408e45cb997f29a2489c6e0dde799186ae8e7b7d275d919d91531c55b579da,PodSandboxId:0c8a3036510204fe1af0c73baae3ec9b0477d4b24b426484d16441c3f6748311,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721176077898809247,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-q2bzh,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0e27fbdd-f0bc-44e2-a3e0-097e976a4a65,},Annotations:map[string]string{io.kubernetes.container.hash: 98cdb06c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e294a534ddd7e802dd02f7ca5190a66a5510c9e6da657fe4b895f1c04eb2e49,PodSandboxId:1545d240dcd8ff15bb01402f8cb90bd3a5aeaf1a3297b5fac31341aadc0d68d3,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAIN
ER_EXITED,CreatedAt:1721176018871210499,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-k4b5n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 42e450b6-2b97-4692-b260-2e32356e153e,},Annotations:map[string]string{io.kubernetes.container.hash: 600c2a7e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca0e3bad5e6e935c0ca1502bef8e0da51a05241d9980a5229819c14f5bc61854,PodSandboxId:9831b93895d2e083a6211a2837cb861dbdd8b45ecd5a9a95f8ef074052f4e14e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f61
75e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721176018765636008,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4z9qn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 87213e86-dd23-4707-8d1c-d7bbb58262b9,},Annotations:map[string]string{io.kubernetes.container.hash: a937b98c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cb930070e7d6e1ea4867c649aff3173f1433ef390032352666a2e71a23fcd0,PodSandboxId:5e4fda911a7a2e9811c3048f6276cd2453d949c6a6111b4f3e35f58817e6a661,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1721175998653196144,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-7sd2x,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: bb27b9e7-2f58-4ceb-b978-c36e88e06724,},Annotations:map[string]string{io.kubernetes.container.hash: 2db43ed4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37a0c16d156ac4f26606d741824826ef9651d013f8060c0a4f76cc1ef0f65c42,PodSandboxId:a0cdb9f22c37433ab387051e61c0c691567aa6b6a684240ad8ccca8abedcedc7,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721175981221455797,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-5nswx,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: f7985a97-d5e5-4554-b699-b8a01e187c7e,},Annotations:map[string]string{io.kubernetes.container.hash: f8d9ce34,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df35c92d870696b4fe64a2971d0e6ce353a4e9bbb8e8a9b474d874eb3d6f1ca3,PodSandboxId:981a8872fa005df2f1556778461f2d9ef2b23c48d2d81373545aa7003fd35930,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-serve
r/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721175974627515033,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-ptnnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c732a54-ac1f-4d2b-8090-29a97aac2ca5,},Annotations:map[string]string{io.kubernetes.container.hash: 73ba5e1e,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f11103cc7df3635c1fbf9fb47b6f3a51eea6db7e78b2157fc67acfc2ea48721,PodSandboxId:e2fb26f681873456611c0c1b2a6f351494c6e
0ea4d0e328c4d190f14bbce5b4d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721175927146862191,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 076c6e29-09df-469d-ae38-fe3a33503a57,},Annotations:map[string]string{io.kubernetes.container.hash: a3725e87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e,PodSandboxId:889706a15ed043bce6e674ae47b9231f01559edb63b50514c
8f794160938c8fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721175924618196892,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bpp2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d4f8b36-6961-478d-bbe7-5aded14a13ea,},Annotations:map[string]string{io.kubernetes.container.hash: d83c619d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf,PodSandboxId:fe5d0c7e3456871b3b06ac1d05703e21521e75395b4bead2abbee531ac8a0692,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721175921450104483,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9j492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74949344-2223-4f8d-bc35-737de5d7f6e9,},Annotations:map[string]string{io.kubernetes.container.hash: 82657c1b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3,PodSandboxId:0353bc35834e7a57cf74175cd9d37c05e9a4c04013e53c59aea31ce7c9323adb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721175901941504017,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-384227,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d17ebeaf4ff849d0ec1464ca5a7ba68,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,}
,},&Container{Id:229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f,PodSandboxId:9082a8c6c36587af3f03297c0748e95e6d3c07d477cad43e2a9f9ca82017872a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721175901917599561,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-384227,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf035bf52a69f65af123fbdf3e000bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePe
riod: 30,},},&Container{Id:b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791,PodSandboxId:8e8f4d736d14524ec9ddd52fc06006289e179ed7d9400d8e115959ce227654b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721175901850939448,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-384227,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47b84062fa6b411a724bed1aa03732a,},Annotations:map[string]string{io.kubernetes.container.hash: 841c18dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da60884c96d8871a1838
a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7,PodSandboxId:db2bb303a90cfc338e7ad665f9eb2392af5f30024759d91d97703673f4e69975,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721175901815159042,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-384227,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed91ad1ae7f2691bb897d09b2220db50,},Annotations:map[string]string{io.kubernetes.container.hash: 34b6a3d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=71
5b5a54-f50d-44f2-bc22-477b41d62e8e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:31:03 addons-384227 crio[684]: time="2024-07-17 00:31:03.166210607Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=333d047c-a014-4c48-8467-dd8bb2efd476 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:31:03 addons-384227 crio[684]: time="2024-07-17 00:31:03.166289604Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=333d047c-a014-4c48-8467-dd8bb2efd476 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:31:03 addons-384227 crio[684]: time="2024-07-17 00:31:03.167803125Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=25c8432e-38c1-45ae-bf68-feefc9e12ff0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:31:03 addons-384227 crio[684]: time="2024-07-17 00:31:03.169658031Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721176263169621763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580553,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25c8432e-38c1-45ae-bf68-feefc9e12ff0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:31:03 addons-384227 crio[684]: time="2024-07-17 00:31:03.170477954Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eb3e59d4-bca8-421f-a373-b6bae326ab96 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:31:03 addons-384227 crio[684]: time="2024-07-17 00:31:03.170604027Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eb3e59d4-bca8-421f-a373-b6bae326ab96 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:31:03 addons-384227 crio[684]: time="2024-07-17 00:31:03.170958122Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e72d83e9b5c9d82b9f6b3d597d215164e0beebd5b195cf258344c9306c869b26,PodSandboxId:79c01f50e1dd893e76ebb15ed5073bb62f5e021c6ed2c8f574d6629cf7446d6a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721176257409503441,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-7dd2l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 83fa3d37-3105-471c-845f-7da9033760e7,},Annotations:map[string]string{io.kubernetes.container.hash: efd60ad4,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d04a55af0f41650383d558f5ead4f195d48bc0df1c6870b8e3d67fbdd8d7b2,PodSandboxId:2cee3fd0c167104c8b363d045beed2e397e7327f82428cf5917ee6176cc245cb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721176116599378585,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30ead786-c960-40a8-a321-2f7f774d10f6,},Annotations:map[string]string{io.kubernet
es.container.hash: 8b8e0d1c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2f231f824b96088f6892f30b23977945ec76b3f3f15ea1d13e33facbf2f421c,PodSandboxId:fa53b749e3d22bca71a68cac7665772627c0c77fac88f24097f8ae136bbcdbbd,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721176097247719519,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-7xwpc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: f011f7f1-0d18-4279-850b-076dcfcd6908,},Annotations:map[string]string{io.kubernetes.container.hash: f82954c0,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94408e45cb997f29a2489c6e0dde799186ae8e7b7d275d919d91531c55b579da,PodSandboxId:0c8a3036510204fe1af0c73baae3ec9b0477d4b24b426484d16441c3f6748311,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721176077898809247,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-q2bzh,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0e27fbdd-f0bc-44e2-a3e0-097e976a4a65,},Annotations:map[string]string{io.kubernetes.container.hash: 98cdb06c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e294a534ddd7e802dd02f7ca5190a66a5510c9e6da657fe4b895f1c04eb2e49,PodSandboxId:1545d240dcd8ff15bb01402f8cb90bd3a5aeaf1a3297b5fac31341aadc0d68d3,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAIN
ER_EXITED,CreatedAt:1721176018871210499,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-k4b5n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 42e450b6-2b97-4692-b260-2e32356e153e,},Annotations:map[string]string{io.kubernetes.container.hash: 600c2a7e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca0e3bad5e6e935c0ca1502bef8e0da51a05241d9980a5229819c14f5bc61854,PodSandboxId:9831b93895d2e083a6211a2837cb861dbdd8b45ecd5a9a95f8ef074052f4e14e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f61
75e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721176018765636008,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4z9qn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 87213e86-dd23-4707-8d1c-d7bbb58262b9,},Annotations:map[string]string{io.kubernetes.container.hash: a937b98c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cb930070e7d6e1ea4867c649aff3173f1433ef390032352666a2e71a23fcd0,PodSandboxId:5e4fda911a7a2e9811c3048f6276cd2453d949c6a6111b4f3e35f58817e6a661,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1721175998653196144,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-7sd2x,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: bb27b9e7-2f58-4ceb-b978-c36e88e06724,},Annotations:map[string]string{io.kubernetes.container.hash: 2db43ed4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37a0c16d156ac4f26606d741824826ef9651d013f8060c0a4f76cc1ef0f65c42,PodSandboxId:a0cdb9f22c37433ab387051e61c0c691567aa6b6a684240ad8ccca8abedcedc7,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721175981221455797,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-5nswx,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: f7985a97-d5e5-4554-b699-b8a01e187c7e,},Annotations:map[string]string{io.kubernetes.container.hash: f8d9ce34,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df35c92d870696b4fe64a2971d0e6ce353a4e9bbb8e8a9b474d874eb3d6f1ca3,PodSandboxId:981a8872fa005df2f1556778461f2d9ef2b23c48d2d81373545aa7003fd35930,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-serve
r/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721175974627515033,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-ptnnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c732a54-ac1f-4d2b-8090-29a97aac2ca5,},Annotations:map[string]string{io.kubernetes.container.hash: 73ba5e1e,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f11103cc7df3635c1fbf9fb47b6f3a51eea6db7e78b2157fc67acfc2ea48721,PodSandboxId:e2fb26f681873456611c0c1b2a6f351494c6e
0ea4d0e328c4d190f14bbce5b4d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721175927146862191,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 076c6e29-09df-469d-ae38-fe3a33503a57,},Annotations:map[string]string{io.kubernetes.container.hash: a3725e87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e,PodSandboxId:889706a15ed043bce6e674ae47b9231f01559edb63b50514c
8f794160938c8fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721175924618196892,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bpp2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d4f8b36-6961-478d-bbe7-5aded14a13ea,},Annotations:map[string]string{io.kubernetes.container.hash: d83c619d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf,PodSandboxId:fe5d0c7e3456871b3b06ac1d05703e21521e75395b4bead2abbee531ac8a0692,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721175921450104483,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9j492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74949344-2223-4f8d-bc35-737de5d7f6e9,},Annotations:map[string]string{io.kubernetes.container.hash: 82657c1b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3,PodSandboxId:0353bc35834e7a57cf74175cd9d37c05e9a4c04013e53c59aea31ce7c9323adb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721175901941504017,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-384227,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d17ebeaf4ff849d0ec1464ca5a7ba68,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,}
,},&Container{Id:229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f,PodSandboxId:9082a8c6c36587af3f03297c0748e95e6d3c07d477cad43e2a9f9ca82017872a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721175901917599561,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-384227,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf035bf52a69f65af123fbdf3e000bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePe
riod: 30,},},&Container{Id:b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791,PodSandboxId:8e8f4d736d14524ec9ddd52fc06006289e179ed7d9400d8e115959ce227654b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721175901850939448,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-384227,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47b84062fa6b411a724bed1aa03732a,},Annotations:map[string]string{io.kubernetes.container.hash: 841c18dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da60884c96d8871a1838
a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7,PodSandboxId:db2bb303a90cfc338e7ad665f9eb2392af5f30024759d91d97703673f4e69975,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721175901815159042,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-384227,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed91ad1ae7f2691bb897d09b2220db50,},Annotations:map[string]string{io.kubernetes.container.hash: 34b6a3d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eb
3e59d4-bca8-421f-a373-b6bae326ab96 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:31:03 addons-384227 crio[684]: time="2024-07-17 00:31:03.207891959Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=00664823-b208-4a52-86cd-4bfded23af41 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:31:03 addons-384227 crio[684]: time="2024-07-17 00:31:03.208131368Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=00664823-b208-4a52-86cd-4bfded23af41 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:31:03 addons-384227 crio[684]: time="2024-07-17 00:31:03.209590229Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=31f12975-f85f-4793-b11e-bb403bb2ad1d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:31:03 addons-384227 crio[684]: time="2024-07-17 00:31:03.210944892Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721176263210918113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580553,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=31f12975-f85f-4793-b11e-bb403bb2ad1d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:31:03 addons-384227 crio[684]: time="2024-07-17 00:31:03.211563999Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=adda765d-534f-446d-9863-c77df1abb914 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:31:03 addons-384227 crio[684]: time="2024-07-17 00:31:03.211622596Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=adda765d-534f-446d-9863-c77df1abb914 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:31:03 addons-384227 crio[684]: time="2024-07-17 00:31:03.212197441Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e72d83e9b5c9d82b9f6b3d597d215164e0beebd5b195cf258344c9306c869b26,PodSandboxId:79c01f50e1dd893e76ebb15ed5073bb62f5e021c6ed2c8f574d6629cf7446d6a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721176257409503441,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-7dd2l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 83fa3d37-3105-471c-845f-7da9033760e7,},Annotations:map[string]string{io.kubernetes.container.hash: efd60ad4,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d04a55af0f41650383d558f5ead4f195d48bc0df1c6870b8e3d67fbdd8d7b2,PodSandboxId:2cee3fd0c167104c8b363d045beed2e397e7327f82428cf5917ee6176cc245cb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721176116599378585,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30ead786-c960-40a8-a321-2f7f774d10f6,},Annotations:map[string]string{io.kubernet
es.container.hash: 8b8e0d1c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2f231f824b96088f6892f30b23977945ec76b3f3f15ea1d13e33facbf2f421c,PodSandboxId:fa53b749e3d22bca71a68cac7665772627c0c77fac88f24097f8ae136bbcdbbd,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721176097247719519,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-7xwpc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: f011f7f1-0d18-4279-850b-076dcfcd6908,},Annotations:map[string]string{io.kubernetes.container.hash: f82954c0,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94408e45cb997f29a2489c6e0dde799186ae8e7b7d275d919d91531c55b579da,PodSandboxId:0c8a3036510204fe1af0c73baae3ec9b0477d4b24b426484d16441c3f6748311,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721176077898809247,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-q2bzh,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0e27fbdd-f0bc-44e2-a3e0-097e976a4a65,},Annotations:map[string]string{io.kubernetes.container.hash: 98cdb06c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e294a534ddd7e802dd02f7ca5190a66a5510c9e6da657fe4b895f1c04eb2e49,PodSandboxId:1545d240dcd8ff15bb01402f8cb90bd3a5aeaf1a3297b5fac31341aadc0d68d3,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAIN
ER_EXITED,CreatedAt:1721176018871210499,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-k4b5n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 42e450b6-2b97-4692-b260-2e32356e153e,},Annotations:map[string]string{io.kubernetes.container.hash: 600c2a7e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca0e3bad5e6e935c0ca1502bef8e0da51a05241d9980a5229819c14f5bc61854,PodSandboxId:9831b93895d2e083a6211a2837cb861dbdd8b45ecd5a9a95f8ef074052f4e14e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f61
75e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721176018765636008,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4z9qn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 87213e86-dd23-4707-8d1c-d7bbb58262b9,},Annotations:map[string]string{io.kubernetes.container.hash: a937b98c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cb930070e7d6e1ea4867c649aff3173f1433ef390032352666a2e71a23fcd0,PodSandboxId:5e4fda911a7a2e9811c3048f6276cd2453d949c6a6111b4f3e35f58817e6a661,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1721175998653196144,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-7sd2x,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: bb27b9e7-2f58-4ceb-b978-c36e88e06724,},Annotations:map[string]string{io.kubernetes.container.hash: 2db43ed4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37a0c16d156ac4f26606d741824826ef9651d013f8060c0a4f76cc1ef0f65c42,PodSandboxId:a0cdb9f22c37433ab387051e61c0c691567aa6b6a684240ad8ccca8abedcedc7,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721175981221455797,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-5nswx,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: f7985a97-d5e5-4554-b699-b8a01e187c7e,},Annotations:map[string]string{io.kubernetes.container.hash: f8d9ce34,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df35c92d870696b4fe64a2971d0e6ce353a4e9bbb8e8a9b474d874eb3d6f1ca3,PodSandboxId:981a8872fa005df2f1556778461f2d9ef2b23c48d2d81373545aa7003fd35930,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-serve
r/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721175974627515033,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-ptnnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c732a54-ac1f-4d2b-8090-29a97aac2ca5,},Annotations:map[string]string{io.kubernetes.container.hash: 73ba5e1e,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f11103cc7df3635c1fbf9fb47b6f3a51eea6db7e78b2157fc67acfc2ea48721,PodSandboxId:e2fb26f681873456611c0c1b2a6f351494c6e
0ea4d0e328c4d190f14bbce5b4d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721175927146862191,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 076c6e29-09df-469d-ae38-fe3a33503a57,},Annotations:map[string]string{io.kubernetes.container.hash: a3725e87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e,PodSandboxId:889706a15ed043bce6e674ae47b9231f01559edb63b50514c
8f794160938c8fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721175924618196892,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bpp2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d4f8b36-6961-478d-bbe7-5aded14a13ea,},Annotations:map[string]string{io.kubernetes.container.hash: d83c619d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf,PodSandboxId:fe5d0c7e3456871b3b06ac1d05703e21521e75395b4bead2abbee531ac8a0692,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721175921450104483,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9j492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74949344-2223-4f8d-bc35-737de5d7f6e9,},Annotations:map[string]string{io.kubernetes.container.hash: 82657c1b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3,PodSandboxId:0353bc35834e7a57cf74175cd9d37c05e9a4c04013e53c59aea31ce7c9323adb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721175901941504017,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-384227,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d17ebeaf4ff849d0ec1464ca5a7ba68,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,}
,},&Container{Id:229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f,PodSandboxId:9082a8c6c36587af3f03297c0748e95e6d3c07d477cad43e2a9f9ca82017872a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721175901917599561,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-384227,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf035bf52a69f65af123fbdf3e000bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePe
riod: 30,},},&Container{Id:b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791,PodSandboxId:8e8f4d736d14524ec9ddd52fc06006289e179ed7d9400d8e115959ce227654b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721175901850939448,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-384227,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47b84062fa6b411a724bed1aa03732a,},Annotations:map[string]string{io.kubernetes.container.hash: 841c18dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da60884c96d8871a1838
a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7,PodSandboxId:db2bb303a90cfc338e7ad665f9eb2392af5f30024759d91d97703673f4e69975,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721175901815159042,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-384227,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed91ad1ae7f2691bb897d09b2220db50,},Annotations:map[string]string{io.kubernetes.container.hash: 34b6a3d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ad
da765d-534f-446d-9863-c77df1abb914 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:31:03 addons-384227 crio[684]: time="2024-07-17 00:31:03.249067236Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=75acffd7-6fa1-4833-a697-69a688d13f4b name=/runtime.v1.RuntimeService/Version
	Jul 17 00:31:03 addons-384227 crio[684]: time="2024-07-17 00:31:03.249161913Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=75acffd7-6fa1-4833-a697-69a688d13f4b name=/runtime.v1.RuntimeService/Version
	Jul 17 00:31:03 addons-384227 crio[684]: time="2024-07-17 00:31:03.250559713Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d900b1c3-d494-45a3-8114-71d8d8f11bf6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:31:03 addons-384227 crio[684]: time="2024-07-17 00:31:03.252143981Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721176263252113201,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580553,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d900b1c3-d494-45a3-8114-71d8d8f11bf6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:31:03 addons-384227 crio[684]: time="2024-07-17 00:31:03.252626547Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a45d5169-8916-4de7-a6ad-203fed17c71c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:31:03 addons-384227 crio[684]: time="2024-07-17 00:31:03.252679750Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a45d5169-8916-4de7-a6ad-203fed17c71c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:31:03 addons-384227 crio[684]: time="2024-07-17 00:31:03.253230471Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e72d83e9b5c9d82b9f6b3d597d215164e0beebd5b195cf258344c9306c869b26,PodSandboxId:79c01f50e1dd893e76ebb15ed5073bb62f5e021c6ed2c8f574d6629cf7446d6a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721176257409503441,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-7dd2l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 83fa3d37-3105-471c-845f-7da9033760e7,},Annotations:map[string]string{io.kubernetes.container.hash: efd60ad4,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d04a55af0f41650383d558f5ead4f195d48bc0df1c6870b8e3d67fbdd8d7b2,PodSandboxId:2cee3fd0c167104c8b363d045beed2e397e7327f82428cf5917ee6176cc245cb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721176116599378585,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30ead786-c960-40a8-a321-2f7f774d10f6,},Annotations:map[string]string{io.kubernet
es.container.hash: 8b8e0d1c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2f231f824b96088f6892f30b23977945ec76b3f3f15ea1d13e33facbf2f421c,PodSandboxId:fa53b749e3d22bca71a68cac7665772627c0c77fac88f24097f8ae136bbcdbbd,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721176097247719519,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-7xwpc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: f011f7f1-0d18-4279-850b-076dcfcd6908,},Annotations:map[string]string{io.kubernetes.container.hash: f82954c0,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94408e45cb997f29a2489c6e0dde799186ae8e7b7d275d919d91531c55b579da,PodSandboxId:0c8a3036510204fe1af0c73baae3ec9b0477d4b24b426484d16441c3f6748311,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721176077898809247,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-q2bzh,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0e27fbdd-f0bc-44e2-a3e0-097e976a4a65,},Annotations:map[string]string{io.kubernetes.container.hash: 98cdb06c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e294a534ddd7e802dd02f7ca5190a66a5510c9e6da657fe4b895f1c04eb2e49,PodSandboxId:1545d240dcd8ff15bb01402f8cb90bd3a5aeaf1a3297b5fac31341aadc0d68d3,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAIN
ER_EXITED,CreatedAt:1721176018871210499,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-k4b5n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 42e450b6-2b97-4692-b260-2e32356e153e,},Annotations:map[string]string{io.kubernetes.container.hash: 600c2a7e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca0e3bad5e6e935c0ca1502bef8e0da51a05241d9980a5229819c14f5bc61854,PodSandboxId:9831b93895d2e083a6211a2837cb861dbdd8b45ecd5a9a95f8ef074052f4e14e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f61
75e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721176018765636008,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4z9qn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 87213e86-dd23-4707-8d1c-d7bbb58262b9,},Annotations:map[string]string{io.kubernetes.container.hash: a937b98c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cb930070e7d6e1ea4867c649aff3173f1433ef390032352666a2e71a23fcd0,PodSandboxId:5e4fda911a7a2e9811c3048f6276cd2453d949c6a6111b4f3e35f58817e6a661,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1721175998653196144,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-7sd2x,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: bb27b9e7-2f58-4ceb-b978-c36e88e06724,},Annotations:map[string]string{io.kubernetes.container.hash: 2db43ed4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37a0c16d156ac4f26606d741824826ef9651d013f8060c0a4f76cc1ef0f65c42,PodSandboxId:a0cdb9f22c37433ab387051e61c0c691567aa6b6a684240ad8ccca8abedcedc7,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721175981221455797,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-5nswx,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: f7985a97-d5e5-4554-b699-b8a01e187c7e,},Annotations:map[string]string{io.kubernetes.container.hash: f8d9ce34,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df35c92d870696b4fe64a2971d0e6ce353a4e9bbb8e8a9b474d874eb3d6f1ca3,PodSandboxId:981a8872fa005df2f1556778461f2d9ef2b23c48d2d81373545aa7003fd35930,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-serve
r/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721175974627515033,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-ptnnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c732a54-ac1f-4d2b-8090-29a97aac2ca5,},Annotations:map[string]string{io.kubernetes.container.hash: 73ba5e1e,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f11103cc7df3635c1fbf9fb47b6f3a51eea6db7e78b2157fc67acfc2ea48721,PodSandboxId:e2fb26f681873456611c0c1b2a6f351494c6e
0ea4d0e328c4d190f14bbce5b4d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721175927146862191,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 076c6e29-09df-469d-ae38-fe3a33503a57,},Annotations:map[string]string{io.kubernetes.container.hash: a3725e87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e,PodSandboxId:889706a15ed043bce6e674ae47b9231f01559edb63b50514c
8f794160938c8fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721175924618196892,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bpp2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d4f8b36-6961-478d-bbe7-5aded14a13ea,},Annotations:map[string]string{io.kubernetes.container.hash: d83c619d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf,PodSandboxId:fe5d0c7e3456871b3b06ac1d05703e21521e75395b4bead2abbee531ac8a0692,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721175921450104483,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9j492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74949344-2223-4f8d-bc35-737de5d7f6e9,},Annotations:map[string]string{io.kubernetes.container.hash: 82657c1b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3,PodSandboxId:0353bc35834e7a57cf74175cd9d37c05e9a4c04013e53c59aea31ce7c9323adb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721175901941504017,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-384227,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d17ebeaf4ff849d0ec1464ca5a7ba68,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,}
,},&Container{Id:229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f,PodSandboxId:9082a8c6c36587af3f03297c0748e95e6d3c07d477cad43e2a9f9ca82017872a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721175901917599561,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-384227,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf035bf52a69f65af123fbdf3e000bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePe
riod: 30,},},&Container{Id:b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791,PodSandboxId:8e8f4d736d14524ec9ddd52fc06006289e179ed7d9400d8e115959ce227654b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721175901850939448,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-384227,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47b84062fa6b411a724bed1aa03732a,},Annotations:map[string]string{io.kubernetes.container.hash: 841c18dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da60884c96d8871a1838
a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7,PodSandboxId:db2bb303a90cfc338e7ad665f9eb2392af5f30024759d91d97703673f4e69975,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721175901815159042,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-384227,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed91ad1ae7f2691bb897d09b2220db50,},Annotations:map[string]string{io.kubernetes.container.hash: 34b6a3d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a4
5d5169-8916-4de7-a6ad-203fed17c71c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e72d83e9b5c9d       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        5 seconds ago       Running             hello-world-app           0                   79c01f50e1dd8       hello-world-app-6778b5fc9f-7dd2l
	d5d04a55af0f4       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                              2 minutes ago       Running             nginx                     0                   2cee3fd0c1671       nginx
	f2f231f824b96       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                        2 minutes ago       Running             headlamp                  0                   fa53b749e3d22       headlamp-7867546754-7xwpc
	94408e45cb997       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 3 minutes ago       Running             gcp-auth                  0                   0c8a303651020       gcp-auth-5db96cd9b4-q2bzh
	8e294a534ddd7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago       Exited              patch                     0                   1545d240dcd8f       ingress-nginx-admission-patch-k4b5n
	ca0e3bad5e6e9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago       Exited              create                    0                   9831b93895d2e       ingress-nginx-admission-create-4z9qn
	77cb930070e7d       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   5e4fda911a7a2       local-path-provisioner-8d985888d-7sd2x
	37a0c16d156ac       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                              4 minutes ago       Running             yakd                      0                   a0cdb9f22c374       yakd-dashboard-799879c74f-5nswx
	df35c92d87069       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   981a8872fa005       metrics-server-c59844bb4-ptnnk
	6f11103cc7df3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   e2fb26f681873       storage-provisioner
	0209929ebeb61       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago       Running             coredns                   0                   889706a15ed04       coredns-7db6d8ff4d-bpp2w
	a96ca0d1a5578       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                                             5 minutes ago       Running             kube-proxy                0                   fe5d0c7e34568       kube-proxy-9j492
	69c0125279f37       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                                             6 minutes ago       Running             kube-scheduler            0                   0353bc35834e7       kube-scheduler-addons-384227
	229ef064e998c       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                                             6 minutes ago       Running             kube-controller-manager   0                   9082a8c6c3658       kube-controller-manager-addons-384227
	b9851ce86beb5       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             6 minutes ago       Running             etcd                      0                   8e8f4d736d145       etcd-addons-384227
	da60884c96d88       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                                             6 minutes ago       Running             kube-apiserver            0                   db2bb303a90cf       kube-apiserver-addons-384227
	
	
	==> coredns [0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e] <==
	[INFO] 10.244.0.8:55027 - 54143 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000466024s
	[INFO] 10.244.0.8:43443 - 33241 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000095988s
	[INFO] 10.244.0.8:43443 - 468 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000211613s
	[INFO] 10.244.0.8:54113 - 25447 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000061211s
	[INFO] 10.244.0.8:54113 - 18021 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000223573s
	[INFO] 10.244.0.8:45863 - 29037 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000158485s
	[INFO] 10.244.0.8:45863 - 883 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000207196s
	[INFO] 10.244.0.8:40057 - 31269 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000072192s
	[INFO] 10.244.0.8:40057 - 64807 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00019205s
	[INFO] 10.244.0.8:59528 - 11336 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000061484s
	[INFO] 10.244.0.8:59528 - 16206 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000101821s
	[INFO] 10.244.0.8:43158 - 31253 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000034669s
	[INFO] 10.244.0.8:43158 - 30743 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000024158s
	[INFO] 10.244.0.8:35027 - 21443 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000031607s
	[INFO] 10.244.0.8:35027 - 48833 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000049999s
	[INFO] 10.244.0.22:35812 - 19701 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000617312s
	[INFO] 10.244.0.22:34451 - 48891 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000861602s
	[INFO] 10.244.0.22:45930 - 8800 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000106058s
	[INFO] 10.244.0.22:38576 - 65027 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000088855s
	[INFO] 10.244.0.22:39604 - 20596 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000074083s
	[INFO] 10.244.0.22:45808 - 11643 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000059131s
	[INFO] 10.244.0.22:46166 - 62943 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000328867s
	[INFO] 10.244.0.22:57922 - 47835 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 230 0.000620649s
	[INFO] 10.244.0.25:51925 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000198131s
	[INFO] 10.244.0.25:48527 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000134125s
	
	
	==> describe nodes <==
	Name:               addons-384227
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-384227
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=addons-384227
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T00_25_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-384227
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:25:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-384227
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:30:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:29:11 +0000   Wed, 17 Jul 2024 00:25:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:29:11 +0000   Wed, 17 Jul 2024 00:25:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:29:11 +0000   Wed, 17 Jul 2024 00:25:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:29:11 +0000   Wed, 17 Jul 2024 00:25:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.177
	  Hostname:    addons-384227
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 20653670fc6a410a9a9044868b0bb2a1
	  System UUID:                20653670-fc6a-410a-9a90-44868b0bb2a1
	  Boot ID:                    0f945363-fed3-48df-9f85-333a27814996
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-7dd2l          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  gcp-auth                    gcp-auth-5db96cd9b4-q2bzh                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	  headlamp                    headlamp-7867546754-7xwpc                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 coredns-7db6d8ff4d-bpp2w                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m43s
	  kube-system                 etcd-addons-384227                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m57s
	  kube-system                 kube-apiserver-addons-384227              250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-controller-manager-addons-384227     200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-proxy-9j492                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-scheduler-addons-384227              100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 metrics-server-c59844bb4-ptnnk            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m38s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  local-path-storage          local-path-provisioner-8d985888d-7sd2x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  yakd-dashboard              yakd-dashboard-799879c74f-5nswx           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     5m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m41s                kube-proxy       
	  Normal  NodeHasSufficientMemory  6m2s (x8 over 6m2s)  kubelet          Node addons-384227 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m2s (x8 over 6m2s)  kubelet          Node addons-384227 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m2s (x7 over 6m2s)  kubelet          Node addons-384227 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m57s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m57s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m57s                kubelet          Node addons-384227 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m57s                kubelet          Node addons-384227 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m57s                kubelet          Node addons-384227 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m55s                kubelet          Node addons-384227 status is now: NodeReady
	  Normal  RegisteredNode           5m44s                node-controller  Node addons-384227 event: Registered Node addons-384227 in Controller
	
	
	==> dmesg <==
	[  +5.171397] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.586109] systemd-fstab-generator[1490]: Ignoring "noauto" option for root device
	[  +5.205811] kauditd_printk_skb: 106 callbacks suppressed
	[  +5.012566] kauditd_printk_skb: 167 callbacks suppressed
	[  +6.741485] kauditd_printk_skb: 52 callbacks suppressed
	[Jul17 00:26] kauditd_printk_skb: 4 callbacks suppressed
	[ +32.069673] kauditd_printk_skb: 6 callbacks suppressed
	[  +8.224653] kauditd_printk_skb: 23 callbacks suppressed
	[Jul17 00:27] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.098996] kauditd_printk_skb: 48 callbacks suppressed
	[  +5.750463] kauditd_printk_skb: 40 callbacks suppressed
	[ +14.562253] kauditd_printk_skb: 2 callbacks suppressed
	[ +18.574850] kauditd_printk_skb: 24 callbacks suppressed
	[  +7.345460] kauditd_printk_skb: 15 callbacks suppressed
	[Jul17 00:28] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.190883] kauditd_printk_skb: 55 callbacks suppressed
	[  +7.068065] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.104805] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.215283] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.036573] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.045936] kauditd_printk_skb: 5 callbacks suppressed
	[  +8.491832] kauditd_printk_skb: 4 callbacks suppressed
	[Jul17 00:29] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.357291] kauditd_printk_skb: 33 callbacks suppressed
	[Jul17 00:30] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791] <==
	{"level":"warn","ts":"2024-07-17T00:27:11.2174Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.185323ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85556"}
	{"level":"info","ts":"2024-07-17T00:27:11.217476Z","caller":"traceutil/trace.go:171","msg":"trace[1675704464] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1167; }","duration":"126.286912ms","start":"2024-07-17T00:27:11.09118Z","end":"2024-07-17T00:27:11.217467Z","steps":["trace[1675704464] 'agreement among raft nodes before linearized reading'  (duration: 125.771029ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:27:50.126375Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"331.036251ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4367"}
	{"level":"info","ts":"2024-07-17T00:27:50.126845Z","caller":"traceutil/trace.go:171","msg":"trace[182827127] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1255; }","duration":"331.526382ms","start":"2024-07-17T00:27:49.795294Z","end":"2024-07-17T00:27:50.12682Z","steps":["trace[182827127] 'range keys from in-memory index tree'  (duration: 330.904651ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:27:50.126901Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.682327ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T00:27:50.126954Z","caller":"traceutil/trace.go:171","msg":"trace[1815284874] range","detail":"{range_begin:/registry/networkpolicies/; range_end:/registry/networkpolicies0; response_count:0; response_revision:1255; }","duration":"146.769839ms","start":"2024-07-17T00:27:49.980175Z","end":"2024-07-17T00:27:50.126945Z","steps":["trace[1815284874] 'count revisions from in-memory index tree'  (duration: 146.591172ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:27:50.12693Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T00:27:49.795282Z","time spent":"331.625378ms","remote":"127.0.0.1:52714","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":4391,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-07-17T00:27:50.127163Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.712898ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-07-17T00:27:50.127203Z","caller":"traceutil/trace.go:171","msg":"trace[1165630251] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1255; }","duration":"143.751733ms","start":"2024-07-17T00:27:49.983444Z","end":"2024-07-17T00:27:50.127195Z","steps":["trace[1165630251] 'range keys from in-memory index tree'  (duration: 143.632386ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:27:57.450921Z","caller":"traceutil/trace.go:171","msg":"trace[1979275742] linearizableReadLoop","detail":"{readStateIndex:1329; appliedIndex:1328; }","duration":"200.542865ms","start":"2024-07-17T00:27:57.250363Z","end":"2024-07-17T00:27:57.450906Z","steps":["trace[1979275742] 'read index received'  (duration: 200.390653ms)","trace[1979275742] 'applied index is now lower than readState.Index'  (duration: 151.754µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:27:57.451075Z","caller":"traceutil/trace.go:171","msg":"trace[63194154] transaction","detail":"{read_only:false; response_revision:1278; number_of_response:1; }","duration":"449.856243ms","start":"2024-07-17T00:27:57.001211Z","end":"2024-07-17T00:27:57.451067Z","steps":["trace[63194154] 'process raft request'  (duration: 449.592836ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:27:57.451161Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T00:27:57.001196Z","time spent":"449.904812ms","remote":"127.0.0.1:52772","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1258 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2024-07-17T00:27:57.451377Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.222561ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4367"}
	{"level":"info","ts":"2024-07-17T00:27:57.452071Z","caller":"traceutil/trace.go:171","msg":"trace[1935906043] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1278; }","duration":"156.918808ms","start":"2024-07-17T00:27:57.295142Z","end":"2024-07-17T00:27:57.45206Z","steps":["trace[1935906043] 'agreement among raft nodes before linearized reading'  (duration: 156.144783ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:27:57.451514Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.145784ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:8"}
	{"level":"info","ts":"2024-07-17T00:27:57.452574Z","caller":"traceutil/trace.go:171","msg":"trace[917680487] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:1278; }","duration":"202.226302ms","start":"2024-07-17T00:27:57.250338Z","end":"2024-07-17T00:27:57.452564Z","steps":["trace[917680487] 'agreement among raft nodes before linearized reading'  (duration: 201.077172ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:28:15.832276Z","caller":"traceutil/trace.go:171","msg":"trace[906998417] transaction","detail":"{read_only:false; response_revision:1415; number_of_response:1; }","duration":"142.631563ms","start":"2024-07-17T00:28:15.689626Z","end":"2024-07-17T00:28:15.832258Z","steps":["trace[906998417] 'process raft request'  (duration: 142.434198ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:28:44.782786Z","caller":"traceutil/trace.go:171","msg":"trace[109758002] linearizableReadLoop","detail":"{readStateIndex:1673; appliedIndex:1672; }","duration":"295.246877ms","start":"2024-07-17T00:28:44.487496Z","end":"2024-07-17T00:28:44.782743Z","steps":["trace[109758002] 'read index received'  (duration: 295.083312ms)","trace[109758002] 'applied index is now lower than readState.Index'  (duration: 163.085µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:28:44.782912Z","caller":"traceutil/trace.go:171","msg":"trace[1149736703] transaction","detail":"{read_only:false; response_revision:1607; number_of_response:1; }","duration":"358.084269ms","start":"2024-07-17T00:28:44.424819Z","end":"2024-07-17T00:28:44.782903Z","steps":["trace[1149736703] 'process raft request'  (duration: 357.79811ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:28:44.783085Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T00:28:44.424804Z","time spent":"358.129455ms","remote":"127.0.0.1:52772","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1599 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2024-07-17T00:28:44.783095Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.580594ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6032"}
	{"level":"info","ts":"2024-07-17T00:28:44.783141Z","caller":"traceutil/trace.go:171","msg":"trace[687530247] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1607; }","duration":"197.649579ms","start":"2024-07-17T00:28:44.58548Z","end":"2024-07-17T00:28:44.78313Z","steps":["trace[687530247] 'agreement among raft nodes before linearized reading'  (duration: 197.485575ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:28:44.783253Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"295.756293ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T00:28:44.783267Z","caller":"traceutil/trace.go:171","msg":"trace[1490239454] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1607; }","duration":"295.770618ms","start":"2024-07-17T00:28:44.487492Z","end":"2024-07-17T00:28:44.783262Z","steps":["trace[1490239454] 'agreement among raft nodes before linearized reading'  (duration: 295.746801ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:29:20.401931Z","caller":"traceutil/trace.go:171","msg":"trace[748204291] transaction","detail":"{read_only:false; response_revision:1708; number_of_response:1; }","duration":"204.71526ms","start":"2024-07-17T00:29:20.197167Z","end":"2024-07-17T00:29:20.401883Z","steps":["trace[748204291] 'process raft request'  (duration: 204.35111ms)"],"step_count":1}
	
	
	==> gcp-auth [94408e45cb997f29a2489c6e0dde799186ae8e7b7d275d919d91531c55b579da] <==
	2024/07/17 00:27:58 GCP Auth Webhook started!
	2024/07/17 00:28:04 Ready to marshal response ...
	2024/07/17 00:28:04 Ready to write response ...
	2024/07/17 00:28:04 Ready to marshal response ...
	2024/07/17 00:28:04 Ready to write response ...
	2024/07/17 00:28:06 Ready to marshal response ...
	2024/07/17 00:28:06 Ready to write response ...
	2024/07/17 00:28:06 Ready to marshal response ...
	2024/07/17 00:28:06 Ready to write response ...
	2024/07/17 00:28:06 Ready to marshal response ...
	2024/07/17 00:28:06 Ready to write response ...
	2024/07/17 00:28:09 Ready to marshal response ...
	2024/07/17 00:28:09 Ready to write response ...
	2024/07/17 00:28:11 Ready to marshal response ...
	2024/07/17 00:28:11 Ready to write response ...
	2024/07/17 00:28:30 Ready to marshal response ...
	2024/07/17 00:28:30 Ready to write response ...
	2024/07/17 00:28:31 Ready to marshal response ...
	2024/07/17 00:28:31 Ready to write response ...
	2024/07/17 00:28:37 Ready to marshal response ...
	2024/07/17 00:28:37 Ready to write response ...
	2024/07/17 00:29:12 Ready to marshal response ...
	2024/07/17 00:29:12 Ready to write response ...
	2024/07/17 00:30:53 Ready to marshal response ...
	2024/07/17 00:30:53 Ready to write response ...
	
	
	==> kernel <==
	 00:31:03 up 6 min,  0 users,  load average: 0.30, 1.02, 0.60
	Linux addons-384227 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [da60884c96d8871a1838a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7] <==
	W0717 00:27:16.724572       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 00:27:16.724620       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0717 00:27:16.725248       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.53.214:443/apis/metrics.k8s.io/v1beta1: Get "https://10.106.53.214:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.106.53.214:443: connect: connection refused
	I0717 00:27:16.786890       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 00:28:06.282963       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.176.27"}
	I0717 00:28:30.863017       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0717 00:28:31.065868       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.25.103"}
	I0717 00:28:31.204187       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0717 00:28:32.243857       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0717 00:28:53.647883       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0717 00:29:28.607937       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:29:28.608121       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:29:28.701715       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:29:28.701795       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:29:28.722578       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:29:28.722688       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:29:28.736253       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:29:28.736303       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:29:28.772744       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:29:28.772893       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0717 00:29:29.737219       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0717 00:29:29.773749       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0717 00:29:29.778400       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0717 00:30:53.564620       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.95.48"}
	
	
	==> kube-controller-manager [229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f] <==
	I0717 00:29:50.386309       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0717 00:29:50.386343       1 shared_informer.go:320] Caches are synced for garbage collector
	W0717 00:30:08.778653       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:30:08.778750       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:30:09.157574       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:30:09.157671       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:30:09.726767       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:30:09.726861       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:30:21.070223       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:30:21.070252       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:30:37.510185       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:30:37.510251       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:30:47.839188       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:30:47.839238       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0717 00:30:53.398811       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="41.272574ms"
	I0717 00:30:53.407414       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="8.4879ms"
	I0717 00:30:53.409146       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="46.175µs"
	I0717 00:30:53.412813       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="27.338µs"
	I0717 00:30:55.204110       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0717 00:30:55.206509       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="4.812µs"
	I0717 00:30:55.215757       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0717 00:30:57.602589       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="14.999288ms"
	I0717 00:30:57.602864       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="48.075µs"
	W0717 00:30:58.508278       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:30:58.508379       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf] <==
	I0717 00:25:22.337633       1 server_linux.go:69] "Using iptables proxy"
	I0717 00:25:22.378491       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.177"]
	I0717 00:25:22.478854       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 00:25:22.478894       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 00:25:22.478916       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:25:22.488357       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:25:22.488520       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:25:22.488530       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:25:22.492720       1 config.go:192] "Starting service config controller"
	I0717 00:25:22.492737       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:25:22.492766       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:25:22.492773       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:25:22.493358       1 config.go:319] "Starting node config controller"
	I0717 00:25:22.493366       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 00:25:22.593467       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 00:25:22.593525       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:25:22.593738       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3] <==
	E0717 00:25:04.366475       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 00:25:04.366563       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 00:25:04.366589       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 00:25:04.367052       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 00:25:05.347816       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 00:25:05.347879       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 00:25:05.489963       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 00:25:05.490056       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 00:25:05.504394       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:25:05.504720       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 00:25:05.527057       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 00:25:05.527158       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 00:25:05.581360       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:25:05.581416       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:25:05.581500       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 00:25:05.581533       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 00:25:05.586677       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 00:25:05.586779       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 00:25:05.615836       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:25:05.616534       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:25:05.643344       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 00:25:05.643387       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 00:25:05.668678       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 00:25:05.668715       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0717 00:25:08.148619       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 00:30:53 addons-384227 kubelet[1279]: I0717 00:30:53.392256    1279 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f0f8500-9872-4d20-9442-c719eae3b46b" containerName="csi-external-health-monitor-controller"
	Jul 17 00:30:53 addons-384227 kubelet[1279]: I0717 00:30:53.420734    1279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/83fa3d37-3105-471c-845f-7da9033760e7-gcp-creds\") pod \"hello-world-app-6778b5fc9f-7dd2l\" (UID: \"83fa3d37-3105-471c-845f-7da9033760e7\") " pod="default/hello-world-app-6778b5fc9f-7dd2l"
	Jul 17 00:30:53 addons-384227 kubelet[1279]: I0717 00:30:53.420800    1279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqbvt\" (UniqueName: \"kubernetes.io/projected/83fa3d37-3105-471c-845f-7da9033760e7-kube-api-access-qqbvt\") pod \"hello-world-app-6778b5fc9f-7dd2l\" (UID: \"83fa3d37-3105-471c-845f-7da9033760e7\") " pod="default/hello-world-app-6778b5fc9f-7dd2l"
	Jul 17 00:30:54 addons-384227 kubelet[1279]: I0717 00:30:54.530013    1279 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg2cz\" (UniqueName: \"kubernetes.io/projected/959e53f2-7e3f-452f-b7ce-9f9134926b56-kube-api-access-rg2cz\") pod \"959e53f2-7e3f-452f-b7ce-9f9134926b56\" (UID: \"959e53f2-7e3f-452f-b7ce-9f9134926b56\") "
	Jul 17 00:30:54 addons-384227 kubelet[1279]: I0717 00:30:54.532077    1279 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/959e53f2-7e3f-452f-b7ce-9f9134926b56-kube-api-access-rg2cz" (OuterVolumeSpecName: "kube-api-access-rg2cz") pod "959e53f2-7e3f-452f-b7ce-9f9134926b56" (UID: "959e53f2-7e3f-452f-b7ce-9f9134926b56"). InnerVolumeSpecName "kube-api-access-rg2cz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 00:30:54 addons-384227 kubelet[1279]: I0717 00:30:54.551831    1279 scope.go:117] "RemoveContainer" containerID="6bd028fb46223e73e0f8053c9e49bf8106bd24979f0942670952f6f71867436a"
	Jul 17 00:30:54 addons-384227 kubelet[1279]: I0717 00:30:54.583372    1279 scope.go:117] "RemoveContainer" containerID="6bd028fb46223e73e0f8053c9e49bf8106bd24979f0942670952f6f71867436a"
	Jul 17 00:30:54 addons-384227 kubelet[1279]: E0717 00:30:54.584467    1279 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bd028fb46223e73e0f8053c9e49bf8106bd24979f0942670952f6f71867436a\": container with ID starting with 6bd028fb46223e73e0f8053c9e49bf8106bd24979f0942670952f6f71867436a not found: ID does not exist" containerID="6bd028fb46223e73e0f8053c9e49bf8106bd24979f0942670952f6f71867436a"
	Jul 17 00:30:54 addons-384227 kubelet[1279]: I0717 00:30:54.584494    1279 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bd028fb46223e73e0f8053c9e49bf8106bd24979f0942670952f6f71867436a"} err="failed to get container status \"6bd028fb46223e73e0f8053c9e49bf8106bd24979f0942670952f6f71867436a\": rpc error: code = NotFound desc = could not find container \"6bd028fb46223e73e0f8053c9e49bf8106bd24979f0942670952f6f71867436a\": container with ID starting with 6bd028fb46223e73e0f8053c9e49bf8106bd24979f0942670952f6f71867436a not found: ID does not exist"
	Jul 17 00:30:54 addons-384227 kubelet[1279]: I0717 00:30:54.631917    1279 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rg2cz\" (UniqueName: \"kubernetes.io/projected/959e53f2-7e3f-452f-b7ce-9f9134926b56-kube-api-access-rg2cz\") on node \"addons-384227\" DevicePath \"\""
	Jul 17 00:30:54 addons-384227 kubelet[1279]: I0717 00:30:54.934157    1279 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="959e53f2-7e3f-452f-b7ce-9f9134926b56" path="/var/lib/kubelet/pods/959e53f2-7e3f-452f-b7ce-9f9134926b56/volumes"
	Jul 17 00:30:56 addons-384227 kubelet[1279]: I0717 00:30:56.926928    1279 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42e450b6-2b97-4692-b260-2e32356e153e" path="/var/lib/kubelet/pods/42e450b6-2b97-4692-b260-2e32356e153e/volumes"
	Jul 17 00:30:56 addons-384227 kubelet[1279]: I0717 00:30:56.927407    1279 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87213e86-dd23-4707-8d1c-d7bbb58262b9" path="/var/lib/kubelet/pods/87213e86-dd23-4707-8d1c-d7bbb58262b9/volumes"
	Jul 17 00:30:57 addons-384227 kubelet[1279]: I0717 00:30:57.586698    1279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-7dd2l" podStartSLOduration=1.16304131 podStartE2EDuration="4.586681447s" podCreationTimestamp="2024-07-17 00:30:53 +0000 UTC" firstStartedPulling="2024-07-17 00:30:53.963308537 +0000 UTC m=+347.204655405" lastFinishedPulling="2024-07-17 00:30:57.386948681 +0000 UTC m=+350.628295542" observedRunningTime="2024-07-17 00:30:57.586567055 +0000 UTC m=+350.827913930" watchObservedRunningTime="2024-07-17 00:30:57.586681447 +0000 UTC m=+350.828028322"
	Jul 17 00:30:58 addons-384227 kubelet[1279]: I0717 00:30:58.459349    1279 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxpzl\" (UniqueName: \"kubernetes.io/projected/83f8bd7c-5ae3-4764-bfa9-01d0150117a8-kube-api-access-hxpzl\") pod \"83f8bd7c-5ae3-4764-bfa9-01d0150117a8\" (UID: \"83f8bd7c-5ae3-4764-bfa9-01d0150117a8\") "
	Jul 17 00:30:58 addons-384227 kubelet[1279]: I0717 00:30:58.459391    1279 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/83f8bd7c-5ae3-4764-bfa9-01d0150117a8-webhook-cert\") pod \"83f8bd7c-5ae3-4764-bfa9-01d0150117a8\" (UID: \"83f8bd7c-5ae3-4764-bfa9-01d0150117a8\") "
	Jul 17 00:30:58 addons-384227 kubelet[1279]: I0717 00:30:58.461723    1279 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83f8bd7c-5ae3-4764-bfa9-01d0150117a8-kube-api-access-hxpzl" (OuterVolumeSpecName: "kube-api-access-hxpzl") pod "83f8bd7c-5ae3-4764-bfa9-01d0150117a8" (UID: "83f8bd7c-5ae3-4764-bfa9-01d0150117a8"). InnerVolumeSpecName "kube-api-access-hxpzl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 00:30:58 addons-384227 kubelet[1279]: I0717 00:30:58.468188    1279 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83f8bd7c-5ae3-4764-bfa9-01d0150117a8-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "83f8bd7c-5ae3-4764-bfa9-01d0150117a8" (UID: "83f8bd7c-5ae3-4764-bfa9-01d0150117a8"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 00:30:58 addons-384227 kubelet[1279]: I0717 00:30:58.559588    1279 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/83f8bd7c-5ae3-4764-bfa9-01d0150117a8-webhook-cert\") on node \"addons-384227\" DevicePath \"\""
	Jul 17 00:30:58 addons-384227 kubelet[1279]: I0717 00:30:58.559661    1279 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hxpzl\" (UniqueName: \"kubernetes.io/projected/83f8bd7c-5ae3-4764-bfa9-01d0150117a8-kube-api-access-hxpzl\") on node \"addons-384227\" DevicePath \"\""
	Jul 17 00:30:58 addons-384227 kubelet[1279]: I0717 00:30:58.580347    1279 scope.go:117] "RemoveContainer" containerID="f93b3a182dd8c1d5b125252b11068c72c32a2e626f5f86df4dcdcc20bd5c709a"
	Jul 17 00:30:58 addons-384227 kubelet[1279]: I0717 00:30:58.601585    1279 scope.go:117] "RemoveContainer" containerID="f93b3a182dd8c1d5b125252b11068c72c32a2e626f5f86df4dcdcc20bd5c709a"
	Jul 17 00:30:58 addons-384227 kubelet[1279]: E0717 00:30:58.602102    1279 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f93b3a182dd8c1d5b125252b11068c72c32a2e626f5f86df4dcdcc20bd5c709a\": container with ID starting with f93b3a182dd8c1d5b125252b11068c72c32a2e626f5f86df4dcdcc20bd5c709a not found: ID does not exist" containerID="f93b3a182dd8c1d5b125252b11068c72c32a2e626f5f86df4dcdcc20bd5c709a"
	Jul 17 00:30:58 addons-384227 kubelet[1279]: I0717 00:30:58.602130    1279 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f93b3a182dd8c1d5b125252b11068c72c32a2e626f5f86df4dcdcc20bd5c709a"} err="failed to get container status \"f93b3a182dd8c1d5b125252b11068c72c32a2e626f5f86df4dcdcc20bd5c709a\": rpc error: code = NotFound desc = could not find container \"f93b3a182dd8c1d5b125252b11068c72c32a2e626f5f86df4dcdcc20bd5c709a\": container with ID starting with f93b3a182dd8c1d5b125252b11068c72c32a2e626f5f86df4dcdcc20bd5c709a not found: ID does not exist"
	Jul 17 00:30:58 addons-384227 kubelet[1279]: I0717 00:30:58.926579    1279 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83f8bd7c-5ae3-4764-bfa9-01d0150117a8" path="/var/lib/kubelet/pods/83f8bd7c-5ae3-4764-bfa9-01d0150117a8/volumes"
	
	
	==> storage-provisioner [6f11103cc7df3635c1fbf9fb47b6f3a51eea6db7e78b2157fc67acfc2ea48721] <==
	I0717 00:25:28.523766       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 00:25:28.613560       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 00:25:28.613636       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 00:25:28.648755       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 00:25:28.648899       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-384227_6206cfad-ba50-4cc5-8cd6-74e5502921b1!
	I0717 00:25:28.648960       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ff60ba14-39d3-4c95-a7ca-43d56f323290", APIVersion:"v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-384227_6206cfad-ba50-4cc5-8cd6-74e5502921b1 became leader
	I0717 00:25:28.957354       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-384227_6206cfad-ba50-4cc5-8cd6-74e5502921b1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-384227 -n addons-384227
helpers_test.go:261: (dbg) Run:  kubectl --context addons-384227 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.82s)

                                                
                                    
TestAddons/parallel/MetricsServer (314.66s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.669638ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-ptnnk" [3c732a54-ac1f-4d2b-8090-29a97aac2ca5] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004695458s
addons_test.go:417: (dbg) Run:  kubectl --context addons-384227 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-384227 top pods -n kube-system: exit status 1 (65.214128ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-bpp2w, age: 3m13.41312343s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-384227 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-384227 top pods -n kube-system: exit status 1 (111.623456ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-bpp2w, age: 3m16.807566152s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-384227 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-384227 top pods -n kube-system: exit status 1 (61.500275ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-bpp2w, age: 3m19.449354615s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-384227 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-384227 top pods -n kube-system: exit status 1 (69.012275ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-bpp2w, age: 3m25.168879751s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-384227 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-384227 top pods -n kube-system: exit status 1 (63.584049ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-bpp2w, age: 3m35.551362849s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-384227 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-384227 top pods -n kube-system: exit status 1 (61.292692ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-bpp2w, age: 3m57.978382044s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-384227 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-384227 top pods -n kube-system: exit status 1 (63.954844ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-bpp2w, age: 4m17.603771835s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-384227 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-384227 top pods -n kube-system: exit status 1 (62.306567ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-bpp2w, age: 4m57.352583627s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-384227 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-384227 top pods -n kube-system: exit status 1 (61.724475ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-bpp2w, age: 6m11.160188828s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-384227 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-384227 top pods -n kube-system: exit status 1 (60.384626ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-bpp2w, age: 7m21.20313933s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-384227 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-384227 top pods -n kube-system: exit status 1 (61.298667ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-bpp2w, age: 8m19.130713087s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-384227 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-384227 -n addons-384227
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-384227 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-384227 logs -n 25: (1.474455134s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-703106                                                                     | download-only-703106 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	| delete  | -p download-only-962960                                                                     | download-only-962960 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	| delete  | -p download-only-030322                                                                     | download-only-030322 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	| delete  | -p download-only-703106                                                                     | download-only-703106 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-874768 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC |                     |
	|         | binary-mirror-874768                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:33007                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-874768                                                                     | binary-mirror-874768 | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:24 UTC |
	| addons  | disable dashboard -p                                                                        | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC |                     |
	|         | addons-384227                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC |                     |
	|         | addons-384227                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-384227 --wait=true                                                                | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:24 UTC | 17 Jul 24 00:27 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:28 UTC | 17 Jul 24 00:28 UTC |
	|         | -p addons-384227                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:28 UTC | 17 Jul 24 00:28 UTC |
	|         | addons-384227                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:28 UTC | 17 Jul 24 00:28 UTC |
	|         | -p addons-384227                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-384227 ip                                                                            | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:28 UTC | 17 Jul 24 00:28 UTC |
	| addons  | addons-384227 addons disable                                                                | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:28 UTC | 17 Jul 24 00:28 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-384227 addons disable                                                                | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:28 UTC | 17 Jul 24 00:28 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-384227 ssh cat                                                                       | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:28 UTC | 17 Jul 24 00:28 UTC |
	|         | /opt/local-path-provisioner/pvc-d8a1bc13-63c9-4ac2-b2eb-d06e01a50e0a_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-384227 addons disable                                                                | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:28 UTC | 17 Jul 24 00:28 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:28 UTC | 17 Jul 24 00:28 UTC |
	|         | addons-384227                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-384227 ssh curl -s                                                                   | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:28 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-384227 addons                                                                        | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:29 UTC | 17 Jul 24 00:29 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-384227 addons                                                                        | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:29 UTC | 17 Jul 24 00:29 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-384227 ip                                                                            | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:30 UTC | 17 Jul 24 00:30 UTC |
	| addons  | addons-384227 addons disable                                                                | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:30 UTC | 17 Jul 24 00:30 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-384227 addons disable                                                                | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:30 UTC | 17 Jul 24 00:31 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-384227 addons                                                                        | addons-384227        | jenkins | v1.33.1 | 17 Jul 24 00:33 UTC | 17 Jul 24 00:33 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:24:27
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:24:27.074484   13048 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:24:27.074623   13048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:24:27.074633   13048 out.go:304] Setting ErrFile to fd 2...
	I0717 00:24:27.074637   13048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:24:27.074794   13048 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 00:24:27.075353   13048 out.go:298] Setting JSON to false
	I0717 00:24:27.076131   13048 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":409,"bootTime":1721175458,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:24:27.076184   13048 start.go:139] virtualization: kvm guest
	I0717 00:24:27.078476   13048 out.go:177] * [addons-384227] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 00:24:27.080506   13048 notify.go:220] Checking for updates...
	I0717 00:24:27.080528   13048 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 00:24:27.082078   13048 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:24:27.083578   13048 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 00:24:27.085073   13048 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 00:24:27.086486   13048 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 00:24:27.087949   13048 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:24:27.089576   13048 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:24:27.121502   13048 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 00:24:27.123031   13048 start.go:297] selected driver: kvm2
	I0717 00:24:27.123054   13048 start.go:901] validating driver "kvm2" against <nil>
	I0717 00:24:27.123065   13048 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:24:27.123715   13048 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:24:27.123790   13048 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19264-3908/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 00:24:27.138046   13048 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 00:24:27.138086   13048 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 00:24:27.138285   13048 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:24:27.138339   13048 cni.go:84] Creating CNI manager for ""
	I0717 00:24:27.138350   13048 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 00:24:27.138361   13048 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 00:24:27.138405   13048 start.go:340] cluster config:
	{Name:addons-384227 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-384227 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
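	For orientation, the exact CLI invocation is not recorded in this log, but a cluster config like the one above would typically come from a start command along these lines (a sketch only; the flag set is illustrative, and the binary path is the MINIKUBE_BIN value shown earlier):
	
	    # illustrative reconstruction, not the literal command the harness ran
	    out/minikube-linux-amd64 start -p addons-384227 \
	      --driver=kvm2 \
	      --container-runtime=crio \
	      --memory=4000 --cpus=2 --disk-size=20000mb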
	I0717 00:24:27.138505   13048 iso.go:125] acquiring lock: {Name:mk1e382ea32906eee794c9b90164432d2562b456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:24:27.140516   13048 out.go:177] * Starting "addons-384227" primary control-plane node in "addons-384227" cluster
	I0717 00:24:27.142091   13048 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:24:27.142136   13048 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 00:24:27.142147   13048 cache.go:56] Caching tarball of preloaded images
	I0717 00:24:27.142214   13048 preload.go:172] Found /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 00:24:27.142224   13048 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:24:27.142525   13048 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/config.json ...
	I0717 00:24:27.142547   13048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/config.json: {Name:mk37e22c86742f6eea9622c68c2e24dce23ebd10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:27.142718   13048 start.go:360] acquireMachinesLock for addons-384227: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 00:24:27.142759   13048 start.go:364] duration metric: took 27.969µs to acquireMachinesLock for "addons-384227"
	I0717 00:24:27.142775   13048 start.go:93] Provisioning new machine with config: &{Name:addons-384227 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-384227 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:24:27.142828   13048 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 00:24:27.144468   13048 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0717 00:24:27.144597   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:24:27.144630   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:24:27.158336   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41541
	I0717 00:24:27.158874   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:24:27.159430   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:24:27.159453   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:24:27.159862   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:24:27.160038   13048 main.go:141] libmachine: (addons-384227) Calling .GetMachineName
	I0717 00:24:27.160218   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:24:27.160358   13048 start.go:159] libmachine.API.Create for "addons-384227" (driver="kvm2")
	I0717 00:24:27.160387   13048 client.go:168] LocalClient.Create starting
	I0717 00:24:27.160436   13048 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem
	I0717 00:24:27.275918   13048 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem
	I0717 00:24:27.438418   13048 main.go:141] libmachine: Running pre-create checks...
	I0717 00:24:27.438439   13048 main.go:141] libmachine: (addons-384227) Calling .PreCreateCheck
	I0717 00:24:27.438994   13048 main.go:141] libmachine: (addons-384227) Calling .GetConfigRaw
	I0717 00:24:27.439408   13048 main.go:141] libmachine: Creating machine...
	I0717 00:24:27.439423   13048 main.go:141] libmachine: (addons-384227) Calling .Create
	I0717 00:24:27.439597   13048 main.go:141] libmachine: (addons-384227) Creating KVM machine...
	I0717 00:24:27.440792   13048 main.go:141] libmachine: (addons-384227) DBG | found existing default KVM network
	I0717 00:24:27.441584   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:27.441454   13070 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002091f0}
	I0717 00:24:27.441626   13048 main.go:141] libmachine: (addons-384227) DBG | created network xml: 
	I0717 00:24:27.441648   13048 main.go:141] libmachine: (addons-384227) DBG | <network>
	I0717 00:24:27.441656   13048 main.go:141] libmachine: (addons-384227) DBG |   <name>mk-addons-384227</name>
	I0717 00:24:27.441687   13048 main.go:141] libmachine: (addons-384227) DBG |   <dns enable='no'/>
	I0717 00:24:27.441701   13048 main.go:141] libmachine: (addons-384227) DBG |   
	I0717 00:24:27.441710   13048 main.go:141] libmachine: (addons-384227) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0717 00:24:27.441719   13048 main.go:141] libmachine: (addons-384227) DBG |     <dhcp>
	I0717 00:24:27.441728   13048 main.go:141] libmachine: (addons-384227) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0717 00:24:27.441734   13048 main.go:141] libmachine: (addons-384227) DBG |     </dhcp>
	I0717 00:24:27.441749   13048 main.go:141] libmachine: (addons-384227) DBG |   </ip>
	I0717 00:24:27.441760   13048 main.go:141] libmachine: (addons-384227) DBG |   
	I0717 00:24:27.441770   13048 main.go:141] libmachine: (addons-384227) DBG | </network>
	I0717 00:24:27.441782   13048 main.go:141] libmachine: (addons-384227) DBG | 
	I0717 00:24:27.446926   13048 main.go:141] libmachine: (addons-384227) DBG | trying to create private KVM network mk-addons-384227 192.168.39.0/24...
	I0717 00:24:27.509857   13048 main.go:141] libmachine: (addons-384227) DBG | private KVM network mk-addons-384227 192.168.39.0/24 created
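	For reference, the private network minikube just created can also be set up by hand with standard libvirt tooling. A minimal sketch, assuming the <network> XML printed in the DBG lines above is saved to a file named mk-addons-384227.xml (the file name is illustrative):
	
	    virsh net-define mk-addons-384227.xml   # register the network definition
	    virsh net-start mk-addons-384227        # bring up the bridge and DHCP range
	    virsh net-list --all                    # confirm the network is listed as active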
	I0717 00:24:27.509905   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:27.509829   13070 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 00:24:27.509933   13048 main.go:141] libmachine: (addons-384227) Setting up store path in /home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227 ...
	I0717 00:24:27.509967   13048 main.go:141] libmachine: (addons-384227) Building disk image from file:///home/jenkins/minikube-integration/19264-3908/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 00:24:27.509986   13048 main.go:141] libmachine: (addons-384227) Downloading /home/jenkins/minikube-integration/19264-3908/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19264-3908/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 00:24:27.749828   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:27.749712   13070 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa...
	I0717 00:24:27.995097   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:27.994934   13070 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/addons-384227.rawdisk...
	I0717 00:24:27.995131   13048 main.go:141] libmachine: (addons-384227) DBG | Writing magic tar header
	I0717 00:24:27.995146   13048 main.go:141] libmachine: (addons-384227) DBG | Writing SSH key tar header
	I0717 00:24:27.995160   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:27.995041   13070 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227 ...
	I0717 00:24:27.995173   13048 main.go:141] libmachine: (addons-384227) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227 (perms=drwx------)
	I0717 00:24:27.995192   13048 main.go:141] libmachine: (addons-384227) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227
	I0717 00:24:27.995204   13048 main.go:141] libmachine: (addons-384227) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube/machines
	I0717 00:24:27.995213   13048 main.go:141] libmachine: (addons-384227) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 00:24:27.995223   13048 main.go:141] libmachine: (addons-384227) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908
	I0717 00:24:27.995232   13048 main.go:141] libmachine: (addons-384227) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 00:24:27.995243   13048 main.go:141] libmachine: (addons-384227) DBG | Checking permissions on dir: /home/jenkins
	I0717 00:24:27.995254   13048 main.go:141] libmachine: (addons-384227) DBG | Checking permissions on dir: /home
	I0717 00:24:27.995265   13048 main.go:141] libmachine: (addons-384227) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube/machines (perms=drwxr-xr-x)
	I0717 00:24:27.995280   13048 main.go:141] libmachine: (addons-384227) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube (perms=drwxr-xr-x)
	I0717 00:24:27.995289   13048 main.go:141] libmachine: (addons-384227) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908 (perms=drwxrwxr-x)
	I0717 00:24:27.995297   13048 main.go:141] libmachine: (addons-384227) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 00:24:27.995304   13048 main.go:141] libmachine: (addons-384227) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 00:24:27.995311   13048 main.go:141] libmachine: (addons-384227) Creating domain...
	I0717 00:24:27.995319   13048 main.go:141] libmachine: (addons-384227) DBG | Skipping /home - not owner
	I0717 00:24:27.996216   13048 main.go:141] libmachine: (addons-384227) define libvirt domain using xml: 
	I0717 00:24:27.996240   13048 main.go:141] libmachine: (addons-384227) <domain type='kvm'>
	I0717 00:24:27.996250   13048 main.go:141] libmachine: (addons-384227)   <name>addons-384227</name>
	I0717 00:24:27.996255   13048 main.go:141] libmachine: (addons-384227)   <memory unit='MiB'>4000</memory>
	I0717 00:24:27.996260   13048 main.go:141] libmachine: (addons-384227)   <vcpu>2</vcpu>
	I0717 00:24:27.996271   13048 main.go:141] libmachine: (addons-384227)   <features>
	I0717 00:24:27.996279   13048 main.go:141] libmachine: (addons-384227)     <acpi/>
	I0717 00:24:27.996283   13048 main.go:141] libmachine: (addons-384227)     <apic/>
	I0717 00:24:27.996289   13048 main.go:141] libmachine: (addons-384227)     <pae/>
	I0717 00:24:27.996293   13048 main.go:141] libmachine: (addons-384227)     
	I0717 00:24:27.996298   13048 main.go:141] libmachine: (addons-384227)   </features>
	I0717 00:24:27.996303   13048 main.go:141] libmachine: (addons-384227)   <cpu mode='host-passthrough'>
	I0717 00:24:27.996308   13048 main.go:141] libmachine: (addons-384227)   
	I0717 00:24:27.996317   13048 main.go:141] libmachine: (addons-384227)   </cpu>
	I0717 00:24:27.996347   13048 main.go:141] libmachine: (addons-384227)   <os>
	I0717 00:24:27.996372   13048 main.go:141] libmachine: (addons-384227)     <type>hvm</type>
	I0717 00:24:27.996383   13048 main.go:141] libmachine: (addons-384227)     <boot dev='cdrom'/>
	I0717 00:24:27.996395   13048 main.go:141] libmachine: (addons-384227)     <boot dev='hd'/>
	I0717 00:24:27.996406   13048 main.go:141] libmachine: (addons-384227)     <bootmenu enable='no'/>
	I0717 00:24:27.996416   13048 main.go:141] libmachine: (addons-384227)   </os>
	I0717 00:24:27.996428   13048 main.go:141] libmachine: (addons-384227)   <devices>
	I0717 00:24:27.996438   13048 main.go:141] libmachine: (addons-384227)     <disk type='file' device='cdrom'>
	I0717 00:24:27.996492   13048 main.go:141] libmachine: (addons-384227)       <source file='/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/boot2docker.iso'/>
	I0717 00:24:27.996512   13048 main.go:141] libmachine: (addons-384227)       <target dev='hdc' bus='scsi'/>
	I0717 00:24:27.996518   13048 main.go:141] libmachine: (addons-384227)       <readonly/>
	I0717 00:24:27.996523   13048 main.go:141] libmachine: (addons-384227)     </disk>
	I0717 00:24:27.996534   13048 main.go:141] libmachine: (addons-384227)     <disk type='file' device='disk'>
	I0717 00:24:27.996542   13048 main.go:141] libmachine: (addons-384227)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 00:24:27.996549   13048 main.go:141] libmachine: (addons-384227)       <source file='/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/addons-384227.rawdisk'/>
	I0717 00:24:27.996559   13048 main.go:141] libmachine: (addons-384227)       <target dev='hda' bus='virtio'/>
	I0717 00:24:27.996564   13048 main.go:141] libmachine: (addons-384227)     </disk>
	I0717 00:24:27.996571   13048 main.go:141] libmachine: (addons-384227)     <interface type='network'>
	I0717 00:24:27.996577   13048 main.go:141] libmachine: (addons-384227)       <source network='mk-addons-384227'/>
	I0717 00:24:27.996583   13048 main.go:141] libmachine: (addons-384227)       <model type='virtio'/>
	I0717 00:24:27.996588   13048 main.go:141] libmachine: (addons-384227)     </interface>
	I0717 00:24:27.996597   13048 main.go:141] libmachine: (addons-384227)     <interface type='network'>
	I0717 00:24:27.996622   13048 main.go:141] libmachine: (addons-384227)       <source network='default'/>
	I0717 00:24:27.996638   13048 main.go:141] libmachine: (addons-384227)       <model type='virtio'/>
	I0717 00:24:27.996646   13048 main.go:141] libmachine: (addons-384227)     </interface>
	I0717 00:24:27.996651   13048 main.go:141] libmachine: (addons-384227)     <serial type='pty'>
	I0717 00:24:27.996670   13048 main.go:141] libmachine: (addons-384227)       <target port='0'/>
	I0717 00:24:27.996677   13048 main.go:141] libmachine: (addons-384227)     </serial>
	I0717 00:24:27.996683   13048 main.go:141] libmachine: (addons-384227)     <console type='pty'>
	I0717 00:24:27.996690   13048 main.go:141] libmachine: (addons-384227)       <target type='serial' port='0'/>
	I0717 00:24:27.996694   13048 main.go:141] libmachine: (addons-384227)     </console>
	I0717 00:24:27.996699   13048 main.go:141] libmachine: (addons-384227)     <rng model='virtio'>
	I0717 00:24:27.996705   13048 main.go:141] libmachine: (addons-384227)       <backend model='random'>/dev/random</backend>
	I0717 00:24:27.996714   13048 main.go:141] libmachine: (addons-384227)     </rng>
	I0717 00:24:27.996719   13048 main.go:141] libmachine: (addons-384227)     
	I0717 00:24:27.996728   13048 main.go:141] libmachine: (addons-384227)     
	I0717 00:24:27.996733   13048 main.go:141] libmachine: (addons-384227)   </devices>
	I0717 00:24:27.996742   13048 main.go:141] libmachine: (addons-384227) </domain>
	I0717 00:24:27.996768   13048 main.go:141] libmachine: (addons-384227) 
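	The domain XML above is handed to libvirt through the libmachine plugin rather than the CLI, but the equivalent manual steps would look roughly like this (a sketch; assumes the XML is saved to addons-384227.xml, an illustrative file name):
	
	    virsh define addons-384227.xml   # create the persistent domain from the XML
	    virsh start addons-384227        # boot the VM
	    virsh dominfo addons-384227      # check state, memory and vCPU count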
	I0717 00:24:28.002420   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:b7:99:98 in network default
	I0717 00:24:28.003003   13048 main.go:141] libmachine: (addons-384227) Ensuring networks are active...
	I0717 00:24:28.003032   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:28.003657   13048 main.go:141] libmachine: (addons-384227) Ensuring network default is active
	I0717 00:24:28.004034   13048 main.go:141] libmachine: (addons-384227) Ensuring network mk-addons-384227 is active
	I0717 00:24:28.004417   13048 main.go:141] libmachine: (addons-384227) Getting domain xml...
	I0717 00:24:28.004980   13048 main.go:141] libmachine: (addons-384227) Creating domain...
	I0717 00:24:29.389775   13048 main.go:141] libmachine: (addons-384227) Waiting to get IP...
	I0717 00:24:29.390561   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:29.391065   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find current IP address of domain addons-384227 in network mk-addons-384227
	I0717 00:24:29.391118   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:29.391050   13070 retry.go:31] will retry after 246.233745ms: waiting for machine to come up
	I0717 00:24:29.638672   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:29.639131   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find current IP address of domain addons-384227 in network mk-addons-384227
	I0717 00:24:29.639158   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:29.639093   13070 retry.go:31] will retry after 350.230795ms: waiting for machine to come up
	I0717 00:24:29.990458   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:29.991013   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find current IP address of domain addons-384227 in network mk-addons-384227
	I0717 00:24:29.991042   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:29.990942   13070 retry.go:31] will retry after 464.494549ms: waiting for machine to come up
	I0717 00:24:30.456415   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:30.456893   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find current IP address of domain addons-384227 in network mk-addons-384227
	I0717 00:24:30.456921   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:30.456845   13070 retry.go:31] will retry after 483.712506ms: waiting for machine to come up
	I0717 00:24:30.942564   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:30.942961   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find current IP address of domain addons-384227 in network mk-addons-384227
	I0717 00:24:30.942993   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:30.942937   13070 retry.go:31] will retry after 746.760134ms: waiting for machine to come up
	I0717 00:24:31.691082   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:31.691522   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find current IP address of domain addons-384227 in network mk-addons-384227
	I0717 00:24:31.691551   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:31.691473   13070 retry.go:31] will retry after 656.464877ms: waiting for machine to come up
	I0717 00:24:32.349740   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:32.350212   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find current IP address of domain addons-384227 in network mk-addons-384227
	I0717 00:24:32.350238   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:32.350143   13070 retry.go:31] will retry after 719.273391ms: waiting for machine to come up
	I0717 00:24:33.070976   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:33.071423   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find current IP address of domain addons-384227 in network mk-addons-384227
	I0717 00:24:33.071445   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:33.071382   13070 retry.go:31] will retry after 1.002819649s: waiting for machine to come up
	I0717 00:24:34.075655   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:34.076036   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find current IP address of domain addons-384227 in network mk-addons-384227
	I0717 00:24:34.076077   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:34.076003   13070 retry.go:31] will retry after 1.361490363s: waiting for machine to come up
	I0717 00:24:35.439381   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:35.439871   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find current IP address of domain addons-384227 in network mk-addons-384227
	I0717 00:24:35.439892   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:35.439830   13070 retry.go:31] will retry after 1.488511708s: waiting for machine to come up
	I0717 00:24:36.930494   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:36.930990   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find current IP address of domain addons-384227 in network mk-addons-384227
	I0717 00:24:36.931019   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:36.930923   13070 retry.go:31] will retry after 2.689620809s: waiting for machine to come up
	I0717 00:24:39.623559   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:39.624033   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find current IP address of domain addons-384227 in network mk-addons-384227
	I0717 00:24:39.624062   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:39.623976   13070 retry.go:31] will retry after 3.048939201s: waiting for machine to come up
	I0717 00:24:42.674622   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:42.675028   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find current IP address of domain addons-384227 in network mk-addons-384227
	I0717 00:24:42.675052   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:42.674938   13070 retry.go:31] will retry after 3.06125912s: waiting for machine to come up
	I0717 00:24:45.739956   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:45.740374   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find current IP address of domain addons-384227 in network mk-addons-384227
	I0717 00:24:45.740395   13048 main.go:141] libmachine: (addons-384227) DBG | I0717 00:24:45.740329   13070 retry.go:31] will retry after 3.704664568s: waiting for machine to come up
	I0717 00:24:49.447678   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:49.448178   13048 main.go:141] libmachine: (addons-384227) Found IP for machine: 192.168.39.177
	I0717 00:24:49.448194   13048 main.go:141] libmachine: (addons-384227) Reserving static IP address...
	I0717 00:24:49.448202   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has current primary IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:49.448622   13048 main.go:141] libmachine: (addons-384227) DBG | unable to find host DHCP lease matching {name: "addons-384227", mac: "52:54:00:88:64:cd", ip: "192.168.39.177"} in network mk-addons-384227
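	The retry loop above simply polls, with growing backoff, until the guest picks up a DHCP lease on the private network. Outside of minikube the same wait can be approximated with virsh; a rough sketch (domain name taken from the log, attempt count and sleep interval are illustrative):
	
	    # poll the lease table until the domain reports an IPv4 address
	    for i in $(seq 1 60); do
	      ip=$(virsh domifaddr addons-384227 --source lease 2>/dev/null \
	             | awk '/ipv4/ {print $4}' | cut -d/ -f1)
	      [ -n "$ip" ] && echo "got IP: $ip" && break
	      sleep 2
	    done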
	I0717 00:24:49.519355   13048 main.go:141] libmachine: (addons-384227) DBG | Getting to WaitForSSH function...
	I0717 00:24:49.519436   13048 main.go:141] libmachine: (addons-384227) Reserved static IP address: 192.168.39.177
	I0717 00:24:49.519487   13048 main.go:141] libmachine: (addons-384227) Waiting for SSH to be available...
	I0717 00:24:49.521718   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:49.522182   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:minikube Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:49.522213   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:49.522309   13048 main.go:141] libmachine: (addons-384227) DBG | Using SSH client type: external
	I0717 00:24:49.522335   13048 main.go:141] libmachine: (addons-384227) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa (-rw-------)
	I0717 00:24:49.522379   13048 main.go:141] libmachine: (addons-384227) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.177 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 00:24:49.522400   13048 main.go:141] libmachine: (addons-384227) DBG | About to run SSH command:
	I0717 00:24:49.522411   13048 main.go:141] libmachine: (addons-384227) DBG | exit 0
	I0717 00:24:49.650725   13048 main.go:141] libmachine: (addons-384227) DBG | SSH cmd err, output: <nil>: 
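	The empty output above means the `exit 0` probe succeeded, which is how libmachine decides SSH is reachable. The same check can be rerun by hand with the key path and options shown in the DBG line (sketch only):
	
	    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	        -o ConnectTimeout=10 -o IdentitiesOnly=yes \
	        -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa \
	        docker@192.168.39.177 'exit 0' && echo "SSH is up"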
	I0717 00:24:49.650974   13048 main.go:141] libmachine: (addons-384227) KVM machine creation complete!
	I0717 00:24:49.651284   13048 main.go:141] libmachine: (addons-384227) Calling .GetConfigRaw
	I0717 00:24:49.651805   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:24:49.651997   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:24:49.652159   13048 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 00:24:49.652174   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:24:49.653307   13048 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 00:24:49.653321   13048 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 00:24:49.653326   13048 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 00:24:49.653331   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:24:49.655423   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:49.655740   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:49.655777   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:49.655869   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:24:49.656033   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:49.656178   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:49.656284   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:24:49.656443   13048 main.go:141] libmachine: Using SSH client type: native
	I0717 00:24:49.656628   13048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0717 00:24:49.656639   13048 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 00:24:49.753698   13048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:24:49.753716   13048 main.go:141] libmachine: Detecting the provisioner...
	I0717 00:24:49.753724   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:24:49.756257   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:49.756672   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:49.756695   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:49.756868   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:24:49.757058   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:49.757212   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:49.757326   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:24:49.757527   13048 main.go:141] libmachine: Using SSH client type: native
	I0717 00:24:49.757691   13048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0717 00:24:49.757700   13048 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 00:24:49.859259   13048 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 00:24:49.859361   13048 main.go:141] libmachine: found compatible host: buildroot
	I0717 00:24:49.859379   13048 main.go:141] libmachine: Provisioning with buildroot...
	I0717 00:24:49.859391   13048 main.go:141] libmachine: (addons-384227) Calling .GetMachineName
	I0717 00:24:49.859626   13048 buildroot.go:166] provisioning hostname "addons-384227"
	I0717 00:24:49.859651   13048 main.go:141] libmachine: (addons-384227) Calling .GetMachineName
	I0717 00:24:49.859802   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:24:49.862892   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:49.863299   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:49.863323   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:49.863481   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:24:49.863672   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:49.863801   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:49.863922   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:24:49.864083   13048 main.go:141] libmachine: Using SSH client type: native
	I0717 00:24:49.864301   13048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0717 00:24:49.864319   13048 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-384227 && echo "addons-384227" | sudo tee /etc/hostname
	I0717 00:24:49.976533   13048 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-384227
	
	I0717 00:24:49.976554   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:24:49.979356   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:49.979659   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:49.979685   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:49.979838   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:24:49.980027   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:49.980210   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:49.980319   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:24:49.980478   13048 main.go:141] libmachine: Using SSH client type: native
	I0717 00:24:49.980626   13048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0717 00:24:49.980641   13048 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-384227' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-384227/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-384227' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 00:24:50.087029   13048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
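	A quick way to confirm what the two provisioning commands above changed on the guest (sketch; run over the same SSH session):
	
	    hostname                            # should print addons-384227
	    grep -n 'addons-384227' /etc/hosts  # should show the 127.0.1.1 mapping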
	I0717 00:24:50.087062   13048 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 00:24:50.087081   13048 buildroot.go:174] setting up certificates
	I0717 00:24:50.087101   13048 provision.go:84] configureAuth start
	I0717 00:24:50.087112   13048 main.go:141] libmachine: (addons-384227) Calling .GetMachineName
	I0717 00:24:50.087355   13048 main.go:141] libmachine: (addons-384227) Calling .GetIP
	I0717 00:24:50.090271   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.090619   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:50.090645   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.090775   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:24:50.092710   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.093092   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:50.093120   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.093222   13048 provision.go:143] copyHostCerts
	I0717 00:24:50.093306   13048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 00:24:50.093444   13048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 00:24:50.093512   13048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 00:24:50.093569   13048 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.addons-384227 san=[127.0.0.1 192.168.39.177 addons-384227 localhost minikube]
	I0717 00:24:50.245507   13048 provision.go:177] copyRemoteCerts
	I0717 00:24:50.245576   13048 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:24:50.245604   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:24:50.248299   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.248595   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:50.248618   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.248802   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:24:50.248980   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:50.249124   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:24:50.249255   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:24:50.329304   13048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 00:24:50.353218   13048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 00:24:50.375853   13048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
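	copyRemoteCerts above pushes the CA and server certificates into /etc/docker on the guest via the internal ssh_runner. Done by hand with plain scp and ssh it would look roughly like this (host, key and file paths taken from the log; staging through /tmp is an assumption, since writing to /etc/docker needs root):
	
	    KEY=/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa
	    MK=/home/jenkins/minikube-integration/19264-3908/.minikube
	    scp -o StrictHostKeyChecking=no -i "$KEY" \
	        "$MK/certs/ca.pem" "$MK/machines/server.pem" "$MK/machines/server-key.pem" \
	        docker@192.168.39.177:/tmp/
	    ssh -i "$KEY" docker@192.168.39.177 \
	        'sudo mkdir -p /etc/docker && sudo mv /tmp/ca.pem /tmp/server.pem /tmp/server-key.pem /etc/docker/'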
	I0717 00:24:50.398480   13048 provision.go:87] duration metric: took 311.36337ms to configureAuth
	I0717 00:24:50.398514   13048 buildroot.go:189] setting minikube options for container-runtime
	I0717 00:24:50.398719   13048 config.go:182] Loaded profile config "addons-384227": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:24:50.398799   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:24:50.401391   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.401699   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:50.401721   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.402060   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:24:50.402245   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:50.402435   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:50.402587   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:24:50.402737   13048 main.go:141] libmachine: Using SSH client type: native
	I0717 00:24:50.402890   13048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0717 00:24:50.402904   13048 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 00:24:50.657068   13048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
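	The `%!s(MISSING)` in the logged command looks like a Go format-verb artifact of the logger rather than what actually ran; the command output above shows the file content that ended up in place. Rendered out, the provisioning step is roughly:
	
	    sudo mkdir -p /etc/sysconfig && printf "%s" "
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio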
	
	I0717 00:24:50.657100   13048 main.go:141] libmachine: Checking connection to Docker...
	I0717 00:24:50.657110   13048 main.go:141] libmachine: (addons-384227) Calling .GetURL
	I0717 00:24:50.658487   13048 main.go:141] libmachine: (addons-384227) DBG | Using libvirt version 6000000
	I0717 00:24:50.660679   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.660935   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:50.660964   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.661150   13048 main.go:141] libmachine: Docker is up and running!
	I0717 00:24:50.661166   13048 main.go:141] libmachine: Reticulating splines...
	I0717 00:24:50.661172   13048 client.go:171] duration metric: took 23.500775223s to LocalClient.Create
	I0717 00:24:50.661194   13048 start.go:167] duration metric: took 23.500838094s to libmachine.API.Create "addons-384227"
	I0717 00:24:50.661212   13048 start.go:293] postStartSetup for "addons-384227" (driver="kvm2")
	I0717 00:24:50.661223   13048 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 00:24:50.661245   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:24:50.661478   13048 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 00:24:50.661500   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:24:50.663584   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.663952   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:50.663983   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.664123   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:24:50.664293   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:50.664440   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:24:50.664575   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:24:50.745266   13048 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 00:24:50.749501   13048 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 00:24:50.749526   13048 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 00:24:50.749591   13048 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 00:24:50.749612   13048 start.go:296] duration metric: took 88.394917ms for postStartSetup
	I0717 00:24:50.749641   13048 main.go:141] libmachine: (addons-384227) Calling .GetConfigRaw
	I0717 00:24:50.750313   13048 main.go:141] libmachine: (addons-384227) Calling .GetIP
	I0717 00:24:50.752448   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.752927   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:50.752954   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.753237   13048 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/config.json ...
	I0717 00:24:50.753417   13048 start.go:128] duration metric: took 23.610580206s to createHost
	I0717 00:24:50.753438   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:24:50.755334   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.755581   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:50.755606   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.755731   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:24:50.755908   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:50.756053   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:50.756169   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:24:50.756303   13048 main.go:141] libmachine: Using SSH client type: native
	I0717 00:24:50.756507   13048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0717 00:24:50.756520   13048 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 00:24:50.855130   13048 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721175890.830052569
	
	I0717 00:24:50.855157   13048 fix.go:216] guest clock: 1721175890.830052569
	I0717 00:24:50.855164   13048 fix.go:229] Guest: 2024-07-17 00:24:50.830052569 +0000 UTC Remote: 2024-07-17 00:24:50.753429482 +0000 UTC m=+23.711520667 (delta=76.623087ms)
	I0717 00:24:50.855200   13048 fix.go:200] guest clock delta is within tolerance: 76.623087ms
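	The logged `date +%!s(MISSING).%!N(MISSING)` is the same logging artifact; the epoch value in the output indicates the effective command was `date +%s.%N`, which minikube compares against the host clock to compute the delta above. A standalone version of the same comparison (sketch; key path and address taken from the log, awk is used only for the floating-point subtraction):
	
	    KEY=/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa
	    guest=$(ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	              -i "$KEY" docker@192.168.39.177 'date +%s.%N')
	    host=$(date +%s.%N)
	    awk -v h="$host" -v g="$guest" 'BEGIN { printf "guest clock delta: %.6f s\n", h - g }'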
	I0717 00:24:50.855206   13048 start.go:83] releasing machines lock for "addons-384227", held for 23.71243843s
	I0717 00:24:50.855226   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:24:50.855470   13048 main.go:141] libmachine: (addons-384227) Calling .GetIP
	I0717 00:24:50.857887   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.858179   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:50.858203   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.858307   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:24:50.858804   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:24:50.858968   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:24:50.859055   13048 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 00:24:50.859100   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:24:50.859133   13048 ssh_runner.go:195] Run: cat /version.json
	I0717 00:24:50.859153   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:24:50.861628   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.861864   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.862042   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:50.862068   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.862196   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:50.862207   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:24:50.862219   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:50.862338   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:24:50.862421   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:50.862508   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:24:50.862595   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:24:50.862664   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:24:50.862743   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:24:50.862766   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:24:50.962290   13048 ssh_runner.go:195] Run: systemctl --version
	I0717 00:24:50.968296   13048 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 00:24:51.126060   13048 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 00:24:51.132798   13048 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 00:24:51.132862   13048 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:24:51.148988   13048 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 00:24:51.149013   13048 start.go:495] detecting cgroup driver to use...
	I0717 00:24:51.149072   13048 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 00:24:51.165585   13048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 00:24:51.178934   13048 docker.go:217] disabling cri-docker service (if available) ...
	I0717 00:24:51.179047   13048 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 00:24:51.193373   13048 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 00:24:51.207755   13048 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 00:24:51.325012   13048 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 00:24:51.477331   13048 docker.go:233] disabling docker service ...
	I0717 00:24:51.477390   13048 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 00:24:51.491571   13048 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 00:24:51.504024   13048 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 00:24:51.615884   13048 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 00:24:51.727827   13048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 00:24:51.741500   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 00:24:51.759822   13048 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 00:24:51.759883   13048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:24:51.769890   13048 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 00:24:51.769959   13048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:24:51.779866   13048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:24:51.789639   13048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:24:51.799615   13048 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 00:24:51.809757   13048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:24:51.819292   13048 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:24:51.836347   13048 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
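The sed edits above (pause image, cgroup manager, conmon cgroup, default sysctls) should leave the /etc/crio/crio.conf.d/02-crio.conf drop-in looking roughly like the fragment below; this is a sketch assuming an otherwise default drop-in, and the exact section layout can vary between CRI-O builds:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]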
	I0717 00:24:51.846423   13048 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 00:24:51.855588   13048 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 00:24:51.855639   13048 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 00:24:51.869161   13048 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 00:24:51.879221   13048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:24:52.004837   13048 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 00:24:52.145394   13048 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 00:24:52.145489   13048 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 00:24:52.150717   13048 start.go:563] Will wait 60s for crictl version
	I0717 00:24:52.150783   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:24:52.154425   13048 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 00:24:52.192719   13048 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 00:24:52.192911   13048 ssh_runner.go:195] Run: crio --version
	I0717 00:24:52.221078   13048 ssh_runner.go:195] Run: crio --version
	I0717 00:24:52.251518   13048 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 00:24:52.252872   13048 main.go:141] libmachine: (addons-384227) Calling .GetIP
	I0717 00:24:52.255559   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:52.255913   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:24:52.255944   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:24:52.256189   13048 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 00:24:52.260455   13048 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:24:52.273283   13048 kubeadm.go:883] updating cluster {Name:addons-384227 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:addons-384227 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 00:24:52.273390   13048 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:24:52.273430   13048 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:24:52.312412   13048 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 00:24:52.312475   13048 ssh_runner.go:195] Run: which lz4
	I0717 00:24:52.316511   13048 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 00:24:52.320888   13048 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 00:24:52.320913   13048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 00:24:53.634306   13048 crio.go:462] duration metric: took 1.317846548s to copy over tarball
	I0717 00:24:53.634376   13048 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 00:24:55.850447   13048 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.216037589s)
	I0717 00:24:55.850478   13048 crio.go:469] duration metric: took 2.216140314s to extract the tarball
	I0717 00:24:55.850486   13048 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 00:24:55.887433   13048 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:24:55.930501   13048 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 00:24:55.930529   13048 cache_images.go:84] Images are preloaded, skipping loading
	I0717 00:24:55.930538   13048 kubeadm.go:934] updating node { 192.168.39.177 8443 v1.30.2 crio true true} ...
	I0717 00:24:55.930658   13048 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-384227 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.177
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:addons-384227 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 00:24:55.930723   13048 ssh_runner.go:195] Run: crio config
	I0717 00:24:55.979197   13048 cni.go:84] Creating CNI manager for ""
	I0717 00:24:55.979216   13048 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 00:24:55.979225   13048 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 00:24:55.979246   13048 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.177 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-384227 NodeName:addons-384227 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.177"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.177 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 00:24:55.979393   13048 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.177
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-384227"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.177
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.177"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 00:24:55.979456   13048 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 00:24:55.989837   13048 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 00:24:55.989927   13048 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 00:24:55.999930   13048 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0717 00:24:56.016561   13048 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 00:24:56.033358   13048 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
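The 2157-byte kubeadm.yaml.new written here is the manifest dumped above (it is promoted to /var/tmp/minikube/kubeadm.yaml before init, further down). As a generic sanity check of such a config by hand, not a step minikube itself performs, a dry run against the same file exercises the config without bringing up the control plane:

	sudo /var/lib/minikube/binaries/v1.30.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run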
	I0717 00:24:56.051114   13048 ssh_runner.go:195] Run: grep 192.168.39.177	control-plane.minikube.internal$ /etc/hosts
	I0717 00:24:56.055034   13048 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.177	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:24:56.067791   13048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:24:56.174091   13048 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:24:56.190746   13048 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227 for IP: 192.168.39.177
	I0717 00:24:56.190775   13048 certs.go:194] generating shared ca certs ...
	I0717 00:24:56.190795   13048 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:56.190955   13048 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 00:24:56.326933   13048 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt ...
	I0717 00:24:56.326958   13048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt: {Name:mk258a46a5713f26153e605f2d884d6e7ef80003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:56.327105   13048 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key ...
	I0717 00:24:56.327116   13048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key: {Name:mk9083a7e0fe98917431b3190905867364dd8b60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:56.327182   13048 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 00:24:56.473376   13048 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt ...
	I0717 00:24:56.473416   13048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt: {Name:mka28ea6d0f65a1c140504565547138f6126280c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:56.473594   13048 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key ...
	I0717 00:24:56.473606   13048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key: {Name:mkddc4a44c93a52e6572635130020cbccf1d61b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:56.473690   13048 certs.go:256] generating profile certs ...
	I0717 00:24:56.473746   13048 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.key
	I0717 00:24:56.473760   13048 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt with IP's: []
	I0717 00:24:56.660112   13048 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt ...
	I0717 00:24:56.660142   13048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: {Name:mk6b65975ff55efb4753dd731d23404a51ffe89a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:56.660302   13048 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.key ...
	I0717 00:24:56.660314   13048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.key: {Name:mk258e4fb88472f01219677da00429ea5fea7295 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:56.660402   13048 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/apiserver.key.b2f88573
	I0717 00:24:56.660422   13048 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/apiserver.crt.b2f88573 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.177]
	I0717 00:24:56.843116   13048 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/apiserver.crt.b2f88573 ...
	I0717 00:24:56.843153   13048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/apiserver.crt.b2f88573: {Name:mk59264558e76f88ee226559537379da65256757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:56.843329   13048 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/apiserver.key.b2f88573 ...
	I0717 00:24:56.843349   13048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/apiserver.key.b2f88573: {Name:mkb79f9f557ee7bdd6e95f63f8999c69aee180ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:56.843443   13048 certs.go:381] copying /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/apiserver.crt.b2f88573 -> /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/apiserver.crt
	I0717 00:24:56.843528   13048 certs.go:385] copying /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/apiserver.key.b2f88573 -> /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/apiserver.key
	I0717 00:24:56.843594   13048 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/proxy-client.key
	I0717 00:24:56.843620   13048 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/proxy-client.crt with IP's: []
	I0717 00:24:57.081780   13048 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/proxy-client.crt ...
	I0717 00:24:57.081810   13048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/proxy-client.crt: {Name:mkf5b9bb5210d2ce6aac943985403366d774267a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:57.081976   13048 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/proxy-client.key ...
	I0717 00:24:57.081986   13048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/proxy-client.key: {Name:mk525dd28bed580f969ad9baa95ea678f3eb2f38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:57.082138   13048 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 00:24:57.082169   13048 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 00:24:57.082193   13048 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 00:24:57.082219   13048 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 00:24:57.082801   13048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 00:24:57.108369   13048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 00:24:57.133136   13048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 00:24:57.156947   13048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 00:24:57.180048   13048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0717 00:24:57.204784   13048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 00:24:57.228054   13048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 00:24:57.251721   13048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 00:24:57.274628   13048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 00:24:57.297347   13048 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 00:24:57.313941   13048 ssh_runner.go:195] Run: openssl version
	I0717 00:24:57.319886   13048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 00:24:57.330817   13048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:24:57.335303   13048 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:24:57.335345   13048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:24:57.341016   13048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
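The b5213941.0 link name comes from OpenSSL's subject-hash convention: the openssl x509 -hash call two lines up prints the hash of the CA subject, and the symlink in /etc/ssl/certs is named <hash>.0. A minimal sketch of reproducing the same link by hand, using the paths from the log:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"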
	I0717 00:24:57.351908   13048 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 00:24:57.356053   13048 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 00:24:57.356106   13048 kubeadm.go:392] StartCluster: {Name:addons-384227 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 C
lusterName:addons-384227 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:24:57.356184   13048 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 00:24:57.356220   13048 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 00:24:57.392335   13048 cri.go:89] found id: ""
	I0717 00:24:57.392408   13048 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 00:24:57.402614   13048 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 00:24:57.412405   13048 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 00:24:57.422095   13048 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 00:24:57.422116   13048 kubeadm.go:157] found existing configuration files:
	
	I0717 00:24:57.422159   13048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 00:24:57.431446   13048 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 00:24:57.431519   13048 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 00:24:57.441548   13048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 00:24:57.450694   13048 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 00:24:57.450747   13048 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 00:24:57.460084   13048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 00:24:57.469608   13048 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 00:24:57.469664   13048 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 00:24:57.481080   13048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 00:24:57.490303   13048 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 00:24:57.490360   13048 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 00:24:57.500244   13048 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 00:24:57.556577   13048 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 00:24:57.556638   13048 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 00:24:57.701465   13048 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 00:24:57.701628   13048 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 00:24:57.701770   13048 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 00:24:57.946856   13048 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 00:24:58.078249   13048 out.go:204]   - Generating certificates and keys ...
	I0717 00:24:58.078372   13048 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 00:24:58.078464   13048 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 00:24:58.078566   13048 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 00:24:58.156168   13048 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 00:24:58.441296   13048 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 00:24:58.557821   13048 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 00:24:58.810280   13048 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 00:24:58.810427   13048 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-384227 localhost] and IPs [192.168.39.177 127.0.0.1 ::1]
	I0717 00:24:59.009271   13048 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 00:24:59.009417   13048 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-384227 localhost] and IPs [192.168.39.177 127.0.0.1 ::1]
	I0717 00:24:59.082328   13048 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 00:24:59.230252   13048 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 00:24:59.332311   13048 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 00:24:59.332849   13048 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 00:24:59.976606   13048 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 00:25:00.196447   13048 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 00:25:00.287327   13048 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 00:25:00.455814   13048 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 00:25:00.541348   13048 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 00:25:00.542004   13048 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 00:25:00.544302   13048 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 00:25:00.546723   13048 out.go:204]   - Booting up control plane ...
	I0717 00:25:00.546804   13048 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 00:25:00.546870   13048 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 00:25:00.546927   13048 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 00:25:00.561864   13048 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 00:25:00.562092   13048 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 00:25:00.562139   13048 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 00:25:00.683809   13048 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 00:25:00.683900   13048 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 00:25:01.185217   13048 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.080076ms
	I0717 00:25:01.185301   13048 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 00:25:06.186102   13048 kubeadm.go:310] [api-check] The API server is healthy after 5.001614333s
	I0717 00:25:06.201565   13048 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 00:25:06.216098   13048 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 00:25:06.240498   13048 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 00:25:06.240662   13048 kubeadm.go:310] [mark-control-plane] Marking the node addons-384227 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 00:25:06.253199   13048 kubeadm.go:310] [bootstrap-token] Using token: 28ri84.7ntcu425oc9olq2s
	I0717 00:25:06.254546   13048 out.go:204]   - Configuring RBAC rules ...
	I0717 00:25:06.254665   13048 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 00:25:06.259866   13048 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 00:25:06.274669   13048 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 00:25:06.279051   13048 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 00:25:06.283669   13048 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 00:25:06.288523   13048 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 00:25:06.594335   13048 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 00:25:07.032027   13048 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 00:25:07.594254   13048 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 00:25:07.595354   13048 kubeadm.go:310] 
	I0717 00:25:07.595428   13048 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 00:25:07.595438   13048 kubeadm.go:310] 
	I0717 00:25:07.595515   13048 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 00:25:07.595524   13048 kubeadm.go:310] 
	I0717 00:25:07.595574   13048 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 00:25:07.595638   13048 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 00:25:07.595709   13048 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 00:25:07.595724   13048 kubeadm.go:310] 
	I0717 00:25:07.595772   13048 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 00:25:07.595782   13048 kubeadm.go:310] 
	I0717 00:25:07.595821   13048 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 00:25:07.595832   13048 kubeadm.go:310] 
	I0717 00:25:07.595875   13048 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 00:25:07.595934   13048 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 00:25:07.596009   13048 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 00:25:07.596020   13048 kubeadm.go:310] 
	I0717 00:25:07.596119   13048 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 00:25:07.596215   13048 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 00:25:07.596222   13048 kubeadm.go:310] 
	I0717 00:25:07.596291   13048 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 28ri84.7ntcu425oc9olq2s \
	I0717 00:25:07.596370   13048 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c106fa53fe39228066a4d6d0a3d1523262a277fcc4b4de2aef480ed92843f134 \
	I0717 00:25:07.596388   13048 kubeadm.go:310] 	--control-plane 
	I0717 00:25:07.596404   13048 kubeadm.go:310] 
	I0717 00:25:07.596509   13048 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 00:25:07.596518   13048 kubeadm.go:310] 
	I0717 00:25:07.596623   13048 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 28ri84.7ntcu425oc9olq2s \
	I0717 00:25:07.596734   13048 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c106fa53fe39228066a4d6d0a3d1523262a277fcc4b4de2aef480ed92843f134 
	I0717 00:25:07.597395   13048 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 00:25:07.597529   13048 cni.go:84] Creating CNI manager for ""
	I0717 00:25:07.597548   13048 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 00:25:07.599503   13048 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 00:25:07.600867   13048 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 00:25:07.611291   13048 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
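The 496-byte 1-k8s.conflist copied here is the bridge CNI config; its exact contents are not shown in the log, but a generic bridge conflist for the 10.244.0.0/16 pod CIDR used above looks roughly like this (illustrative values only, not the literal file):

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}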
	I0717 00:25:07.630143   13048 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 00:25:07.630236   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:07.630277   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-384227 minikube.k8s.io/updated_at=2024_07_17T00_25_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185 minikube.k8s.io/name=addons-384227 minikube.k8s.io/primary=true
	I0717 00:25:07.650778   13048 ops.go:34] apiserver oom_adj: -16
	I0717 00:25:07.768304   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:08.268738   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:08.768568   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:09.269088   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:09.769123   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:10.268728   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:10.768660   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:11.269078   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:11.769225   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:12.268470   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:12.768544   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:13.268891   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:13.769236   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:14.268636   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:14.768980   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:15.268523   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:15.769312   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:16.269217   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:16.768519   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:17.268824   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:17.769201   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:18.269301   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:18.768466   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:19.268404   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:19.768635   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:20.268990   13048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:25:20.351678   13048 kubeadm.go:1113] duration metric: took 12.721491008s to wait for elevateKubeSystemPrivileges
	I0717 00:25:20.351718   13048 kubeadm.go:394] duration metric: took 22.995616848s to StartCluster
	I0717 00:25:20.351739   13048 settings.go:142] acquiring lock: {Name:mk284e98550e98148329d8d8958a44beebf5635c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:25:20.351864   13048 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 00:25:20.352239   13048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:25:20.352409   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 00:25:20.352434   13048 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.177 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:25:20.352492   13048 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0717 00:25:20.352590   13048 addons.go:69] Setting yakd=true in profile "addons-384227"
	I0717 00:25:20.352619   13048 addons.go:234] Setting addon yakd=true in "addons-384227"
	I0717 00:25:20.352621   13048 addons.go:69] Setting inspektor-gadget=true in profile "addons-384227"
	I0717 00:25:20.352630   13048 addons.go:69] Setting gcp-auth=true in profile "addons-384227"
	I0717 00:25:20.352645   13048 addons.go:69] Setting storage-provisioner=true in profile "addons-384227"
	I0717 00:25:20.352658   13048 mustload.go:65] Loading cluster: addons-384227
	I0717 00:25:20.352660   13048 addons.go:234] Setting addon inspektor-gadget=true in "addons-384227"
	I0717 00:25:20.352671   13048 addons.go:234] Setting addon storage-provisioner=true in "addons-384227"
	I0717 00:25:20.352681   13048 addons.go:69] Setting ingress=true in profile "addons-384227"
	I0717 00:25:20.352691   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.352696   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.352631   13048 config.go:182] Loaded profile config "addons-384227": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:25:20.352688   13048 addons.go:69] Setting helm-tiller=true in profile "addons-384227"
	I0717 00:25:20.352709   13048 addons.go:234] Setting addon ingress=true in "addons-384227"
	I0717 00:25:20.352727   13048 addons.go:234] Setting addon helm-tiller=true in "addons-384227"
	I0717 00:25:20.352740   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.352757   13048 addons.go:69] Setting ingress-dns=true in profile "addons-384227"
	I0717 00:25:20.352769   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.352777   13048 addons.go:234] Setting addon ingress-dns=true in "addons-384227"
	I0717 00:25:20.352798   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.352853   13048 config.go:182] Loaded profile config "addons-384227": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:25:20.353113   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.353119   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.353125   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.353131   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.353150   13048 addons.go:69] Setting metrics-server=true in profile "addons-384227"
	I0717 00:25:20.353151   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.353160   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.353159   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.353169   13048 addons.go:234] Setting addon metrics-server=true in "addons-384227"
	I0717 00:25:20.353189   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.353193   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.353203   13048 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-384227"
	I0717 00:25:20.353152   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.353223   13048 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-384227"
	I0717 00:25:20.353228   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.353239   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.353191   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.353247   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.353257   13048 addons.go:69] Setting volcano=true in profile "addons-384227"
	I0717 00:25:20.353277   13048 addons.go:234] Setting addon volcano=true in "addons-384227"
	I0717 00:25:20.353298   13048 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-384227"
	I0717 00:25:20.353314   13048 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-384227"
	I0717 00:25:20.353399   13048 addons.go:69] Setting volumesnapshots=true in profile "addons-384227"
	I0717 00:25:20.353428   13048 addons.go:234] Setting addon volumesnapshots=true in "addons-384227"
	I0717 00:25:20.353447   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.353481   13048 addons.go:69] Setting registry=true in profile "addons-384227"
	I0717 00:25:20.353503   13048 addons.go:234] Setting addon registry=true in "addons-384227"
	I0717 00:25:20.353539   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.353543   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.353563   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.353566   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.353579   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.352696   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.353625   13048 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-384227"
	I0717 00:25:20.353636   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.353640   13048 addons.go:69] Setting default-storageclass=true in profile "addons-384227"
	I0717 00:25:20.353653   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.353659   13048 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-384227"
	I0717 00:25:20.353662   13048 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-384227"
	I0717 00:25:20.353669   13048 addons.go:69] Setting cloud-spanner=true in profile "addons-384227"
	I0717 00:25:20.353684   13048 addons.go:234] Setting addon cloud-spanner=true in "addons-384227"
	I0717 00:25:20.353893   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.353902   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.353917   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.353925   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.353924   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.353903   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.353978   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.354012   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.354046   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.354019   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.354238   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.354256   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.354261   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.354468   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.354485   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.354615   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.354635   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.354833   13048 out.go:177] * Verifying Kubernetes components...
	I0717 00:25:20.365249   13048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:25:20.380791   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36713
	I0717 00:25:20.380948   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45613
	I0717 00:25:20.381021   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36001
	I0717 00:25:20.381085   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33701
	I0717 00:25:20.381428   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.381554   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.382087   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.382112   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.382250   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.382272   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.382336   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.382413   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.382661   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.382838   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.382855   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.382990   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.382990   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.383043   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.383181   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.383243   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.383265   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.383436   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.383568   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.383607   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.384973   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42355
	I0717 00:25:20.385296   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.385621   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.385852   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.386188   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.386218   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.389096   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.389118   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.389351   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.389394   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.389557   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.390053   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.390087   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.407535   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36057
	I0717 00:25:20.408107   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.408707   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.408727   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.409122   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.409677   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.409718   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.412899   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42889
	I0717 00:25:20.413385   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.413953   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.413972   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.414362   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.414925   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.414969   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.415738   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33239
	I0717 00:25:20.416186   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.416640   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.416665   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.417035   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.417639   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.417705   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.419965   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33031
	I0717 00:25:20.420451   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.421057   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.421084   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.421472   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.421692   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.424198   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38537
	I0717 00:25:20.424738   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.425293   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46757
	I0717 00:25:20.425588   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.425606   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.426080   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.426318   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.426570   13048 addons.go:234] Setting addon default-storageclass=true in "addons-384227"
	I0717 00:25:20.426614   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.427922   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.427961   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.429751   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.430954   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.430973   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.431474   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.431803   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.434542   13048 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-384227"
	I0717 00:25:20.434608   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:20.434985   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.435046   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.436490   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39023
	I0717 00:25:20.436617   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43427
	I0717 00:25:20.437000   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44137
	I0717 00:25:20.437327   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.437407   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.437855   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.437877   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.437965   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.438358   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.438375   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.438422   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.439077   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.439111   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.439428   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.439442   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.439514   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38663
	I0717 00:25:20.439949   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.440383   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43429
	I0717 00:25:20.440480   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.440488   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.440510   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.441149   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.441165   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.441224   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.441288   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45205
	I0717 00:25:20.441727   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.441846   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.441857   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.442055   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.442686   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.443194   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.443255   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.444545   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.445197   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36181
	I0717 00:25:20.445562   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.445858   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.446292   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38545
	I0717 00:25:20.446579   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.446594   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.446668   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.446760   13048 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 00:25:20.446951   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.446973   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.447045   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.447536   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.447578   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.447790   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.447952   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.447971   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.448285   13048 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:25:20.448305   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 00:25:20.448324   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:20.448431   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46075
	I0717 00:25:20.448522   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.448550   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.448770   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.448826   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.448878   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.449193   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.451076   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.452051   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.452748   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.453016   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.453559   13048 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0717 00:25:20.453611   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:20.453632   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.453815   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:20.453959   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:20.454112   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:20.454260   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:20.454501   13048 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0717 00:25:20.454696   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.454712   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.455530   13048 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 00:25:20.455635   13048 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 00:25:20.455650   13048 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 00:25:20.455668   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:20.456158   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.457649   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.457862   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.457907   13048 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 00:25:20.459223   13048 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 00:25:20.459243   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0717 00:25:20.459261   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:20.459594   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.461815   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33925
	I0717 00:25:20.462242   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:20.462264   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.462279   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40679
	I0717 00:25:20.462738   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:20.466083   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.466103   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:20.466108   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.466117   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:20.466135   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.466089   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:20.466273   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:20.466322   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:20.466456   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:20.466512   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:20.466670   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.466683   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:20.466774   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.466788   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.467265   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.467288   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.467657   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.467661   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.467853   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.468274   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.468298   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.470618   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.471095   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37479
	I0717 00:25:20.472654   13048 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0717 00:25:20.474037   13048 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 00:25:20.474048   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.474058   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0717 00:25:20.474078   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:20.475268   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.475291   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.476067   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.476425   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.477839   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40503
	I0717 00:25:20.477895   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.478224   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.478421   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:20.478442   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.478798   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:20.479011   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.479026   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.479089   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:20.479250   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:20.479373   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.479435   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:20.479858   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.480245   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.480314   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39249
	I0717 00:25:20.480803   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.481544   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.481568   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.481924   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.482078   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.482715   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41203
	I0717 00:25:20.483109   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.483336   13048 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0717 00:25:20.483471   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.483590   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.483604   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.483981   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.484116   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.484781   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.484935   13048 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 00:25:20.484955   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0717 00:25:20.484972   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:20.486398   13048 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0717 00:25:20.486463   13048 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0717 00:25:20.487380   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.487670   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:20.487689   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:20.487725   13048 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0717 00:25:20.487745   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0717 00:25:20.487769   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:20.487845   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:20.487859   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:20.487867   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:20.487897   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:20.489048   13048 out.go:177]   - Using image docker.io/registry:2.8.3
	I0717 00:25:20.490114   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:20.490132   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:20.490245   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	W0717 00:25:20.490298   13048 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0717 00:25:20.490304   13048 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0717 00:25:20.490319   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0717 00:25:20.490344   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:20.491475   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.491935   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:20.491973   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.492527   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:20.492907   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.492940   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:20.493725   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:20.493685   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.493895   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:20.494213   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:20.494230   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.494262   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:20.494312   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:20.494325   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.494353   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37447
	I0717 00:25:20.494504   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33719
	I0717 00:25:20.494628   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:20.494789   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.494803   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:20.494837   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.495135   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:20.495187   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:20.495227   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.495241   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.495304   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39575
	I0717 00:25:20.495322   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.495334   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.495558   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:20.495800   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:20.495861   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.495875   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.495919   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:20.496005   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.496299   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.496314   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.496636   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.496657   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.496921   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.497640   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.497667   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.499124   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.499134   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40045
	I0717 00:25:20.499127   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46743
	I0717 00:25:20.499484   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.499634   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.499775   13048 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0717 00:25:20.499794   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.499944   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.499959   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.500128   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.500261   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.500446   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.500599   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.501012   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.501588   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:20.501614   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:20.501741   13048 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0717 00:25:20.501765   13048 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0717 00:25:20.501773   13048 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0717 00:25:20.501749   13048 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0717 00:25:20.501865   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:20.501858   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.503361   13048 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0717 00:25:20.503378   13048 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0717 00:25:20.503386   13048 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0717 00:25:20.503396   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:20.504504   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.504888   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:20.504907   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.505070   13048 out.go:177]   - Using image docker.io/busybox:stable
	I0717 00:25:20.505123   13048 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0717 00:25:20.505140   13048 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0717 00:25:20.505155   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:20.505163   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:20.505343   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:20.505489   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:20.505619   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:20.506402   13048 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 00:25:20.506417   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0717 00:25:20.506431   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:20.506925   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.507413   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:20.507447   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.507670   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:20.507858   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:20.508039   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:20.508203   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:20.510071   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.510135   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46339
	I0717 00:25:20.510469   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:20.510511   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.510713   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.510761   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:20.510919   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:20.510953   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.511128   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:20.511241   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:20.511492   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.511502   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.511522   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:20.511547   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.511790   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.511880   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:20.511920   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.511979   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:20.512447   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:20.512587   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:20.513374   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	W0717 00:25:20.513546   13048 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34920->192.168.39.177:22: read: connection reset by peer
	I0717 00:25:20.513574   13048 retry.go:31] will retry after 305.964808ms: ssh: handshake failed: read tcp 192.168.39.1:34920->192.168.39.177:22: read: connection reset by peer
	I0717 00:25:20.515164   13048 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0717 00:25:20.516412   13048 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0717 00:25:20.517491   13048 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0717 00:25:20.518484   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33995
	I0717 00:25:20.518900   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.519273   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.519288   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.519591   13048 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0717 00:25:20.519608   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.519773   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.521338   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.521682   13048 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0717 00:25:20.522945   13048 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0717 00:25:20.523123   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45795
	I0717 00:25:20.523492   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:20.523957   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:20.523980   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:20.524031   13048 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0717 00:25:20.524175   13048 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0717 00:25:20.524187   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0717 00:25:20.524202   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:20.524323   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:20.524505   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:20.526344   13048 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0717 00:25:20.527401   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:20.527449   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.527621   13048 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 00:25:20.527634   13048 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 00:25:20.527648   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:20.527867   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:20.527971   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.528149   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:20.528325   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:20.528516   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:20.528715   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:20.529116   13048 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	W0717 00:25:20.529718   13048 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0717 00:25:20.529742   13048 retry.go:31] will retry after 172.735909ms: ssh: handshake failed: EOF
	I0717 00:25:20.530610   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.530628   13048 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0717 00:25:20.530652   13048 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0717 00:25:20.530669   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:20.531031   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:20.531059   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.531333   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:20.531513   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:20.532319   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:20.532492   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:20.533295   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:20.533679   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:20.533713   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	W0717 00:25:20.533844   13048 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34936->192.168.39.177:22: read: connection reset by peer
	I0717 00:25:20.533863   13048 retry.go:31] will retry after 352.184484ms: ssh: handshake failed: read tcp 192.168.39.1:34936->192.168.39.177:22: read: connection reset by peer
	I0717 00:25:20.533898   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:20.534085   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:20.534191   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:20.534306   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:20.822902   13048 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 00:25:20.822927   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0717 00:25:20.894827   13048 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0717 00:25:20.894857   13048 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0717 00:25:20.910829   13048 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0717 00:25:20.910849   13048 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0717 00:25:20.934980   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:25:20.956411   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 00:25:20.958647   13048 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:25:20.958864   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 00:25:20.965990   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 00:25:20.975282   13048 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0717 00:25:20.975306   13048 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0717 00:25:20.986021   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 00:25:20.991970   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0717 00:25:21.010172   13048 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 00:25:21.010200   13048 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 00:25:21.038232   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 00:25:21.042110   13048 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0717 00:25:21.042129   13048 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0717 00:25:21.101887   13048 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0717 00:25:21.101910   13048 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0717 00:25:21.112758   13048 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0717 00:25:21.112776   13048 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0717 00:25:21.113860   13048 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0717 00:25:21.113878   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0717 00:25:21.216093   13048 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0717 00:25:21.216120   13048 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0717 00:25:21.245690   13048 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 00:25:21.245711   13048 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 00:25:21.251563   13048 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0717 00:25:21.251583   13048 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0717 00:25:21.343508   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0717 00:25:21.343711   13048 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 00:25:21.343726   13048 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0717 00:25:21.349046   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 00:25:21.389693   13048 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0717 00:25:21.389722   13048 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0717 00:25:21.394478   13048 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0717 00:25:21.394501   13048 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0717 00:25:21.419267   13048 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0717 00:25:21.419292   13048 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0717 00:25:21.422806   13048 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0717 00:25:21.422834   13048 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0717 00:25:21.568449   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 00:25:21.596920   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 00:25:21.707949   13048 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0717 00:25:21.707971   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0717 00:25:21.714315   13048 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0717 00:25:21.714333   13048 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0717 00:25:21.719616   13048 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0717 00:25:21.719638   13048 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0717 00:25:21.735036   13048 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0717 00:25:21.735063   13048 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0717 00:25:21.938630   13048 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 00:25:21.938653   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0717 00:25:21.986398   13048 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0717 00:25:21.986420   13048 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0717 00:25:22.001496   13048 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0717 00:25:22.001514   13048 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0717 00:25:22.025502   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0717 00:25:22.132980   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 00:25:22.313768   13048 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0717 00:25:22.313790   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0717 00:25:22.313975   13048 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0717 00:25:22.313992   13048 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0717 00:25:22.617590   13048 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0717 00:25:22.617628   13048 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0717 00:25:22.859803   13048 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0717 00:25:22.859839   13048 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0717 00:25:22.919923   13048 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0717 00:25:22.919951   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0717 00:25:23.093162   13048 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0717 00:25:23.093190   13048 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0717 00:25:23.184769   13048 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0717 00:25:23.184799   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0717 00:25:23.374589   13048 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 00:25:23.374617   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0717 00:25:23.395197   13048 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 00:25:23.395221   13048 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0717 00:25:23.653547   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 00:25:23.724231   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 00:25:24.783165   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.848150661s)
	I0717 00:25:24.783212   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:24.783224   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:24.783477   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:24.783540   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:24.783554   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:24.783564   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:24.783560   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:24.783825   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:24.783842   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:24.783840   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:27.462197   13048 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0717 00:25:27.462238   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:27.465555   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:27.466058   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:27.466079   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:27.466254   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:27.466493   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:27.466662   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:27.466829   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:27.764532   13048 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0717 00:25:27.869507   13048 addons.go:234] Setting addon gcp-auth=true in "addons-384227"
	I0717 00:25:27.869565   13048 host.go:66] Checking if "addons-384227" exists ...
	I0717 00:25:27.869954   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:27.869987   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:27.899063   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34837
	I0717 00:25:27.899495   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:27.899964   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:27.899980   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:27.900309   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:27.900917   13048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:25:27.900954   13048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:25:27.916188   13048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38763
	I0717 00:25:27.916565   13048 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:25:27.917017   13048 main.go:141] libmachine: Using API Version  1
	I0717 00:25:27.917042   13048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:25:27.917357   13048 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:25:27.917535   13048 main.go:141] libmachine: (addons-384227) Calling .GetState
	I0717 00:25:27.919148   13048 main.go:141] libmachine: (addons-384227) Calling .DriverName
	I0717 00:25:27.919371   13048 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0717 00:25:27.919398   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHHostname
	I0717 00:25:27.922169   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:27.922542   13048 main.go:141] libmachine: (addons-384227) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:64:cd", ip: ""} in network mk-addons-384227: {Iface:virbr1 ExpiryTime:2024-07-17 01:24:41 +0000 UTC Type:0 Mac:52:54:00:88:64:cd Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-384227 Clientid:01:52:54:00:88:64:cd}
	I0717 00:25:27.922579   13048 main.go:141] libmachine: (addons-384227) DBG | domain addons-384227 has defined IP address 192.168.39.177 and MAC address 52:54:00:88:64:cd in network mk-addons-384227
	I0717 00:25:27.922732   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHPort
	I0717 00:25:27.922929   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHKeyPath
	I0717 00:25:27.923107   13048 main.go:141] libmachine: (addons-384227) Calling .GetSSHUsername
	I0717 00:25:27.923245   13048 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/addons-384227/id_rsa Username:docker}
	I0717 00:25:28.988944   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.032496463s)
	I0717 00:25:28.988972   13048 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.030301382s)
	I0717 00:25:28.989008   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.989021   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.989055   13048 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.030166847s)
	I0717 00:25:28.989082   13048 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0717 00:25:28.989123   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.023101578s)
	I0717 00:25:28.989187   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.989210   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.989232   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.997243468s)
	I0717 00:25:28.989266   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.989283   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.989317   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.989332   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.989352   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.989366   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.989375   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.951120705s)
	I0717 00:25:28.989189   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.003144332s)
	I0717 00:25:28.989389   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.989416   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.989395   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.989459   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.989520   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.64598383s)
	I0717 00:25:28.989539   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.989546   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.989561   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.989377   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.989677   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.640598022s)
	I0717 00:25:28.989696   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.989703   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.989783   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.421307521s)
	I0717 00:25:28.989800   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.989808   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.989854   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.392908157s)
	I0717 00:25:28.989866   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.989873   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.989933   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.96440525s)
	I0717 00:25:28.989945   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.989953   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.989951   13048 node_ready.go:35] waiting up to 6m0s for node "addons-384227" to be "Ready" ...
	I0717 00:25:28.990070   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.990088   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.990090   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.990103   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.990112   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.990086   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.857072963s)
	I0717 00:25:28.990133   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.990145   13048 main.go:141] libmachine: Successfully made call to close driver server
	W0717 00:25:28.990148   13048 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 00:25:28.990155   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.990155   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.990164   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.990164   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.990165   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.336582633s)
	I0717 00:25:28.990172   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.990174   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.990181   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.990185   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.990194   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.990223   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.990112   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.990231   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.990237   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.990239   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.990270   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.990120   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.990298   13048 addons.go:475] Verifying addon ingress=true in "addons-384227"
	I0717 00:25:28.990168   13048 retry.go:31] will retry after 284.00132ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 00:25:28.991208   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.991209   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.991234   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.991245   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.991253   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.991255   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.991260   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.991264   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.991274   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.991282   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.991316   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.991337   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.991341   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.991347   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.991355   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.991358   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.991364   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.991365   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.991372   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.991373   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.991380   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.991414   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.991424   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.991567   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.991588   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.991595   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.993767   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.993774   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.993779   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.993793   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.993814   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.993821   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.993828   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.993835   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.993906   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.993913   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.994613   13048 out.go:177] * Verifying ingress addon...
	I0717 00:25:28.995007   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.995033   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.995040   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.995048   13048 addons.go:475] Verifying addon metrics-server=true in "addons-384227"
	I0717 00:25:28.995086   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.995106   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.995113   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.995119   13048 addons.go:475] Verifying addon registry=true in "addons-384227"
	I0717 00:25:28.995355   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.995383   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.995391   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.995592   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.995596   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.995650   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.995669   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:28.995685   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:28.995615   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.995743   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.995654   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.995804   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.995634   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:28.996488   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:28.996501   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:28.996812   13048 out.go:177] * Verifying registry addon...
	I0717 00:25:28.997475   13048 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0717 00:25:28.997861   13048 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-384227 service yakd-dashboard -n yakd-dashboard
	
	I0717 00:25:28.999391   13048 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0717 00:25:28.999995   13048 node_ready.go:49] node "addons-384227" has status "Ready":"True"
	I0717 00:25:29.000016   13048 node_ready.go:38] duration metric: took 10.05001ms for node "addons-384227" to be "Ready" ...
	I0717 00:25:29.000028   13048 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 00:25:29.051219   13048 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bpp2w" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:29.051907   13048 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 00:25:29.051932   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:29.052372   13048 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0717 00:25:29.052388   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:29.060454   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:29.060471   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:29.060767   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:29.060784   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	W0717 00:25:29.060862   13048 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0717 00:25:29.063035   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:29.063053   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:29.063356   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:29.063382   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:29.063385   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:29.064591   13048 pod_ready.go:92] pod "coredns-7db6d8ff4d-bpp2w" in "kube-system" namespace has status "Ready":"True"
	I0717 00:25:29.064610   13048 pod_ready.go:81] duration metric: took 13.364416ms for pod "coredns-7db6d8ff4d-bpp2w" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:29.064635   13048 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fh4r2" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:29.094696   13048 pod_ready.go:92] pod "coredns-7db6d8ff4d-fh4r2" in "kube-system" namespace has status "Ready":"True"
	I0717 00:25:29.094725   13048 pod_ready.go:81] duration metric: took 30.081212ms for pod "coredns-7db6d8ff4d-fh4r2" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:29.094740   13048 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-384227" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:29.132079   13048 pod_ready.go:92] pod "etcd-addons-384227" in "kube-system" namespace has status "Ready":"True"
	I0717 00:25:29.132100   13048 pod_ready.go:81] duration metric: took 37.35365ms for pod "etcd-addons-384227" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:29.132111   13048 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-384227" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:29.170006   13048 pod_ready.go:92] pod "kube-apiserver-addons-384227" in "kube-system" namespace has status "Ready":"True"
	I0717 00:25:29.170033   13048 pod_ready.go:81] duration metric: took 37.915847ms for pod "kube-apiserver-addons-384227" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:29.170047   13048 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-384227" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:29.276206   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 00:25:29.393627   13048 pod_ready.go:92] pod "kube-controller-manager-addons-384227" in "kube-system" namespace has status "Ready":"True"
	I0717 00:25:29.393658   13048 pod_ready.go:81] duration metric: took 223.602495ms for pod "kube-controller-manager-addons-384227" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:29.393678   13048 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9j492" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:29.494299   13048 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-384227" context rescaled to 1 replicas
	I0717 00:25:29.505855   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:29.515997   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:29.794162   13048 pod_ready.go:92] pod "kube-proxy-9j492" in "kube-system" namespace has status "Ready":"True"
	I0717 00:25:29.794191   13048 pod_ready.go:81] duration metric: took 400.504239ms for pod "kube-proxy-9j492" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:29.794204   13048 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-384227" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:30.041554   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:30.041647   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:30.097596   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.37331077s)
	I0717 00:25:30.097650   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:30.097667   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:30.097686   13048 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.17829151s)
	I0717 00:25:30.098045   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:30.098082   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:30.098090   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:30.098104   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:30.098113   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:30.099783   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:30.099785   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:30.099811   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:30.099821   13048 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-384227"
	I0717 00:25:30.099964   13048 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 00:25:30.100982   13048 out.go:177] * Verifying csi-hostpath-driver addon...
	I0717 00:25:30.102577   13048 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0717 00:25:30.103269   13048 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0717 00:25:30.103968   13048 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0717 00:25:30.103989   13048 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0717 00:25:30.120474   13048 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 00:25:30.120503   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:30.194198   13048 pod_ready.go:92] pod "kube-scheduler-addons-384227" in "kube-system" namespace has status "Ready":"True"
	I0717 00:25:30.194223   13048 pod_ready.go:81] duration metric: took 400.011648ms for pod "kube-scheduler-addons-384227" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:30.194238   13048 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:30.248705   13048 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0717 00:25:30.248732   13048 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0717 00:25:30.385027   13048 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 00:25:30.385053   13048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0717 00:25:30.456954   13048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 00:25:30.512041   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:30.512688   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:30.608626   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:31.000932   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:31.004739   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:31.108452   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:31.138568   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.862300475s)
	I0717 00:25:31.138624   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:31.138636   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:31.138908   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:31.138935   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:31.138943   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:31.138949   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:31.138951   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:31.139302   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:31.139315   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:31.502820   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:31.504577   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:31.617772   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:31.803119   13048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.34612339s)
	I0717 00:25:31.803182   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:31.803200   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:31.803446   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:31.803494   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:31.803508   13048 main.go:141] libmachine: Making call to close driver server
	I0717 00:25:31.803518   13048 main.go:141] libmachine: (addons-384227) Calling .Close
	I0717 00:25:31.803527   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:31.803743   13048 main.go:141] libmachine: (addons-384227) DBG | Closing plugin on server side
	I0717 00:25:31.803786   13048 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:25:31.803808   13048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:25:31.805691   13048 addons.go:475] Verifying addon gcp-auth=true in "addons-384227"
	I0717 00:25:31.807348   13048 out.go:177] * Verifying gcp-auth addon...
	I0717 00:25:31.809582   13048 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0717 00:25:31.837497   13048 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0717 00:25:31.837516   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:32.008940   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:32.010127   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:32.109228   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:32.200194   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:32.313045   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:32.501784   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:32.505532   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:32.610314   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:32.815001   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:33.003200   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:33.005730   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:33.110205   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:33.313858   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:33.501681   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:33.506188   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:33.608669   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:33.814351   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:34.002783   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:34.004103   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:34.109156   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:34.200719   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:34.313972   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:34.501605   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:34.504187   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:34.608916   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:34.813171   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:35.003147   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:35.005125   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:35.109055   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:35.313021   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:35.501852   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:35.504604   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:35.607790   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:35.814267   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:36.002482   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:36.004306   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:36.118271   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:36.202598   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:36.313740   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:36.501132   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:36.503581   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:36.610151   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:36.813282   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:37.002358   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:37.005136   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:37.109587   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:37.313923   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:37.501185   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:37.503295   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:37.608815   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:37.812925   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:38.001971   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:38.004461   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:38.109466   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:38.312852   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:38.501638   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:38.504052   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:38.608969   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:38.699323   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:38.814471   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:39.002477   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:39.003926   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:39.108410   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:39.313455   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:39.502266   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:39.503860   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:39.608032   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:39.812907   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:40.001962   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:40.004388   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:40.108817   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:40.313038   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:40.504355   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:40.504775   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:40.609689   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:40.700617   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:40.813553   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:41.001160   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:41.003768   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:41.108422   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:41.313382   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:41.502493   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:41.503970   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:41.608348   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:41.814019   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:42.003073   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:42.004978   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:42.108501   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:42.313690   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:42.505409   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:42.505771   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:42.609367   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:42.700801   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:42.812864   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:43.002183   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:43.005524   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:43.109692   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:43.312922   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:43.502686   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:43.505927   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:43.608870   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:43.813326   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:44.002224   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:44.004929   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:44.108226   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:44.314175   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:44.502050   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:44.509161   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:44.608670   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:44.813340   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:45.006168   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:45.006403   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:45.109087   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:45.200394   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:45.313038   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:45.502183   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:45.507604   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:45.609327   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:45.813300   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:46.002802   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:46.004805   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:46.110225   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:46.314186   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:46.501904   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:46.504141   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:46.609168   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:46.812662   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:47.001627   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:47.003747   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:47.108343   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:47.313278   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:47.503622   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:47.505716   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:47.608644   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:47.700182   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:47.813305   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:48.002776   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:48.004477   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:48.108882   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:48.313667   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:48.502012   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:48.505717   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:48.610370   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:48.812942   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:49.002813   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:49.004549   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:49.109276   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:49.313104   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:49.501896   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:49.505601   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:49.608122   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:49.815889   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:50.003886   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:50.004670   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:50.108344   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:50.203941   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:50.313323   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:50.503953   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:50.504298   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:50.608673   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:50.813428   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:51.246246   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:51.247551   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:51.251370   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:51.314173   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:51.501983   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:51.504263   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:51.611129   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:51.813447   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:52.002406   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:52.003514   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:52.114192   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:52.204796   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:52.313062   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:52.501952   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:52.504107   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:52.608634   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:52.813531   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:53.002926   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:53.004322   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:53.109233   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:53.313696   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:53.501053   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:53.503540   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:53.609184   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:53.813071   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:54.004065   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:54.004456   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:54.109025   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:54.312930   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:54.501501   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:54.510252   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:54.608363   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:54.699965   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:54.813010   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:55.002530   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:55.004481   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:55.109074   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:55.313657   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:55.503830   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:55.504802   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:55.608729   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:55.814089   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:56.000988   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:56.003333   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:56.108809   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:56.313153   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:56.502121   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:56.503269   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:56.608630   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:56.700972   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:56.814245   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:57.002815   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:57.004023   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:57.108669   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:57.313417   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:57.507676   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:57.516459   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:57.608760   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:57.813180   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:58.002576   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:58.005051   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:58.109186   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:58.313772   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:58.501578   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:58.505381   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:58.609777   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:58.813306   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:59.002578   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:59.004122   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:59.108793   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:59.200386   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:59.316082   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:25:59.566538   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:59.571682   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:59.609940   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:59.812589   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:00.003302   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:00.005269   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:00.108698   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:00.312778   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:00.504739   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:00.504969   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:00.608229   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:00.813076   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:01.002087   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:01.004620   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:01.108918   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:01.200590   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:01.313568   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:01.501913   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:01.503393   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:01.608463   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:01.813325   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:02.002271   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:02.003179   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:02.108688   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:02.313094   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:02.502168   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:02.503706   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:02.611320   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:02.814656   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:03.001686   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:03.004270   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:03.110028   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:03.314129   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:03.501689   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:03.508277   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:03.609007   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:03.699021   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:03.812983   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:04.001624   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:04.005136   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:04.108305   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:04.312640   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:04.501493   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:04.503947   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:04.608487   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:04.813900   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:05.001798   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:05.004766   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:05.109265   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:05.313536   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:05.502406   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:05.504062   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:05.609328   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:05.700305   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:05.813665   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:06.480764   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:06.486896   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:06.490689   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:06.494591   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:06.508147   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:06.509693   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:06.608471   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:06.812639   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:07.001791   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:07.004158   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:07.114659   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:07.313811   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:07.505106   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:07.512884   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:07.608489   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:07.702796   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:07.813642   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:08.001820   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:08.003913   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:08.108580   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:08.313596   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:08.501625   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:08.503904   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:08.611496   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:08.812754   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:09.001448   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:09.003855   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:09.108796   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:09.313826   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:09.501418   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:09.503580   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:09.608020   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:09.813189   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:10.002288   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:10.004841   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:10.108276   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:10.200259   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:10.313199   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:10.505186   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:10.505970   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:10.609423   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:10.813692   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:11.002361   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:11.004790   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:11.108849   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:11.313116   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:11.501822   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:11.504561   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:11.608682   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:11.812941   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:12.002246   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:12.003932   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:12.108867   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:12.314111   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:12.505233   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:12.511722   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:12.609768   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:12.700860   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:12.813954   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:13.001339   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:13.006450   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:13.109870   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:13.313355   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:13.502034   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:13.503895   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:13.608455   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:13.813061   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:14.005056   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:14.006006   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:14.110613   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:14.313519   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:14.502406   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:14.504382   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:14.609789   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:14.703060   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:14.812887   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:15.001751   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:15.005890   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:15.110241   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:15.312693   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:15.514250   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:15.514378   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:15.609765   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:15.813230   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:16.002609   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:16.005026   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:16.108621   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:16.313074   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:16.504561   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:16.504803   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:16.609257   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:16.813211   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:17.005116   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:17.010955   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:17.109103   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:17.200017   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:17.317342   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:17.502423   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:17.503570   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:17.609098   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:17.813148   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:18.002479   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:18.005477   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:18.109412   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:18.312802   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:18.501321   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:18.503526   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:18.609593   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:18.813762   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:19.001731   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:19.006460   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:19.108496   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:19.200195   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:19.313286   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:19.504327   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:19.512233   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:19.608880   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:19.813038   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:20.001531   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:20.003968   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:20.108049   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:20.313795   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:20.501639   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:20.504038   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:20.608844   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:20.813978   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:21.003177   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:21.007310   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:21.109491   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:21.201027   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:21.313241   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:21.502791   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:21.504941   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:21.608445   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:21.813033   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:22.002297   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:22.008912   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:22.108244   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:22.314177   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:22.504970   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:22.508032   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:22.608580   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:22.812834   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:23.002241   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:23.004208   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:23.110541   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:23.313133   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:23.503336   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:23.505160   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:23.608780   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:23.700378   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:23.813897   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:24.002060   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:24.004192   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:24.109342   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:24.313016   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:24.501987   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:24.510648   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:24.608375   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:24.813955   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:25.002900   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:25.004620   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:25.110513   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:25.313393   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:25.502449   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:25.504918   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:25.608423   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:25.702362   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:25.813211   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:26.312430   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:26.315351   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:26.316419   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:26.316835   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:26.501528   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:26.506065   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:26.610536   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:26.814100   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:27.002289   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:27.005516   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:27.108714   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:27.313066   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:27.503349   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:27.506698   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:27.608376   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:27.813311   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:28.003591   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:28.005005   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:28.109461   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:28.200700   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:28.312809   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:28.501517   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:28.503584   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:28.611086   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:28.813085   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:29.001972   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:29.004434   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:29.109443   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:29.314498   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:29.503892   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:29.507133   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:29.608769   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:29.813530   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:30.001367   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:30.003853   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:30.108094   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:30.314044   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:30.501645   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:30.504394   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:30.609803   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:30.700713   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:30.813677   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:31.001434   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:31.003656   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:31.108075   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:31.313641   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:31.506589   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:31.506914   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:31.609311   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:31.813389   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:32.002378   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:32.003890   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:32.108625   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:32.313924   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:32.501602   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:32.505921   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:32.608833   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:32.813711   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:33.001662   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:33.005985   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:33.109075   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:33.201293   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:33.313974   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:33.503107   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:33.506545   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:33.611201   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:33.813450   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:34.007509   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:34.007567   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:34.109199   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:34.313334   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:34.504341   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:34.505228   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:34.609041   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:34.813333   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:35.002207   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:35.003818   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:35.109058   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:35.313340   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:35.502668   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:35.505712   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:35.612366   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:35.700516   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:35.813722   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:36.006104   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:36.007338   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:36.108608   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:36.313682   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:36.503737   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:36.503850   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:36.609017   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:36.813528   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:37.004095   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:37.004142   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:37.111519   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:37.313188   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:37.513000   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:37.513195   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:37.608196   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:37.813218   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:38.001803   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:38.004210   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:38.108764   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:38.201229   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:38.312844   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:38.502215   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:38.504315   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:38.609199   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:38.814223   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:39.002403   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:39.003946   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:39.108234   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:39.312839   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:39.503172   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:39.504571   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:39.608575   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:39.813350   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:40.002270   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:40.004322   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:40.110451   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:40.314078   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:40.501839   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:40.505320   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:40.608868   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:40.700711   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:40.813822   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:41.001638   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:41.003695   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:41.108403   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:41.313458   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:41.502477   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:41.504193   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:41.618244   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:41.814097   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:42.004508   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:42.008093   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:42.108500   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:42.313485   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:42.510729   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:42.511417   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:42.608634   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:42.813661   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:43.001650   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:43.004172   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:43.109484   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:43.203787   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:43.314198   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:43.502078   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:43.506303   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:43.612456   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:43.814437   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:44.004044   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:44.008262   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:44.108736   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:44.314290   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:44.506631   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:44.507571   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:44.612587   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:44.813174   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:45.002588   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:45.004175   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:45.108719   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:45.317832   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:45.506025   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:45.511194   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:45.609231   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:45.701919   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:45.813213   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:46.002084   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:46.005373   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:46.108872   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:46.313626   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:46.502121   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:46.505180   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:46.609724   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:46.813536   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:47.003133   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:47.005597   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:47.109517   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:47.314320   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:47.501494   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:47.506460   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:47.608722   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:47.813034   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:48.002197   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:48.004469   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:48.109289   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:48.199668   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:48.314220   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:48.502934   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:48.508397   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:48.609133   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:48.813751   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:49.001514   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:49.003849   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:49.110866   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:49.313711   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:49.503817   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:49.505285   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:49.608294   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:49.813433   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:50.002848   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:50.005698   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:50.109218   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:50.209287   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:50.314096   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:50.506660   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:50.510577   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:50.610505   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:50.813712   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:51.001467   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:51.004258   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:51.108516   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:51.313136   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:51.501903   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:51.506779   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:51.934186   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:51.935173   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:52.003729   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:52.004824   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:52.108498   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:52.313415   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:52.504145   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:52.507819   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:52.608318   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:52.702044   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:52.813166   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:53.005680   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:53.008381   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:53.108800   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:53.317041   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:53.502997   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:53.509693   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:53.610165   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:53.813641   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:54.360658   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:54.369252   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:54.369475   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:54.369876   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:54.503317   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:54.503834   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:54.608287   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:54.813121   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:55.002794   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:55.004351   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:55.108678   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:55.201054   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:55.313060   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:55.507217   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:55.508311   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:55.609076   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:55.812854   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:56.001385   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:56.004093   13048 kapi.go:107] duration metric: took 1m27.004700692s to wait for kubernetes.io/minikube-addons=registry ...
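	(The kapi.go:96/107 lines above come from minikube's addon wait loop: it repeatedly lists pods matching a label selector — here kubernetes.io/minikube-addons=registry — logs their phase while they are still Pending, and records the total wait as a duration metric once they are Running. Below is a minimal sketch of that polling pattern using client-go; the function name, namespace, poll interval, and timeout are illustrative assumptions, not minikube's actual kapi implementation.)

	// Hypothetical sketch of a label-selector wait loop (not minikube's kapi code).
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabel polls pods matching selector in ns until all report Running.
	func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		ticker := time.NewTicker(500 * time.Millisecond) // assumed cadence, roughly matching the log interval above
		defer ticker.Stop()
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			allRunning := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false
					// mirrors the "waiting for pod ... current state: Pending" lines
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				}
			}
			if allRunning {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
			}
		}
	}

	func main() {
		// Load the default kubeconfig (~/.kube/config); in CI this would point at the test cluster.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // assumed overall timeout
		defer cancel()
		if err := waitForLabel(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
			panic(err)
		}
	}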
	I0717 00:26:56.109635   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:56.313446   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:56.503735   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:56.608943   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:56.813734   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:57.001220   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:57.109659   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:57.313425   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:57.502838   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:57.607795   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:57.701276   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:57.813835   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:58.001869   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:58.109519   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:58.312454   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:58.503846   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:58.609273   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:58.813351   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:59.004392   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:59.108498   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:59.313719   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:26:59.502022   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:59.619427   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:59.706189   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:26:59.813423   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:00.002409   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:00.111164   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:00.313425   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:00.839715   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:00.845379   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:00.845940   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:01.001909   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:01.110484   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:01.313210   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:01.501750   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:01.610055   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:01.816208   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:02.001293   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:02.109558   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:02.204455   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:27:02.313740   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:02.503414   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:02.608734   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:02.815104   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:03.001898   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:03.108680   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:03.313715   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:03.502106   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:03.608460   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:03.813452   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:04.002128   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:04.108270   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:04.314837   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:04.501369   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:04.608779   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:04.701921   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:27:04.812908   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:05.001788   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:05.109219   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:05.313418   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:05.560811   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:05.611969   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:05.812872   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:06.005048   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:06.113797   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:06.317145   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:06.504362   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:06.610069   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:06.813145   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:07.001810   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:07.108641   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:07.199796   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:27:07.312999   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:07.502515   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:07.613598   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:07.813266   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:08.006976   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:08.109557   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:08.313266   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:08.506313   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:08.608517   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:08.814871   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:09.001842   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:09.109239   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:09.202464   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:27:09.313358   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:09.502477   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:09.609588   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:09.813059   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:10.001968   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:10.108720   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:10.314103   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:10.504059   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:10.615646   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:10.813464   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:11.242433   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:11.246207   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:11.247362   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:27:11.313429   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:11.502794   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:11.613386   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:11.813804   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:12.001733   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:12.108317   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:12.316827   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:12.507232   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:12.612314   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:12.813470   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:13.002794   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:13.109616   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:13.313114   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:13.507474   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:13.609350   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:13.700103   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:27:13.813567   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:14.003135   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:14.108966   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:14.312921   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:14.502136   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:14.609119   13048 kapi.go:107] duration metric: took 1m44.505846689s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0717 00:27:14.813517   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:15.003127   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:15.314726   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:15.505452   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:15.700183   13048 pod_ready.go:102] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"False"
	I0717 00:27:15.813841   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:16.001311   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:16.313305   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:16.503273   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:16.702912   13048 pod_ready.go:92] pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace has status "Ready":"True"
	I0717 00:27:16.702935   13048 pod_ready.go:81] duration metric: took 1m46.508690063s for pod "metrics-server-c59844bb4-ptnnk" in "kube-system" namespace to be "Ready" ...
	I0717 00:27:16.702945   13048 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-v6tmh" in "kube-system" namespace to be "Ready" ...
	I0717 00:27:16.718913   13048 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-v6tmh" in "kube-system" namespace has status "Ready":"True"
	I0717 00:27:16.718954   13048 pod_ready.go:81] duration metric: took 16.001721ms for pod "nvidia-device-plugin-daemonset-v6tmh" in "kube-system" namespace to be "Ready" ...
	I0717 00:27:16.718982   13048 pod_ready.go:38] duration metric: took 1m47.718938458s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 00:27:16.719001   13048 api_server.go:52] waiting for apiserver process to appear ...
	I0717 00:27:16.719034   13048 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 00:27:16.719095   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 00:27:16.808144   13048 cri.go:89] found id: "da60884c96d8871a1838a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7"
	I0717 00:27:16.808170   13048 cri.go:89] found id: ""
	I0717 00:27:16.808178   13048 logs.go:276] 1 containers: [da60884c96d8871a1838a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7]
	I0717 00:27:16.808233   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:16.812888   13048 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 00:27:16.812940   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 00:27:16.817065   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:16.865201   13048 cri.go:89] found id: "b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791"
	I0717 00:27:16.865223   13048 cri.go:89] found id: ""
	I0717 00:27:16.865231   13048 logs.go:276] 1 containers: [b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791]
	I0717 00:27:16.865274   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:16.869768   13048 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 00:27:16.869818   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 00:27:16.911800   13048 cri.go:89] found id: "0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e"
	I0717 00:27:16.911819   13048 cri.go:89] found id: ""
	I0717 00:27:16.911825   13048 logs.go:276] 1 containers: [0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e]
	I0717 00:27:16.911865   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:16.915970   13048 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 00:27:16.916029   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 00:27:16.982747   13048 cri.go:89] found id: "69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3"
	I0717 00:27:16.982771   13048 cri.go:89] found id: ""
	I0717 00:27:16.982780   13048 logs.go:276] 1 containers: [69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3]
	I0717 00:27:16.982828   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:16.987111   13048 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 00:27:16.987172   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 00:27:17.001798   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:17.035343   13048 cri.go:89] found id: "a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf"
	I0717 00:27:17.035367   13048 cri.go:89] found id: ""
	I0717 00:27:17.035376   13048 logs.go:276] 1 containers: [a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf]
	I0717 00:27:17.035420   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:17.049403   13048 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 00:27:17.049469   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 00:27:17.093275   13048 cri.go:89] found id: "229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f"
	I0717 00:27:17.093296   13048 cri.go:89] found id: ""
	I0717 00:27:17.093305   13048 logs.go:276] 1 containers: [229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f]
	I0717 00:27:17.093361   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:17.097446   13048 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 00:27:17.097498   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 00:27:17.143956   13048 cri.go:89] found id: ""
	I0717 00:27:17.143982   13048 logs.go:276] 0 containers: []
	W0717 00:27:17.143996   13048 logs.go:278] No container was found matching "kindnet"
	I0717 00:27:17.144004   13048 logs.go:123] Gathering logs for kube-scheduler [69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3] ...
	I0717 00:27:17.144017   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3"
	I0717 00:27:17.189135   13048 logs.go:123] Gathering logs for kube-controller-manager [229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f] ...
	I0717 00:27:17.189162   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f"
	I0717 00:27:17.246761   13048 logs.go:123] Gathering logs for container status ...
	I0717 00:27:17.246799   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 00:27:17.307150   13048 logs.go:123] Gathering logs for kubelet ...
	I0717 00:27:17.307178   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 00:27:17.313502   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:17.389166   13048 logs.go:123] Gathering logs for dmesg ...
	I0717 00:27:17.389198   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 00:27:17.404438   13048 logs.go:123] Gathering logs for describe nodes ...
	I0717 00:27:17.404463   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 00:27:17.504053   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:17.538326   13048 logs.go:123] Gathering logs for kube-apiserver [da60884c96d8871a1838a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7] ...
	I0717 00:27:17.538352   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da60884c96d8871a1838a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7"
	I0717 00:27:17.587390   13048 logs.go:123] Gathering logs for coredns [0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e] ...
	I0717 00:27:17.587415   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e"
	I0717 00:27:17.624869   13048 logs.go:123] Gathering logs for etcd [b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791] ...
	I0717 00:27:17.624899   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791"
	I0717 00:27:17.684476   13048 logs.go:123] Gathering logs for kube-proxy [a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf] ...
	I0717 00:27:17.684511   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf"
	I0717 00:27:17.724136   13048 logs.go:123] Gathering logs for CRI-O ...
	I0717 00:27:17.724165   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 00:27:17.814045   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:18.002187   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:18.314741   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:18.507329   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:18.814205   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:19.002459   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:19.313337   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:19.502194   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:19.813731   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:20.001740   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:20.313604   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:20.501412   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:20.728241   13048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:27:20.748426   13048 api_server.go:72] duration metric: took 2m0.395955489s to wait for apiserver process to appear ...
	I0717 00:27:20.748452   13048 api_server.go:88] waiting for apiserver healthz status ...
	I0717 00:27:20.748478   13048 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 00:27:20.748525   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 00:27:20.790496   13048 cri.go:89] found id: "da60884c96d8871a1838a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7"
	I0717 00:27:20.790519   13048 cri.go:89] found id: ""
	I0717 00:27:20.790526   13048 logs.go:276] 1 containers: [da60884c96d8871a1838a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7]
	I0717 00:27:20.790590   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:20.796402   13048 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 00:27:20.796469   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 00:27:20.813630   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:20.841409   13048 cri.go:89] found id: "b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791"
	I0717 00:27:20.841436   13048 cri.go:89] found id: ""
	I0717 00:27:20.841445   13048 logs.go:276] 1 containers: [b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791]
	I0717 00:27:20.841498   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:20.845709   13048 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 00:27:20.845760   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 00:27:20.895540   13048 cri.go:89] found id: "0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e"
	I0717 00:27:20.895569   13048 cri.go:89] found id: ""
	I0717 00:27:20.895578   13048 logs.go:276] 1 containers: [0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e]
	I0717 00:27:20.895632   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:20.899816   13048 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 00:27:20.899881   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 00:27:20.941300   13048 cri.go:89] found id: "69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3"
	I0717 00:27:20.941328   13048 cri.go:89] found id: ""
	I0717 00:27:20.941336   13048 logs.go:276] 1 containers: [69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3]
	I0717 00:27:20.941386   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:20.947312   13048 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 00:27:20.947361   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 00:27:20.989117   13048 cri.go:89] found id: "a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf"
	I0717 00:27:20.989136   13048 cri.go:89] found id: ""
	I0717 00:27:20.989143   13048 logs.go:276] 1 containers: [a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf]
	I0717 00:27:20.989192   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:20.993916   13048 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 00:27:20.993969   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 00:27:21.003383   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:21.032585   13048 cri.go:89] found id: "229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f"
	I0717 00:27:21.032604   13048 cri.go:89] found id: ""
	I0717 00:27:21.032611   13048 logs.go:276] 1 containers: [229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f]
	I0717 00:27:21.032665   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:21.036603   13048 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 00:27:21.036673   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 00:27:21.088582   13048 cri.go:89] found id: ""
	I0717 00:27:21.088608   13048 logs.go:276] 0 containers: []
	W0717 00:27:21.088616   13048 logs.go:278] No container was found matching "kindnet"
	I0717 00:27:21.088624   13048 logs.go:123] Gathering logs for dmesg ...
	I0717 00:27:21.088635   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 00:27:21.104040   13048 logs.go:123] Gathering logs for describe nodes ...
	I0717 00:27:21.104072   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 00:27:21.227134   13048 logs.go:123] Gathering logs for etcd [b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791] ...
	I0717 00:27:21.227159   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791"
	I0717 00:27:21.282641   13048 logs.go:123] Gathering logs for kube-controller-manager [229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f] ...
	I0717 00:27:21.282672   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f"
	I0717 00:27:21.315375   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:21.363085   13048 logs.go:123] Gathering logs for CRI-O ...
	I0717 00:27:21.363120   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 00:27:21.502082   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:21.814045   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:22.003770   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:22.130168   13048 logs.go:123] Gathering logs for container status ...
	I0717 00:27:22.130205   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 00:27:22.181487   13048 logs.go:123] Gathering logs for kubelet ...
	I0717 00:27:22.181514   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 00:27:22.268749   13048 logs.go:123] Gathering logs for kube-apiserver [da60884c96d8871a1838a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7] ...
	I0717 00:27:22.268798   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da60884c96d8871a1838a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7"
	I0717 00:27:22.314131   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:22.318512   13048 logs.go:123] Gathering logs for coredns [0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e] ...
	I0717 00:27:22.318557   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e"
	I0717 00:27:22.360534   13048 logs.go:123] Gathering logs for kube-scheduler [69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3] ...
	I0717 00:27:22.360562   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3"
	I0717 00:27:22.409369   13048 logs.go:123] Gathering logs for kube-proxy [a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf] ...
	I0717 00:27:22.409404   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf"
	I0717 00:27:22.503731   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:22.813904   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:23.003350   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:23.314169   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:23.503278   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:23.813397   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:24.003358   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:24.313632   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:24.503073   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:24.812756   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:24.946681   13048 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I0717 00:27:24.951380   13048 api_server.go:279] https://192.168.39.177:8443/healthz returned 200:
	ok
	I0717 00:27:24.952227   13048 api_server.go:141] control plane version: v1.30.2
	I0717 00:27:24.952248   13048 api_server.go:131] duration metric: took 4.203791958s to wait for apiserver health ...
	I0717 00:27:24.952255   13048 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 00:27:24.952274   13048 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 00:27:24.952314   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 00:27:24.990327   13048 cri.go:89] found id: "da60884c96d8871a1838a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7"
	I0717 00:27:24.990347   13048 cri.go:89] found id: ""
	I0717 00:27:24.990356   13048 logs.go:276] 1 containers: [da60884c96d8871a1838a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7]
	I0717 00:27:24.990415   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:24.994421   13048 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 00:27:24.994477   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 00:27:25.005545   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:25.051652   13048 cri.go:89] found id: "b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791"
	I0717 00:27:25.051685   13048 cri.go:89] found id: ""
	I0717 00:27:25.051695   13048 logs.go:276] 1 containers: [b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791]
	I0717 00:27:25.051752   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:25.056135   13048 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 00:27:25.056198   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 00:27:25.104485   13048 cri.go:89] found id: "0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e"
	I0717 00:27:25.104512   13048 cri.go:89] found id: ""
	I0717 00:27:25.104534   13048 logs.go:276] 1 containers: [0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e]
	I0717 00:27:25.104590   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:25.108935   13048 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 00:27:25.109007   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 00:27:25.154746   13048 cri.go:89] found id: "69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3"
	I0717 00:27:25.154766   13048 cri.go:89] found id: ""
	I0717 00:27:25.154775   13048 logs.go:276] 1 containers: [69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3]
	I0717 00:27:25.154829   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:25.159320   13048 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 00:27:25.159370   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 00:27:25.198181   13048 cri.go:89] found id: "a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf"
	I0717 00:27:25.198210   13048 cri.go:89] found id: ""
	I0717 00:27:25.198218   13048 logs.go:276] 1 containers: [a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf]
	I0717 00:27:25.198266   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:25.202773   13048 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 00:27:25.202840   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 00:27:25.254005   13048 cri.go:89] found id: "229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f"
	I0717 00:27:25.254030   13048 cri.go:89] found id: ""
	I0717 00:27:25.254039   13048 logs.go:276] 1 containers: [229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f]
	I0717 00:27:25.254095   13048 ssh_runner.go:195] Run: which crictl
	I0717 00:27:25.258436   13048 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 00:27:25.258496   13048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 00:27:25.297168   13048 cri.go:89] found id: ""
	I0717 00:27:25.297195   13048 logs.go:276] 0 containers: []
	W0717 00:27:25.297203   13048 logs.go:278] No container was found matching "kindnet"
	I0717 00:27:25.297211   13048 logs.go:123] Gathering logs for kubelet ...
	I0717 00:27:25.297221   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 00:27:25.313798   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:25.382471   13048 logs.go:123] Gathering logs for kube-apiserver [da60884c96d8871a1838a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7] ...
	I0717 00:27:25.382506   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da60884c96d8871a1838a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7"
	I0717 00:27:25.433245   13048 logs.go:123] Gathering logs for kube-scheduler [69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3] ...
	I0717 00:27:25.433274   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3"
	I0717 00:27:25.490753   13048 logs.go:123] Gathering logs for container status ...
	I0717 00:27:25.490786   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 00:27:25.503230   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:25.549680   13048 logs.go:123] Gathering logs for dmesg ...
	I0717 00:27:25.549709   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 00:27:25.564895   13048 logs.go:123] Gathering logs for describe nodes ...
	I0717 00:27:25.564926   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 00:27:25.681789   13048 logs.go:123] Gathering logs for etcd [b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791] ...
	I0717 00:27:25.681834   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791"
	I0717 00:27:25.738851   13048 logs.go:123] Gathering logs for coredns [0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e] ...
	I0717 00:27:25.738888   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e"
	I0717 00:27:25.781125   13048 logs.go:123] Gathering logs for kube-proxy [a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf] ...
	I0717 00:27:25.781159   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf"
	I0717 00:27:25.813835   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:25.823555   13048 logs.go:123] Gathering logs for kube-controller-manager [229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f] ...
	I0717 00:27:25.823577   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f"
	I0717 00:27:25.890737   13048 logs.go:123] Gathering logs for CRI-O ...
	I0717 00:27:25.890770   13048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 00:27:26.003708   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:26.313918   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:26.510458   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:26.812914   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:27.001918   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:27.313800   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:27.502420   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:27.813354   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:28.002092   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:28.313131   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:28.502420   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:28.813383   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:29.002699   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:29.252761   13048 system_pods.go:59] 18 kube-system pods found
	I0717 00:27:29.252791   13048 system_pods.go:61] "coredns-7db6d8ff4d-bpp2w" [0d4f8b36-6961-478d-bbe7-5aded14a13ea] Running
	I0717 00:27:29.252795   13048 system_pods.go:61] "csi-hostpath-attacher-0" [8956758d-a3be-46a1-82c1-768f90c29424] Running
	I0717 00:27:29.252798   13048 system_pods.go:61] "csi-hostpath-resizer-0" [b89df6f6-48a1-4afd-b393-33932498e6e7] Running
	I0717 00:27:29.252801   13048 system_pods.go:61] "csi-hostpathplugin-96mlp" [8f0f8500-9872-4d20-9442-c719eae3b46b] Running
	I0717 00:27:29.252806   13048 system_pods.go:61] "etcd-addons-384227" [7803d027-ae67-4808-90d9-34d25a1f869b] Running
	I0717 00:27:29.252809   13048 system_pods.go:61] "kube-apiserver-addons-384227" [c8f18b31-b600-4d33-a43d-bf96e700fbda] Running
	I0717 00:27:29.252812   13048 system_pods.go:61] "kube-controller-manager-addons-384227" [f13bcf7c-6e34-4d97-97a6-90958791cb01] Running
	I0717 00:27:29.252816   13048 system_pods.go:61] "kube-ingress-dns-minikube" [959e53f2-7e3f-452f-b7ce-9f9134926b56] Running
	I0717 00:27:29.252818   13048 system_pods.go:61] "kube-proxy-9j492" [74949344-2223-4f8d-bc35-737de5d7f6e9] Running
	I0717 00:27:29.252821   13048 system_pods.go:61] "kube-scheduler-addons-384227" [13d1c064-225b-41db-bbbf-8e140311aaf0] Running
	I0717 00:27:29.252825   13048 system_pods.go:61] "metrics-server-c59844bb4-ptnnk" [3c732a54-ac1f-4d2b-8090-29a97aac2ca5] Running
	I0717 00:27:29.252828   13048 system_pods.go:61] "nvidia-device-plugin-daemonset-v6tmh" [cbb5bf86-4332-4b45-b6cf-4c77245158ed] Running
	I0717 00:27:29.252830   13048 system_pods.go:61] "registry-proxy-n2f8j" [b4af5a32-5f55-4f42-8506-d84f33c037ee] Running
	I0717 00:27:29.252833   13048 system_pods.go:61] "registry-wjhgl" [3387114c-1fe0-4740-98da-750978da9284] Running
	I0717 00:27:29.252835   13048 system_pods.go:61] "snapshot-controller-745499f584-d8fzs" [789ce441-6886-4b58-a02d-299ab7eb6f17] Running
	I0717 00:27:29.252838   13048 system_pods.go:61] "snapshot-controller-745499f584-hz4l5" [d27abf24-4a54-4c80-a3ea-04e54e66e0cb] Running
	I0717 00:27:29.252840   13048 system_pods.go:61] "storage-provisioner" [076c6e29-09df-469d-ae38-fe3a33503a57] Running
	I0717 00:27:29.252843   13048 system_pods.go:61] "tiller-deploy-6677d64bcd-h842v" [39eb0880-886d-42e4-b134-ac0f48c445e8] Running
	I0717 00:27:29.252848   13048 system_pods.go:74] duration metric: took 4.300588879s to wait for pod list to return data ...
	I0717 00:27:29.252854   13048 default_sa.go:34] waiting for default service account to be created ...
	I0717 00:27:29.255249   13048 default_sa.go:45] found service account: "default"
	I0717 00:27:29.255267   13048 default_sa.go:55] duration metric: took 2.407731ms for default service account to be created ...
	I0717 00:27:29.255275   13048 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 00:27:29.262808   13048 system_pods.go:86] 18 kube-system pods found
	I0717 00:27:29.262829   13048 system_pods.go:89] "coredns-7db6d8ff4d-bpp2w" [0d4f8b36-6961-478d-bbe7-5aded14a13ea] Running
	I0717 00:27:29.262834   13048 system_pods.go:89] "csi-hostpath-attacher-0" [8956758d-a3be-46a1-82c1-768f90c29424] Running
	I0717 00:27:29.262839   13048 system_pods.go:89] "csi-hostpath-resizer-0" [b89df6f6-48a1-4afd-b393-33932498e6e7] Running
	I0717 00:27:29.262842   13048 system_pods.go:89] "csi-hostpathplugin-96mlp" [8f0f8500-9872-4d20-9442-c719eae3b46b] Running
	I0717 00:27:29.262846   13048 system_pods.go:89] "etcd-addons-384227" [7803d027-ae67-4808-90d9-34d25a1f869b] Running
	I0717 00:27:29.262850   13048 system_pods.go:89] "kube-apiserver-addons-384227" [c8f18b31-b600-4d33-a43d-bf96e700fbda] Running
	I0717 00:27:29.262854   13048 system_pods.go:89] "kube-controller-manager-addons-384227" [f13bcf7c-6e34-4d97-97a6-90958791cb01] Running
	I0717 00:27:29.262858   13048 system_pods.go:89] "kube-ingress-dns-minikube" [959e53f2-7e3f-452f-b7ce-9f9134926b56] Running
	I0717 00:27:29.262862   13048 system_pods.go:89] "kube-proxy-9j492" [74949344-2223-4f8d-bc35-737de5d7f6e9] Running
	I0717 00:27:29.262865   13048 system_pods.go:89] "kube-scheduler-addons-384227" [13d1c064-225b-41db-bbbf-8e140311aaf0] Running
	I0717 00:27:29.262869   13048 system_pods.go:89] "metrics-server-c59844bb4-ptnnk" [3c732a54-ac1f-4d2b-8090-29a97aac2ca5] Running
	I0717 00:27:29.262875   13048 system_pods.go:89] "nvidia-device-plugin-daemonset-v6tmh" [cbb5bf86-4332-4b45-b6cf-4c77245158ed] Running
	I0717 00:27:29.262881   13048 system_pods.go:89] "registry-proxy-n2f8j" [b4af5a32-5f55-4f42-8506-d84f33c037ee] Running
	I0717 00:27:29.262886   13048 system_pods.go:89] "registry-wjhgl" [3387114c-1fe0-4740-98da-750978da9284] Running
	I0717 00:27:29.262890   13048 system_pods.go:89] "snapshot-controller-745499f584-d8fzs" [789ce441-6886-4b58-a02d-299ab7eb6f17] Running
	I0717 00:27:29.262894   13048 system_pods.go:89] "snapshot-controller-745499f584-hz4l5" [d27abf24-4a54-4c80-a3ea-04e54e66e0cb] Running
	I0717 00:27:29.262902   13048 system_pods.go:89] "storage-provisioner" [076c6e29-09df-469d-ae38-fe3a33503a57] Running
	I0717 00:27:29.262908   13048 system_pods.go:89] "tiller-deploy-6677d64bcd-h842v" [39eb0880-886d-42e4-b134-ac0f48c445e8] Running
	I0717 00:27:29.262914   13048 system_pods.go:126] duration metric: took 7.633601ms to wait for k8s-apps to be running ...
	I0717 00:27:29.262920   13048 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 00:27:29.262960   13048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:27:29.279262   13048 system_svc.go:56] duration metric: took 16.334722ms WaitForService to wait for kubelet
	I0717 00:27:29.279291   13048 kubeadm.go:582] duration metric: took 2m8.926823077s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:27:29.279319   13048 node_conditions.go:102] verifying NodePressure condition ...
	I0717 00:27:29.281909   13048 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 00:27:29.281934   13048 node_conditions.go:123] node cpu capacity is 2
	I0717 00:27:29.281946   13048 node_conditions.go:105] duration metric: took 2.621134ms to run NodePressure ...
	I0717 00:27:29.281956   13048 start.go:241] waiting for startup goroutines ...
	I0717 00:27:29.313504   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:29.504173   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:29.813157   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:30.002741   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:30.313584   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:30.502018   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:30.813135   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:31.002225   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:31.314033   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:31.501751   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:31.813776   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:32.007705   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:32.313327   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:32.505930   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:32.813806   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:33.004586   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:33.314080   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:33.502213   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:33.813846   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:34.002922   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:34.313186   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:34.505180   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:34.813708   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:35.002391   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:35.313037   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:35.501771   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:35.813780   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:36.002854   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:36.314078   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:36.502236   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:36.813509   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:37.002316   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:37.313338   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:37.502567   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:37.813895   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:38.002008   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:38.313618   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:38.501611   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:38.813649   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:39.002265   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:39.312364   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:39.505028   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:39.813558   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:40.002588   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:40.312831   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:40.503876   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:40.813732   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:41.001811   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:41.313941   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:41.503554   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:41.813677   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:42.002360   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:42.313296   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:42.504186   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:42.812955   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:43.002374   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:43.314242   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:43.503604   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:43.813361   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:44.002683   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:44.313042   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:44.508105   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:44.812922   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:45.001804   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:45.313811   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:45.501651   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:45.813604   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:46.004321   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:46.312561   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:46.504676   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:46.813309   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:47.001710   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:47.314867   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:47.501909   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:47.814508   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:48.004379   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:48.314270   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:48.502366   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:48.812694   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:49.003874   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:49.314431   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:49.502390   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:50.145862   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:50.147648   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:50.313691   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:50.504708   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:50.814126   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:51.002513   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:51.313854   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:51.501973   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:51.813613   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:52.003174   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:52.315534   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:52.509961   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:52.813659   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:53.002541   13048 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:53.313750   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:53.501967   13048 kapi.go:107] duration metric: took 2m24.504491423s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0717 00:27:53.813523   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:54.312746   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:54.812934   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:55.313484   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:55.813497   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:56.313517   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:56.813283   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:57.471258   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:57.813828   13048 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:27:58.313114   13048 kapi.go:107] duration metric: took 2m26.503531732s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0717 00:27:58.314824   13048 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-384227 cluster.
	I0717 00:27:58.316226   13048 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0717 00:27:58.317275   13048 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0717 00:27:58.318382   13048 out.go:177] * Enabled addons: storage-provisioner, nvidia-device-plugin, cloud-spanner, helm-tiller, metrics-server, inspektor-gadget, ingress-dns, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0717 00:27:58.319617   13048 addons.go:510] duration metric: took 2m37.967124214s for enable addons: enabled=[storage-provisioner nvidia-device-plugin cloud-spanner helm-tiller metrics-server inspektor-gadget ingress-dns yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0717 00:27:58.319657   13048 start.go:246] waiting for cluster config update ...
	I0717 00:27:58.319689   13048 start.go:255] writing updated cluster config ...
	I0717 00:27:58.319950   13048 ssh_runner.go:195] Run: rm -f paused
	I0717 00:27:58.368668   13048 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 00:27:58.370598   13048 out.go:177] * Done! kubectl is now configured to use "addons-384227" cluster and "default" namespace by default
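	For reference, the `gcp-auth-skip-secret` label mentioned in the addon output above is applied in a pod's own manifest at creation time, since the gcp-auth webhook decides whether to mount credentials when the pod is admitted. A minimal sketch is shown below; the pod name, image, and command are placeholders and are not taken from this test run, only the cluster context name comes from the log.

	    kubectl --context addons-384227 apply -f - <<'EOF'
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: demo-no-gcp-creds          # placeholder name, not part of this run
	      labels:
	        gcp-auth-skip-secret: "true"   # asks the gcp-auth webhook not to mount credentials
	    spec:
	      containers:
	      - name: app
	        image: busybox                 # placeholder image
	        command: ["sleep", "3600"]
	    EOF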
	
	
	==> CRI-O <==
	Jul 17 00:33:40 addons-384227 crio[684]: time="2024-07-17 00:33:40.591589636Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721176420591560710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580553,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4a3ef7b0-dbdd-47d9-9aee-427a4a597d49 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:33:40 addons-384227 crio[684]: time="2024-07-17 00:33:40.592423458Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c3fc54e2-6d8e-434b-a623-cd8e14e60d14 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:33:40 addons-384227 crio[684]: time="2024-07-17 00:33:40.592481190Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c3fc54e2-6d8e-434b-a623-cd8e14e60d14 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:33:40 addons-384227 crio[684]: time="2024-07-17 00:33:40.594176483Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e72d83e9b5c9d82b9f6b3d597d215164e0beebd5b195cf258344c9306c869b26,PodSandboxId:79c01f50e1dd893e76ebb15ed5073bb62f5e021c6ed2c8f574d6629cf7446d6a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721176257409503441,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-7dd2l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 83fa3d37-3105-471c-845f-7da9033760e7,},Annotations:map[string]string{io.kubernetes.container.hash: efd60ad4,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d04a55af0f41650383d558f5ead4f195d48bc0df1c6870b8e3d67fbdd8d7b2,PodSandboxId:2cee3fd0c167104c8b363d045beed2e397e7327f82428cf5917ee6176cc245cb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721176116599378585,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30ead786-c960-40a8-a321-2f7f774d10f6,},Annotations:map[string]string{io.kubernet
es.container.hash: 8b8e0d1c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2f231f824b96088f6892f30b23977945ec76b3f3f15ea1d13e33facbf2f421c,PodSandboxId:fa53b749e3d22bca71a68cac7665772627c0c77fac88f24097f8ae136bbcdbbd,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721176097247719519,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-7xwpc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: f011f7f1-0d18-4279-850b-076dcfcd6908,},Annotations:map[string]string{io.kubernetes.container.hash: f82954c0,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94408e45cb997f29a2489c6e0dde799186ae8e7b7d275d919d91531c55b579da,PodSandboxId:0c8a3036510204fe1af0c73baae3ec9b0477d4b24b426484d16441c3f6748311,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721176077898809247,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-q2bzh,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0e27fbdd-f0bc-44e2-a3e0-097e976a4a65,},Annotations:map[string]string{io.kubernetes.container.hash: 98cdb06c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cb930070e7d6e1ea4867c649aff3173f1433ef390032352666a2e71a23fcd0,PodSandboxId:5e4fda911a7a2e9811c3048f6276cd2453d949c6a6111b4f3e35f58817e6a661,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:
CONTAINER_RUNNING,CreatedAt:1721175998653196144,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-7sd2x,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: bb27b9e7-2f58-4ceb-b978-c36e88e06724,},Annotations:map[string]string{io.kubernetes.container.hash: 2db43ed4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37a0c16d156ac4f26606d741824826ef9651d013f8060c0a4f76cc1ef0f65c42,PodSandboxId:a0cdb9f22c37433ab387051e61c0c691567aa6b6a684240ad8ccca8abedcedc7,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46
891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721175981221455797,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-5nswx,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: f7985a97-d5e5-4554-b699-b8a01e187c7e,},Annotations:map[string]string{io.kubernetes.container.hash: f8d9ce34,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df35c92d870696b4fe64a2971d0e6ce353a4e9bbb8e8a9b474d874eb3d6f1ca3,PodSandboxId:981a8872fa005df2f1556778461f2d9ef2b23c48d2d81373545aa7003fd35930,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e
588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721175974627515033,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-ptnnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c732a54-ac1f-4d2b-8090-29a97aac2ca5,},Annotations:map[string]string{io.kubernetes.container.hash: 73ba5e1e,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f11103cc7df3635c1fbf9fb47b6f3a51eea6db7e78b2157fc67acfc2ea48721,PodSandboxId:e2fb26f681873456611c0c1b2a6f351494c6e0ea4d0e328c4d190f14bbce5b4d,Metadata:&ContainerMetadata{Name:storage-prov
isioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721175927146862191,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 076c6e29-09df-469d-ae38-fe3a33503a57,},Annotations:map[string]string{io.kubernetes.container.hash: a3725e87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e,PodSandboxId:889706a15ed043bce6e674ae47b9231f01559edb63b50514c8f794160938c8fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Imag
e:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721175924618196892,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bpp2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d4f8b36-6961-478d-bbe7-5aded14a13ea,},Annotations:map[string]string{io.kubernetes.container.hash: d83c619d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a96ca
0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf,PodSandboxId:fe5d0c7e3456871b3b06ac1d05703e21521e75395b4bead2abbee531ac8a0692,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721175921450104483,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9j492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74949344-2223-4f8d-bc35-737de5d7f6e9,},Annotations:map[string]string{io.kubernetes.container.hash: 82657c1b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69c0125279f370bf439392702153b56afdf605
5462e7799832bb02d0fa3a1bd3,PodSandboxId:0353bc35834e7a57cf74175cd9d37c05e9a4c04013e53c59aea31ce7c9323adb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721175901941504017,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-384227,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d17ebeaf4ff849d0ec1464ca5a7ba68,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1
981c9f1f,PodSandboxId:9082a8c6c36587af3f03297c0748e95e6d3c07d477cad43e2a9f9ca82017872a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721175901917599561,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-384227,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf035bf52a69f65af123fbdf3e000bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9851ce86beb538ffba87aaa958016675a8b3437813511
310f2cba6836816791,PodSandboxId:8e8f4d736d14524ec9ddd52fc06006289e179ed7d9400d8e115959ce227654b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721175901850939448,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-384227,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47b84062fa6b411a724bed1aa03732a,},Annotations:map[string]string{io.kubernetes.container.hash: 841c18dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da60884c96d8871a1838a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7,PodSandboxId:db2bb303a90cfc3
38e7ad665f9eb2392af5f30024759d91d97703673f4e69975,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721175901815159042,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-384227,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed91ad1ae7f2691bb897d09b2220db50,},Annotations:map[string]string{io.kubernetes.container.hash: 34b6a3d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c3fc54e2-6d8e-434b-a623-cd8e14e60d14 name=/runtime.v1.RuntimeService/ListCo
ntainers
	Jul 17 00:33:40 addons-384227 crio[684]: time="2024-07-17 00:33:40.638883311Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c952358f-33c1-4f59-bb30-88898773a0b2 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:33:40 addons-384227 crio[684]: time="2024-07-17 00:33:40.639049389Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c952358f-33c1-4f59-bb30-88898773a0b2 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:33:40 addons-384227 crio[684]: time="2024-07-17 00:33:40.640603033Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=87254ff1-daf8-4857-b2e2-974af10ffb2c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:33:40 addons-384227 crio[684]: time="2024-07-17 00:33:40.642604855Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721176420642572211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580553,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87254ff1-daf8-4857-b2e2-974af10ffb2c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:33:40 addons-384227 crio[684]: time="2024-07-17 00:33:40.643282189Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bbe685c8-d6ae-4c90-b8a0-dc2a5151bb53 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:33:40 addons-384227 crio[684]: time="2024-07-17 00:33:40.643356945Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bbe685c8-d6ae-4c90-b8a0-dc2a5151bb53 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:33:40 addons-384227 crio[684]: time="2024-07-17 00:33:40.643950133Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e72d83e9b5c9d82b9f6b3d597d215164e0beebd5b195cf258344c9306c869b26,PodSandboxId:79c01f50e1dd893e76ebb15ed5073bb62f5e021c6ed2c8f574d6629cf7446d6a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721176257409503441,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-7dd2l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 83fa3d37-3105-471c-845f-7da9033760e7,},Annotations:map[string]string{io.kubernetes.container.hash: efd60ad4,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d04a55af0f41650383d558f5ead4f195d48bc0df1c6870b8e3d67fbdd8d7b2,PodSandboxId:2cee3fd0c167104c8b363d045beed2e397e7327f82428cf5917ee6176cc245cb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721176116599378585,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30ead786-c960-40a8-a321-2f7f774d10f6,},Annotations:map[string]string{io.kubernet
es.container.hash: 8b8e0d1c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2f231f824b96088f6892f30b23977945ec76b3f3f15ea1d13e33facbf2f421c,PodSandboxId:fa53b749e3d22bca71a68cac7665772627c0c77fac88f24097f8ae136bbcdbbd,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721176097247719519,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-7xwpc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: f011f7f1-0d18-4279-850b-076dcfcd6908,},Annotations:map[string]string{io.kubernetes.container.hash: f82954c0,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94408e45cb997f29a2489c6e0dde799186ae8e7b7d275d919d91531c55b579da,PodSandboxId:0c8a3036510204fe1af0c73baae3ec9b0477d4b24b426484d16441c3f6748311,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721176077898809247,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-q2bzh,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0e27fbdd-f0bc-44e2-a3e0-097e976a4a65,},Annotations:map[string]string{io.kubernetes.container.hash: 98cdb06c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cb930070e7d6e1ea4867c649aff3173f1433ef390032352666a2e71a23fcd0,PodSandboxId:5e4fda911a7a2e9811c3048f6276cd2453d949c6a6111b4f3e35f58817e6a661,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:
CONTAINER_RUNNING,CreatedAt:1721175998653196144,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-7sd2x,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: bb27b9e7-2f58-4ceb-b978-c36e88e06724,},Annotations:map[string]string{io.kubernetes.container.hash: 2db43ed4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37a0c16d156ac4f26606d741824826ef9651d013f8060c0a4f76cc1ef0f65c42,PodSandboxId:a0cdb9f22c37433ab387051e61c0c691567aa6b6a684240ad8ccca8abedcedc7,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46
891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721175981221455797,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-5nswx,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: f7985a97-d5e5-4554-b699-b8a01e187c7e,},Annotations:map[string]string{io.kubernetes.container.hash: f8d9ce34,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df35c92d870696b4fe64a2971d0e6ce353a4e9bbb8e8a9b474d874eb3d6f1ca3,PodSandboxId:981a8872fa005df2f1556778461f2d9ef2b23c48d2d81373545aa7003fd35930,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e
588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721175974627515033,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-ptnnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c732a54-ac1f-4d2b-8090-29a97aac2ca5,},Annotations:map[string]string{io.kubernetes.container.hash: 73ba5e1e,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f11103cc7df3635c1fbf9fb47b6f3a51eea6db7e78b2157fc67acfc2ea48721,PodSandboxId:e2fb26f681873456611c0c1b2a6f351494c6e0ea4d0e328c4d190f14bbce5b4d,Metadata:&ContainerMetadata{Name:storage-prov
isioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721175927146862191,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 076c6e29-09df-469d-ae38-fe3a33503a57,},Annotations:map[string]string{io.kubernetes.container.hash: a3725e87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e,PodSandboxId:889706a15ed043bce6e674ae47b9231f01559edb63b50514c8f794160938c8fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Imag
e:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721175924618196892,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bpp2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d4f8b36-6961-478d-bbe7-5aded14a13ea,},Annotations:map[string]string{io.kubernetes.container.hash: d83c619d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a96ca
0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf,PodSandboxId:fe5d0c7e3456871b3b06ac1d05703e21521e75395b4bead2abbee531ac8a0692,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721175921450104483,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9j492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74949344-2223-4f8d-bc35-737de5d7f6e9,},Annotations:map[string]string{io.kubernetes.container.hash: 82657c1b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69c0125279f370bf439392702153b56afdf605
5462e7799832bb02d0fa3a1bd3,PodSandboxId:0353bc35834e7a57cf74175cd9d37c05e9a4c04013e53c59aea31ce7c9323adb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721175901941504017,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-384227,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d17ebeaf4ff849d0ec1464ca5a7ba68,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1
981c9f1f,PodSandboxId:9082a8c6c36587af3f03297c0748e95e6d3c07d477cad43e2a9f9ca82017872a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721175901917599561,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-384227,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf035bf52a69f65af123fbdf3e000bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9851ce86beb538ffba87aaa958016675a8b3437813511
310f2cba6836816791,PodSandboxId:8e8f4d736d14524ec9ddd52fc06006289e179ed7d9400d8e115959ce227654b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721175901850939448,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-384227,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47b84062fa6b411a724bed1aa03732a,},Annotations:map[string]string{io.kubernetes.container.hash: 841c18dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da60884c96d8871a1838a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7,PodSandboxId:db2bb303a90cfc3
38e7ad665f9eb2392af5f30024759d91d97703673f4e69975,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721175901815159042,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-384227,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed91ad1ae7f2691bb897d09b2220db50,},Annotations:map[string]string{io.kubernetes.container.hash: 34b6a3d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bbe685c8-d6ae-4c90-b8a0-dc2a5151bb53 name=/runtime.v1.RuntimeService/ListCo
ntainers
	Jul 17 00:33:40 addons-384227 crio[684]: time="2024-07-17 00:33:40.695560632Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=36ccf86d-32af-444e-a4c7-5046b03e6e08 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:33:40 addons-384227 crio[684]: time="2024-07-17 00:33:40.695655802Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=36ccf86d-32af-444e-a4c7-5046b03e6e08 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:33:40 addons-384227 crio[684]: time="2024-07-17 00:33:40.696932819Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a68edde5-082a-4fef-a9e6-092700bca69b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:33:40 addons-384227 crio[684]: time="2024-07-17 00:33:40.699080622Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721176420699044381,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580553,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a68edde5-082a-4fef-a9e6-092700bca69b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:33:40 addons-384227 crio[684]: time="2024-07-17 00:33:40.700249379Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf6195b9-1be5-47ec-866d-e59ab1aa311b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:33:40 addons-384227 crio[684]: time="2024-07-17 00:33:40.700330098Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf6195b9-1be5-47ec-866d-e59ab1aa311b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:33:40 addons-384227 crio[684]: time="2024-07-17 00:33:40.700944877Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e72d83e9b5c9d82b9f6b3d597d215164e0beebd5b195cf258344c9306c869b26,PodSandboxId:79c01f50e1dd893e76ebb15ed5073bb62f5e021c6ed2c8f574d6629cf7446d6a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721176257409503441,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-7dd2l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 83fa3d37-3105-471c-845f-7da9033760e7,},Annotations:map[string]string{io.kubernetes.container.hash: efd60ad4,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d04a55af0f41650383d558f5ead4f195d48bc0df1c6870b8e3d67fbdd8d7b2,PodSandboxId:2cee3fd0c167104c8b363d045beed2e397e7327f82428cf5917ee6176cc245cb,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721176116599378585,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30ead786-c960-40a8-a321-2f7f774d10f6,},Annotations:map[string]string{io.kubernet
es.container.hash: 8b8e0d1c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2f231f824b96088f6892f30b23977945ec76b3f3f15ea1d13e33facbf2f421c,PodSandboxId:fa53b749e3d22bca71a68cac7665772627c0c77fac88f24097f8ae136bbcdbbd,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721176097247719519,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-7xwpc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: f011f7f1-0d18-4279-850b-076dcfcd6908,},Annotations:map[string]string{io.kubernetes.container.hash: f82954c0,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94408e45cb997f29a2489c6e0dde799186ae8e7b7d275d919d91531c55b579da,PodSandboxId:0c8a3036510204fe1af0c73baae3ec9b0477d4b24b426484d16441c3f6748311,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721176077898809247,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-q2bzh,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0e27fbdd-f0bc-44e2-a3e0-097e976a4a65,},Annotations:map[string]string{io.kubernetes.container.hash: 98cdb06c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cb930070e7d6e1ea4867c649aff3173f1433ef390032352666a2e71a23fcd0,PodSandboxId:5e4fda911a7a2e9811c3048f6276cd2453d949c6a6111b4f3e35f58817e6a661,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:
CONTAINER_RUNNING,CreatedAt:1721175998653196144,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-7sd2x,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: bb27b9e7-2f58-4ceb-b978-c36e88e06724,},Annotations:map[string]string{io.kubernetes.container.hash: 2db43ed4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37a0c16d156ac4f26606d741824826ef9651d013f8060c0a4f76cc1ef0f65c42,PodSandboxId:a0cdb9f22c37433ab387051e61c0c691567aa6b6a684240ad8ccca8abedcedc7,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46
891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721175981221455797,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-5nswx,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: f7985a97-d5e5-4554-b699-b8a01e187c7e,},Annotations:map[string]string{io.kubernetes.container.hash: f8d9ce34,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df35c92d870696b4fe64a2971d0e6ce353a4e9bbb8e8a9b474d874eb3d6f1ca3,PodSandboxId:981a8872fa005df2f1556778461f2d9ef2b23c48d2d81373545aa7003fd35930,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e
588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721175974627515033,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-ptnnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c732a54-ac1f-4d2b-8090-29a97aac2ca5,},Annotations:map[string]string{io.kubernetes.container.hash: 73ba5e1e,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f11103cc7df3635c1fbf9fb47b6f3a51eea6db7e78b2157fc67acfc2ea48721,PodSandboxId:e2fb26f681873456611c0c1b2a6f351494c6e0ea4d0e328c4d190f14bbce5b4d,Metadata:&ContainerMetadata{Name:storage-prov
isioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721175927146862191,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 076c6e29-09df-469d-ae38-fe3a33503a57,},Annotations:map[string]string{io.kubernetes.container.hash: a3725e87,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e,PodSandboxId:889706a15ed043bce6e674ae47b9231f01559edb63b50514c8f794160938c8fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Imag
e:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721175924618196892,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bpp2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d4f8b36-6961-478d-bbe7-5aded14a13ea,},Annotations:map[string]string{io.kubernetes.container.hash: d83c619d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a96ca
0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf,PodSandboxId:fe5d0c7e3456871b3b06ac1d05703e21521e75395b4bead2abbee531ac8a0692,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721175921450104483,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9j492,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74949344-2223-4f8d-bc35-737de5d7f6e9,},Annotations:map[string]string{io.kubernetes.container.hash: 82657c1b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69c0125279f370bf439392702153b56afdf605
5462e7799832bb02d0fa3a1bd3,PodSandboxId:0353bc35834e7a57cf74175cd9d37c05e9a4c04013e53c59aea31ce7c9323adb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721175901941504017,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-384227,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d17ebeaf4ff849d0ec1464ca5a7ba68,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1
981c9f1f,PodSandboxId:9082a8c6c36587af3f03297c0748e95e6d3c07d477cad43e2a9f9ca82017872a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721175901917599561,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-384227,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf035bf52a69f65af123fbdf3e000bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9851ce86beb538ffba87aaa958016675a8b3437813511
310f2cba6836816791,PodSandboxId:8e8f4d736d14524ec9ddd52fc06006289e179ed7d9400d8e115959ce227654b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721175901850939448,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-384227,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b47b84062fa6b411a724bed1aa03732a,},Annotations:map[string]string{io.kubernetes.container.hash: 841c18dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da60884c96d8871a1838a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7,PodSandboxId:db2bb303a90cfc3
38e7ad665f9eb2392af5f30024759d91d97703673f4e69975,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721175901815159042,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-384227,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed91ad1ae7f2691bb897d09b2220db50,},Annotations:map[string]string{io.kubernetes.container.hash: 34b6a3d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf6195b9-1be5-47ec-866d-e59ab1aa311b name=/runtime.v1.RuntimeService/ListCo
ntainers
	Jul 17 00:33:40 addons-384227 crio[684]: time="2024-07-17 00:33:40.725180590Z" level=debug msg="Event: WRITE         \"/var/run/crio/exits/df35c92d870696b4fe64a2971d0e6ce353a4e9bbb8e8a9b474d874eb3d6f1ca3.O557Q2\"" file="server/server.go:805"
	Jul 17 00:33:40 addons-384227 crio[684]: time="2024-07-17 00:33:40.725254146Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/df35c92d870696b4fe64a2971d0e6ce353a4e9bbb8e8a9b474d874eb3d6f1ca3.O557Q2\"" file="server/server.go:805"
	Jul 17 00:33:40 addons-384227 crio[684]: time="2024-07-17 00:33:40.726420989Z" level=debug msg="Container or sandbox exited: df35c92d870696b4fe64a2971d0e6ce353a4e9bbb8e8a9b474d874eb3d6f1ca3.O557Q2" file="server/server.go:810"
	Jul 17 00:33:40 addons-384227 crio[684]: time="2024-07-17 00:33:40.726481866Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/df35c92d870696b4fe64a2971d0e6ce353a4e9bbb8e8a9b474d874eb3d6f1ca3\"" file="server/server.go:805"
	Jul 17 00:33:40 addons-384227 crio[684]: time="2024-07-17 00:33:40.726520406Z" level=debug msg="Container or sandbox exited: df35c92d870696b4fe64a2971d0e6ce353a4e9bbb8e8a9b474d874eb3d6f1ca3" file="server/server.go:810"
	Jul 17 00:33:40 addons-384227 crio[684]: time="2024-07-17 00:33:40.726555196Z" level=debug msg="container exited and found: df35c92d870696b4fe64a2971d0e6ce353a4e9bbb8e8a9b474d874eb3d6f1ca3" file="server/server.go:825"
	Jul 17 00:33:40 addons-384227 crio[684]: time="2024-07-17 00:33:40.726612332Z" level=debug msg="Event: RENAME        \"/var/run/crio/exits/df35c92d870696b4fe64a2971d0e6ce353a4e9bbb8e8a9b474d874eb3d6f1ca3.O557Q2\"" file="server/server.go:805"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e72d83e9b5c9d       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   79c01f50e1dd8       hello-world-app-6778b5fc9f-7dd2l
	d5d04a55af0f4       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                         5 minutes ago       Running             nginx                     0                   2cee3fd0c1671       nginx
	f2f231f824b96       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                   5 minutes ago       Running             headlamp                  0                   fa53b749e3d22       headlamp-7867546754-7xwpc
	94408e45cb997       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            5 minutes ago       Running             gcp-auth                  0                   0c8a303651020       gcp-auth-5db96cd9b4-q2bzh
	77cb930070e7d       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        7 minutes ago       Running             local-path-provisioner    0                   5e4fda911a7a2       local-path-provisioner-8d985888d-7sd2x
	37a0c16d156ac       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                         7 minutes ago       Running             yakd                      0                   a0cdb9f22c374       yakd-dashboard-799879c74f-5nswx
	df35c92d87069       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Running             metrics-server            0                   981a8872fa005       metrics-server-c59844bb4-ptnnk
	6f11103cc7df3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        8 minutes ago       Running             storage-provisioner       0                   e2fb26f681873       storage-provisioner
	0209929ebeb61       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        8 minutes ago       Running             coredns                   0                   889706a15ed04       coredns-7db6d8ff4d-bpp2w
	a96ca0d1a5578       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                                        8 minutes ago       Running             kube-proxy                0                   fe5d0c7e34568       kube-proxy-9j492
	69c0125279f37       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                                        8 minutes ago       Running             kube-scheduler            0                   0353bc35834e7       kube-scheduler-addons-384227
	229ef064e998c       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                                        8 minutes ago       Running             kube-controller-manager   0                   9082a8c6c3658       kube-controller-manager-addons-384227
	b9851ce86beb5       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        8 minutes ago       Running             etcd                      0                   8e8f4d736d145       etcd-addons-384227
	da60884c96d88       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                                        8 minutes ago       Running             kube-apiserver            0                   db2bb303a90cf       kube-apiserver-addons-384227
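Note: the table above is the runtime-level view of the node and matches the ListContainers responses in the CRI-O log further up (every listed container Running with restart count 0). A comparable listing can be pulled directly from the runtime; the invocation below is a sketch using this run's profile name:

  $ minikube -p addons-384227 ssh -- sudo crictl ps -a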
	
	
	==> coredns [0209929ebeb61091f1d76e312538bd044e17057c0361b290faa389de1f5c045e] <==
	[INFO] 10.244.0.8:55027 - 54143 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000466024s
	[INFO] 10.244.0.8:43443 - 33241 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000095988s
	[INFO] 10.244.0.8:43443 - 468 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000211613s
	[INFO] 10.244.0.8:54113 - 25447 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000061211s
	[INFO] 10.244.0.8:54113 - 18021 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000223573s
	[INFO] 10.244.0.8:45863 - 29037 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000158485s
	[INFO] 10.244.0.8:45863 - 883 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000207196s
	[INFO] 10.244.0.8:40057 - 31269 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000072192s
	[INFO] 10.244.0.8:40057 - 64807 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00019205s
	[INFO] 10.244.0.8:59528 - 11336 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000061484s
	[INFO] 10.244.0.8:59528 - 16206 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000101821s
	[INFO] 10.244.0.8:43158 - 31253 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000034669s
	[INFO] 10.244.0.8:43158 - 30743 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000024158s
	[INFO] 10.244.0.8:35027 - 21443 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000031607s
	[INFO] 10.244.0.8:35027 - 48833 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000049999s
	[INFO] 10.244.0.22:35812 - 19701 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000617312s
	[INFO] 10.244.0.22:34451 - 48891 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000861602s
	[INFO] 10.244.0.22:45930 - 8800 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000106058s
	[INFO] 10.244.0.22:38576 - 65027 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000088855s
	[INFO] 10.244.0.22:39604 - 20596 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000074083s
	[INFO] 10.244.0.22:45808 - 11643 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000059131s
	[INFO] 10.244.0.22:46166 - 62943 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000328867s
	[INFO] 10.244.0.22:57922 - 47835 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 230 0.000620649s
	[INFO] 10.244.0.25:51925 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000198131s
	[INFO] 10.244.0.25:48527 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000134125s
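Note: the NXDOMAIN entries here are expected rather than failures. Pod resolvers in Kubernetes use ndots:5 with the cluster search domains, so a short name such as registry.kube-system.svc.cluster.local is first tried with each search suffix appended (the *.kube-system.svc.cluster.local, *.svc.cluster.local and *.cluster.local variants above) before the bare name resolves NOERROR. This can be confirmed from any running pod; the example uses the nginx pod captured in this run:

  $ kubectl --context addons-384227 exec nginx -- cat /etc/resolv.conf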
	
	
	==> describe nodes <==
	Name:               addons-384227
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-384227
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=addons-384227
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T00_25_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-384227
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:25:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-384227
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:33:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:31:14 +0000   Wed, 17 Jul 2024 00:25:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:31:14 +0000   Wed, 17 Jul 2024 00:25:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:31:14 +0000   Wed, 17 Jul 2024 00:25:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:31:14 +0000   Wed, 17 Jul 2024 00:25:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.177
	  Hostname:    addons-384227
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 20653670fc6a410a9a9044868b0bb2a1
	  System UUID:                20653670-fc6a-410a-9a90-44868b0bb2a1
	  Boot ID:                    0f945363-fed3-48df-9f85-333a27814996
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-7dd2l          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  gcp-auth                    gcp-auth-5db96cd9b4-q2bzh                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	  headlamp                    headlamp-7867546754-7xwpc                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 coredns-7db6d8ff4d-bpp2w                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m21s
	  kube-system                 etcd-addons-384227                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m35s
	  kube-system                 kube-apiserver-addons-384227              250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 kube-controller-manager-addons-384227     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 kube-proxy-9j492                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 kube-scheduler-addons-384227              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m17s
	  local-path-storage          local-path-provisioner-8d985888d-7sd2x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m15s
	  yakd-dashboard              yakd-dashboard-799879c74f-5nswx           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     8m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m18s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m40s (x8 over 8m40s)  kubelet          Node addons-384227 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m40s (x8 over 8m40s)  kubelet          Node addons-384227 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m40s (x7 over 8m40s)  kubelet          Node addons-384227 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m35s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m35s                  kubelet          Node addons-384227 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m35s                  kubelet          Node addons-384227 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m35s                  kubelet          Node addons-384227 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m33s                  kubelet          Node addons-384227 status is now: NodeReady
	  Normal  RegisteredNode           8m22s                  node-controller  Node addons-384227 event: Registered Node addons-384227 in Controller
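Note: the percentages in the resource tables above are relative to the node's allocatable capacity (2 CPUs and 3912780Ki of memory here), so 750m of requested CPU is 750m / 2000m ≈ 37% and 298Mi of requested memory is roughly 7% of ~3821Mi. The same view can be regenerated at any time with:

  $ kubectl --context addons-384227 describe node addons-384227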
	
	
	==> dmesg <==
	[  +8.586109] systemd-fstab-generator[1490]: Ignoring "noauto" option for root device
	[  +5.205811] kauditd_printk_skb: 106 callbacks suppressed
	[  +5.012566] kauditd_printk_skb: 167 callbacks suppressed
	[  +6.741485] kauditd_printk_skb: 52 callbacks suppressed
	[Jul17 00:26] kauditd_printk_skb: 4 callbacks suppressed
	[ +32.069673] kauditd_printk_skb: 6 callbacks suppressed
	[  +8.224653] kauditd_printk_skb: 23 callbacks suppressed
	[Jul17 00:27] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.098996] kauditd_printk_skb: 48 callbacks suppressed
	[  +5.750463] kauditd_printk_skb: 40 callbacks suppressed
	[ +14.562253] kauditd_printk_skb: 2 callbacks suppressed
	[ +18.574850] kauditd_printk_skb: 24 callbacks suppressed
	[  +7.345460] kauditd_printk_skb: 15 callbacks suppressed
	[Jul17 00:28] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.190883] kauditd_printk_skb: 55 callbacks suppressed
	[  +7.068065] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.104805] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.215283] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.036573] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.045936] kauditd_printk_skb: 5 callbacks suppressed
	[  +8.491832] kauditd_printk_skb: 4 callbacks suppressed
	[Jul17 00:29] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.357291] kauditd_printk_skb: 33 callbacks suppressed
	[Jul17 00:30] kauditd_printk_skb: 6 callbacks suppressed
	[Jul17 00:33] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [b9851ce86beb538ffba87aaa958016675a8b3437813511310f2cba6836816791] <==
	{"level":"warn","ts":"2024-07-17T00:27:11.2174Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.185323ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85556"}
	{"level":"info","ts":"2024-07-17T00:27:11.217476Z","caller":"traceutil/trace.go:171","msg":"trace[1675704464] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1167; }","duration":"126.286912ms","start":"2024-07-17T00:27:11.09118Z","end":"2024-07-17T00:27:11.217467Z","steps":["trace[1675704464] 'agreement among raft nodes before linearized reading'  (duration: 125.771029ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:27:50.126375Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"331.036251ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4367"}
	{"level":"info","ts":"2024-07-17T00:27:50.126845Z","caller":"traceutil/trace.go:171","msg":"trace[182827127] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1255; }","duration":"331.526382ms","start":"2024-07-17T00:27:49.795294Z","end":"2024-07-17T00:27:50.12682Z","steps":["trace[182827127] 'range keys from in-memory index tree'  (duration: 330.904651ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:27:50.126901Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.682327ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T00:27:50.126954Z","caller":"traceutil/trace.go:171","msg":"trace[1815284874] range","detail":"{range_begin:/registry/networkpolicies/; range_end:/registry/networkpolicies0; response_count:0; response_revision:1255; }","duration":"146.769839ms","start":"2024-07-17T00:27:49.980175Z","end":"2024-07-17T00:27:50.126945Z","steps":["trace[1815284874] 'count revisions from in-memory index tree'  (duration: 146.591172ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:27:50.12693Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T00:27:49.795282Z","time spent":"331.625378ms","remote":"127.0.0.1:52714","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":4391,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-07-17T00:27:50.127163Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.712898ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-07-17T00:27:50.127203Z","caller":"traceutil/trace.go:171","msg":"trace[1165630251] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1255; }","duration":"143.751733ms","start":"2024-07-17T00:27:49.983444Z","end":"2024-07-17T00:27:50.127195Z","steps":["trace[1165630251] 'range keys from in-memory index tree'  (duration: 143.632386ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:27:57.450921Z","caller":"traceutil/trace.go:171","msg":"trace[1979275742] linearizableReadLoop","detail":"{readStateIndex:1329; appliedIndex:1328; }","duration":"200.542865ms","start":"2024-07-17T00:27:57.250363Z","end":"2024-07-17T00:27:57.450906Z","steps":["trace[1979275742] 'read index received'  (duration: 200.390653ms)","trace[1979275742] 'applied index is now lower than readState.Index'  (duration: 151.754µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:27:57.451075Z","caller":"traceutil/trace.go:171","msg":"trace[63194154] transaction","detail":"{read_only:false; response_revision:1278; number_of_response:1; }","duration":"449.856243ms","start":"2024-07-17T00:27:57.001211Z","end":"2024-07-17T00:27:57.451067Z","steps":["trace[63194154] 'process raft request'  (duration: 449.592836ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:27:57.451161Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T00:27:57.001196Z","time spent":"449.904812ms","remote":"127.0.0.1:52772","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1258 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2024-07-17T00:27:57.451377Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.222561ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4367"}
	{"level":"info","ts":"2024-07-17T00:27:57.452071Z","caller":"traceutil/trace.go:171","msg":"trace[1935906043] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1278; }","duration":"156.918808ms","start":"2024-07-17T00:27:57.295142Z","end":"2024-07-17T00:27:57.45206Z","steps":["trace[1935906043] 'agreement among raft nodes before linearized reading'  (duration: 156.144783ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:27:57.451514Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.145784ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:8"}
	{"level":"info","ts":"2024-07-17T00:27:57.452574Z","caller":"traceutil/trace.go:171","msg":"trace[917680487] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:1278; }","duration":"202.226302ms","start":"2024-07-17T00:27:57.250338Z","end":"2024-07-17T00:27:57.452564Z","steps":["trace[917680487] 'agreement among raft nodes before linearized reading'  (duration: 201.077172ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:28:15.832276Z","caller":"traceutil/trace.go:171","msg":"trace[906998417] transaction","detail":"{read_only:false; response_revision:1415; number_of_response:1; }","duration":"142.631563ms","start":"2024-07-17T00:28:15.689626Z","end":"2024-07-17T00:28:15.832258Z","steps":["trace[906998417] 'process raft request'  (duration: 142.434198ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:28:44.782786Z","caller":"traceutil/trace.go:171","msg":"trace[109758002] linearizableReadLoop","detail":"{readStateIndex:1673; appliedIndex:1672; }","duration":"295.246877ms","start":"2024-07-17T00:28:44.487496Z","end":"2024-07-17T00:28:44.782743Z","steps":["trace[109758002] 'read index received'  (duration: 295.083312ms)","trace[109758002] 'applied index is now lower than readState.Index'  (duration: 163.085µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:28:44.782912Z","caller":"traceutil/trace.go:171","msg":"trace[1149736703] transaction","detail":"{read_only:false; response_revision:1607; number_of_response:1; }","duration":"358.084269ms","start":"2024-07-17T00:28:44.424819Z","end":"2024-07-17T00:28:44.782903Z","steps":["trace[1149736703] 'process raft request'  (duration: 357.79811ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:28:44.783085Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T00:28:44.424804Z","time spent":"358.129455ms","remote":"127.0.0.1:52772","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1599 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2024-07-17T00:28:44.783095Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.580594ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6032"}
	{"level":"info","ts":"2024-07-17T00:28:44.783141Z","caller":"traceutil/trace.go:171","msg":"trace[687530247] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1607; }","duration":"197.649579ms","start":"2024-07-17T00:28:44.58548Z","end":"2024-07-17T00:28:44.78313Z","steps":["trace[687530247] 'agreement among raft nodes before linearized reading'  (duration: 197.485575ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:28:44.783253Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"295.756293ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T00:28:44.783267Z","caller":"traceutil/trace.go:171","msg":"trace[1490239454] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1607; }","duration":"295.770618ms","start":"2024-07-17T00:28:44.487492Z","end":"2024-07-17T00:28:44.783262Z","steps":["trace[1490239454] 'agreement among raft nodes before linearized reading'  (duration: 295.746801ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:29:20.401931Z","caller":"traceutil/trace.go:171","msg":"trace[748204291] transaction","detail":"{read_only:false; response_revision:1708; number_of_response:1; }","duration":"204.71526ms","start":"2024-07-17T00:29:20.197167Z","end":"2024-07-17T00:29:20.401883Z","steps":["trace[748204291] 'process raft request'  (duration: 204.35111ms)"],"step_count":1}
	
	
	==> gcp-auth [94408e45cb997f29a2489c6e0dde799186ae8e7b7d275d919d91531c55b579da] <==
	2024/07/17 00:27:58 GCP Auth Webhook started!
	2024/07/17 00:28:04 Ready to marshal response ...
	2024/07/17 00:28:04 Ready to write response ...
	2024/07/17 00:28:04 Ready to marshal response ...
	2024/07/17 00:28:04 Ready to write response ...
	2024/07/17 00:28:06 Ready to marshal response ...
	2024/07/17 00:28:06 Ready to write response ...
	2024/07/17 00:28:06 Ready to marshal response ...
	2024/07/17 00:28:06 Ready to write response ...
	2024/07/17 00:28:06 Ready to marshal response ...
	2024/07/17 00:28:06 Ready to write response ...
	2024/07/17 00:28:09 Ready to marshal response ...
	2024/07/17 00:28:09 Ready to write response ...
	2024/07/17 00:28:11 Ready to marshal response ...
	2024/07/17 00:28:11 Ready to write response ...
	2024/07/17 00:28:30 Ready to marshal response ...
	2024/07/17 00:28:30 Ready to write response ...
	2024/07/17 00:28:31 Ready to marshal response ...
	2024/07/17 00:28:31 Ready to write response ...
	2024/07/17 00:28:37 Ready to marshal response ...
	2024/07/17 00:28:37 Ready to write response ...
	2024/07/17 00:29:12 Ready to marshal response ...
	2024/07/17 00:29:12 Ready to write response ...
	2024/07/17 00:30:53 Ready to marshal response ...
	2024/07/17 00:30:53 Ready to write response ...
	
	
	==> kernel <==
	 00:33:41 up 9 min,  0 users,  load average: 0.20, 0.67, 0.52
	Linux addons-384227 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [da60884c96d8871a1838a9d477599b4f53460717b2c22a9d1f34fc3e2e2441d7] <==
	W0717 00:27:16.724572       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 00:27:16.724620       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0717 00:27:16.725248       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.53.214:443/apis/metrics.k8s.io/v1beta1: Get "https://10.106.53.214:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.106.53.214:443: connect: connection refused
	I0717 00:27:16.786890       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 00:28:06.282963       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.176.27"}
	I0717 00:28:30.863017       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0717 00:28:31.065868       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.25.103"}
	I0717 00:28:31.204187       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0717 00:28:32.243857       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0717 00:28:53.647883       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0717 00:29:28.607937       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:29:28.608121       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:29:28.701715       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:29:28.701795       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:29:28.722578       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:29:28.722688       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:29:28.736253       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:29:28.736303       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:29:28.772744       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:29:28.772893       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0717 00:29:29.737219       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0717 00:29:29.773749       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0717 00:29:29.778400       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0717 00:30:53.564620       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.95.48"}
	
	
	==> kube-controller-manager [229ef064e998c40ee955c3c926ebfdf617c511c0364d3e5a2c92bed1981c9f1f] <==
	W0717 00:31:54.119063       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:31:54.119190       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:31:58.117742       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:31:58.117839       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:32:12.390146       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:32:12.390231       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:32:24.128687       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:32:24.128750       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:32:28.266643       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:32:28.266738       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:32:41.384527       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:32:41.384639       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:32:56.833196       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:32:56.833297       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:33:02.698918       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:33:02.699119       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:33:02.739712       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:33:02.739817       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:33:27.107120       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:33:27.107224       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:33:37.592122       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:33:37.592234       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0717 00:33:39.591549       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="10.189µs"
	W0717 00:33:40.123334       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:33:40.123392       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [a96ca0d1a55782b6e69e5d9d86a20d24d27540902a999fe95fb24e1ea41d7dcf] <==
	I0717 00:25:22.337633       1 server_linux.go:69] "Using iptables proxy"
	I0717 00:25:22.378491       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.177"]
	I0717 00:25:22.478854       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 00:25:22.478894       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 00:25:22.478916       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:25:22.488357       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:25:22.488520       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:25:22.488530       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:25:22.492720       1 config.go:192] "Starting service config controller"
	I0717 00:25:22.492737       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:25:22.492766       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:25:22.492773       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:25:22.493358       1 config.go:319] "Starting node config controller"
	I0717 00:25:22.493366       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 00:25:22.593467       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 00:25:22.593525       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:25:22.593738       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [69c0125279f370bf439392702153b56afdf6055462e7799832bb02d0fa3a1bd3] <==
	E0717 00:25:04.366475       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 00:25:04.366563       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 00:25:04.366589       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 00:25:04.367052       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 00:25:05.347816       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 00:25:05.347879       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 00:25:05.489963       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 00:25:05.490056       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 00:25:05.504394       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:25:05.504720       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 00:25:05.527057       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 00:25:05.527158       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 00:25:05.581360       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:25:05.581416       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:25:05.581500       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 00:25:05.581533       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 00:25:05.586677       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 00:25:05.586779       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 00:25:05.615836       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:25:05.616534       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:25:05.643344       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 00:25:05.643387       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 00:25:05.668678       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 00:25:05.668715       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0717 00:25:08.148619       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 00:31:06 addons-384227 kubelet[1279]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:31:06 addons-384227 kubelet[1279]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:31:06 addons-384227 kubelet[1279]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:31:07 addons-384227 kubelet[1279]: I0717 00:31:07.926584    1279 scope.go:117] "RemoveContainer" containerID="8e294a534ddd7e802dd02f7ca5190a66a5510c9e6da657fe4b895f1c04eb2e49"
	Jul 17 00:31:07 addons-384227 kubelet[1279]: I0717 00:31:07.949207    1279 scope.go:117] "RemoveContainer" containerID="ca0e3bad5e6e935c0ca1502bef8e0da51a05241d9980a5229819c14f5bc61854"
	Jul 17 00:32:06 addons-384227 kubelet[1279]: E0717 00:32:06.937774    1279 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:32:06 addons-384227 kubelet[1279]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:32:06 addons-384227 kubelet[1279]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:32:06 addons-384227 kubelet[1279]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:32:06 addons-384227 kubelet[1279]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:33:06 addons-384227 kubelet[1279]: E0717 00:33:06.936489    1279 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:33:06 addons-384227 kubelet[1279]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:33:06 addons-384227 kubelet[1279]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:33:06 addons-384227 kubelet[1279]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:33:06 addons-384227 kubelet[1279]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:33:41 addons-384227 kubelet[1279]: I0717 00:33:41.049098    1279 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsgl9\" (UniqueName: \"kubernetes.io/projected/3c732a54-ac1f-4d2b-8090-29a97aac2ca5-kube-api-access-bsgl9\") pod \"3c732a54-ac1f-4d2b-8090-29a97aac2ca5\" (UID: \"3c732a54-ac1f-4d2b-8090-29a97aac2ca5\") "
	Jul 17 00:33:41 addons-384227 kubelet[1279]: I0717 00:33:41.049155    1279 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3c732a54-ac1f-4d2b-8090-29a97aac2ca5-tmp-dir\") pod \"3c732a54-ac1f-4d2b-8090-29a97aac2ca5\" (UID: \"3c732a54-ac1f-4d2b-8090-29a97aac2ca5\") "
	Jul 17 00:33:41 addons-384227 kubelet[1279]: I0717 00:33:41.049511    1279 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c732a54-ac1f-4d2b-8090-29a97aac2ca5-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3c732a54-ac1f-4d2b-8090-29a97aac2ca5" (UID: "3c732a54-ac1f-4d2b-8090-29a97aac2ca5"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 17 00:33:41 addons-384227 kubelet[1279]: I0717 00:33:41.055759    1279 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c732a54-ac1f-4d2b-8090-29a97aac2ca5-kube-api-access-bsgl9" (OuterVolumeSpecName: "kube-api-access-bsgl9") pod "3c732a54-ac1f-4d2b-8090-29a97aac2ca5" (UID: "3c732a54-ac1f-4d2b-8090-29a97aac2ca5"). InnerVolumeSpecName "kube-api-access-bsgl9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 00:33:41 addons-384227 kubelet[1279]: I0717 00:33:41.150130    1279 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bsgl9\" (UniqueName: \"kubernetes.io/projected/3c732a54-ac1f-4d2b-8090-29a97aac2ca5-kube-api-access-bsgl9\") on node \"addons-384227\" DevicePath \"\""
	Jul 17 00:33:41 addons-384227 kubelet[1279]: I0717 00:33:41.150173    1279 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3c732a54-ac1f-4d2b-8090-29a97aac2ca5-tmp-dir\") on node \"addons-384227\" DevicePath \"\""
	Jul 17 00:33:41 addons-384227 kubelet[1279]: I0717 00:33:41.268751    1279 scope.go:117] "RemoveContainer" containerID="df35c92d870696b4fe64a2971d0e6ce353a4e9bbb8e8a9b474d874eb3d6f1ca3"
	Jul 17 00:33:41 addons-384227 kubelet[1279]: I0717 00:33:41.317307    1279 scope.go:117] "RemoveContainer" containerID="df35c92d870696b4fe64a2971d0e6ce353a4e9bbb8e8a9b474d874eb3d6f1ca3"
	Jul 17 00:33:41 addons-384227 kubelet[1279]: E0717 00:33:41.319145    1279 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df35c92d870696b4fe64a2971d0e6ce353a4e9bbb8e8a9b474d874eb3d6f1ca3\": container with ID starting with df35c92d870696b4fe64a2971d0e6ce353a4e9bbb8e8a9b474d874eb3d6f1ca3 not found: ID does not exist" containerID="df35c92d870696b4fe64a2971d0e6ce353a4e9bbb8e8a9b474d874eb3d6f1ca3"
	Jul 17 00:33:41 addons-384227 kubelet[1279]: I0717 00:33:41.319369    1279 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df35c92d870696b4fe64a2971d0e6ce353a4e9bbb8e8a9b474d874eb3d6f1ca3"} err="failed to get container status \"df35c92d870696b4fe64a2971d0e6ce353a4e9bbb8e8a9b474d874eb3d6f1ca3\": rpc error: code = NotFound desc = could not find container \"df35c92d870696b4fe64a2971d0e6ce353a4e9bbb8e8a9b474d874eb3d6f1ca3\": container with ID starting with df35c92d870696b4fe64a2971d0e6ce353a4e9bbb8e8a9b474d874eb3d6f1ca3 not found: ID does not exist"
	
	
	==> storage-provisioner [6f11103cc7df3635c1fbf9fb47b6f3a51eea6db7e78b2157fc67acfc2ea48721] <==
	I0717 00:25:28.523766       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 00:25:28.613560       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 00:25:28.613636       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 00:25:28.648755       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 00:25:28.648899       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-384227_6206cfad-ba50-4cc5-8cd6-74e5502921b1!
	I0717 00:25:28.648960       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ff60ba14-39d3-4c95-a7ca-43d56f323290", APIVersion:"v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-384227_6206cfad-ba50-4cc5-8cd6-74e5502921b1 became leader
	I0717 00:25:28.957354       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-384227_6206cfad-ba50-4cc5-8cd6-74e5502921b1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-384227 -n addons-384227
helpers_test.go:261: (dbg) Run:  kubectl --context addons-384227 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (314.66s)
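The kube-apiserver and kube-controller-manager logs above point at the likely proximate cause: the v1beta1.metrics.k8s.io APIService never became available (the aggregation layer got "connection refused" dialing the metrics-server ClusterIP), so the controller-manager's metadata informers kept failing to list the aggregated resource. For local triage, a few read-only kubectl checks against the same profile can confirm the state; these are an assumed diagnostic aid rather than part of addons_test.go, and the k8s-app=metrics-server label is assumed from the stock minikube addon manifest:

    # Does the aggregated metrics API ever report Available=True?  (assumed diagnostic, not from the test)
    kubectl --context addons-384227 get apiservice v1beta1.metrics.k8s.io -o wide
    # Is the backing metrics-server pod running, and what is it logging?  (label selector assumed)
    kubectl --context addons-384227 -n kube-system get pods -l k8s-app=metrics-server
    kubectl --context addons-384227 -n kube-system logs -l k8s-app=metrics-server --tail=50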

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.4s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-384227
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-384227: exit status 82 (2m0.446464834s)

                                                
                                                
-- stdout --
	* Stopping node "addons-384227"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-384227" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-384227
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-384227: exit status 11 (21.661485567s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.177:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-384227" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-384227
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-384227: exit status 11 (6.143498743s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.177:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-384227" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-384227
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-384227: exit status 11 (6.143945052s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.177:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-384227" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.40s)
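Every command this test drove appears verbatim above, so the failing sequence can be replayed outside the harness. The sketch below is only a manual reproduction aid; it assumes out/minikube-linux-amd64 was built from the commit under test and that an addons profile with the same name exists. The interesting step is the first one, which timed out with GUEST_STOP_TIMEOUT (exit status 82) in this run:

    # Replay of the TestAddons/StoppedEnableDisable sequence from the logs above
    out/minikube-linux-amd64 stop -p addons-384227
    out/minikube-linux-amd64 addons enable dashboard -p addons-384227
    out/minikube-linux-amd64 addons disable dashboard -p addons-384227
    out/minikube-linux-amd64 addons disable gvisor -p addons-384227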

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (190.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [a95fd0f7-52f6-4dfc-aaa2-cee480c08370] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004865215s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-023523 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-023523 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-023523 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-023523 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-023523 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7841f197-8206-42b1-92cf-81e5d44443a0] Pending
E0717 00:40:42.223756   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [7841f197-8206-42b1-92cf-81e5d44443a0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: persistentvolume "pvc-64c49e7f-d009-4750-9a6f-592d6dc98eeb" not found. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
helpers_test.go:344: "sp-pod" [7841f197-8206-42b1-92cf-81e5d44443a0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) unavailable due to one or more pvc(s) bound to non-existent pv(s). preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-023523 -n functional-023523
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2024-07-17 00:43:26.40926825 +0000 UTC m=+1305.457149363
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-023523 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-023523 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
  myfrontend:
    Image:        docker.io/nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4mndx (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-4mndx:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  2m31s (x2 over 2m33s)  default-scheduler  0/1 nodes are available: persistentvolume "pvc-64c49e7f-d009-4750-9a6f-592d6dc98eeb" not found. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
  Warning  FailedScheduling  2m21s                  default-scheduler  0/1 nodes are available: 1 node(s) unavailable due to one or more pvc(s) bound to non-existent pv(s). preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-023523 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-023523 logs sp-pod -n default:
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
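The FailedScheduling events above indicate the claim was never bound: the scheduler first could not find the named PersistentVolume, then reported the PVC as bound to a non-existent PV. Outside the test flow, the claim, the cluster's volumes, and the storage-provisioner pod (shown Running earlier in this test) can be inspected directly; these are assumed diagnostic commands, not calls made by functional_test_pvc_test.go:

    # Assumed follow-up checks against the same profile
    kubectl --context functional-023523 get pvc myclaim -o wide
    kubectl --context functional-023523 get pv
    kubectl --context functional-023523 -n kube-system logs storage-provisioner --tail=50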
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-023523 -n functional-023523
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-023523 logs -n 25: (1.47621907s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| update-context | functional-023523                                                        | functional-023523 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC | 17 Jul 24 00:41 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| update-context | functional-023523                                                        | functional-023523 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC | 17 Jul 24 00:41 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| image          | functional-023523                                                        | functional-023523 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC | 17 Jul 24 00:41 UTC |
	|                | image ls --format short                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-023523                                                        | functional-023523 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC | 17 Jul 24 00:41 UTC |
	|                | image ls --format yaml                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh            | functional-023523 ssh pgrep                                              | functional-023523 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC |                     |
	|                | buildkitd                                                                |                   |         |         |                     |                     |
	| image          | functional-023523 image build -t                                         | functional-023523 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC | 17 Jul 24 00:41 UTC |
	|                | localhost/my-image:functional-023523                                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                         |                   |         |         |                     |                     |
	| image          | functional-023523 image ls                                               | functional-023523 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC | 17 Jul 24 00:41 UTC |
	| image          | functional-023523                                                        | functional-023523 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC | 17 Jul 24 00:41 UTC |
	|                | image ls --format json                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-023523                                                        | functional-023523 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC | 17 Jul 24 00:41 UTC |
	|                | image ls --format table                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh            | functional-023523 ssh stat                                               | functional-023523 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC | 17 Jul 24 00:41 UTC |
	|                | /mount-9p/created-by-test                                                |                   |         |         |                     |                     |
	| ssh            | functional-023523 ssh stat                                               | functional-023523 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC | 17 Jul 24 00:41 UTC |
	|                | /mount-9p/created-by-pod                                                 |                   |         |         |                     |                     |
	| ssh            | functional-023523 ssh sudo                                               | functional-023523 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC | 17 Jul 24 00:41 UTC |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| ssh            | functional-023523 ssh findmnt                                            | functional-023523 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC |                     |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-023523                                                     | functional-023523 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdspecific-port3318578296/001:/mount-9p |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1 --port 46464                                      |                   |         |         |                     |                     |
	| ssh            | functional-023523 ssh findmnt                                            | functional-023523 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC | 17 Jul 24 00:41 UTC |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh            | functional-023523 ssh -- ls                                              | functional-023523 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC | 17 Jul 24 00:41 UTC |
	|                | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| ssh            | functional-023523 ssh sudo                                               | functional-023523 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC |                     |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount          | -p functional-023523                                                     | functional-023523 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2353018226/001:/mount2   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh            | functional-023523 ssh findmnt                                            | functional-023523 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC |                     |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| mount          | -p functional-023523                                                     | functional-023523 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2353018226/001:/mount1   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-023523                                                     | functional-023523 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2353018226/001:/mount3   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh            | functional-023523 ssh findmnt                                            | functional-023523 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC | 17 Jul 24 00:41 UTC |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh            | functional-023523 ssh findmnt                                            | functional-023523 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC | 17 Jul 24 00:41 UTC |
	|                | -T /mount2                                                               |                   |         |         |                     |                     |
	| ssh            | functional-023523 ssh findmnt                                            | functional-023523 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC | 17 Jul 24 00:41 UTC |
	|                | -T /mount3                                                               |                   |         |         |                     |                     |
	| mount          | -p functional-023523                                                     | functional-023523 | jenkins | v1.33.1 | 17 Jul 24 00:41 UTC |                     |
	|                | --kill=true                                                              |                   |         |         |                     |                     |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
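	Note: the rows above are the tail of the minikube command history for the functional-023523 profile; the mount/findmnt/umount entries record the 9p mount verification flow. A minimal way to replay that check by hand, assuming the same profile (the host directory below is a placeholder, the test uses a generated temp dir, and flags otherwise follow the table; exact options can vary by minikube version):

	# start the 9p mount in the background (host dir is a placeholder)
	minikube mount -p functional-023523 /tmp/mount-src:/mount-9p --port 46464 --alsologtostderr -v=1 &
	# verify the mount is visible inside the guest
	minikube -p functional-023523 ssh "findmnt -T /mount-9p | grep 9p"
	minikube -p functional-023523 ssh "ls -la /mount-9p"
	# tear the mount process down again
	minikube mount -p functional-023523 --kill=true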
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:41:27
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:41:27.030446   21889 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:41:27.030533   21889 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:41:27.030538   21889 out.go:304] Setting ErrFile to fd 2...
	I0717 00:41:27.030542   21889 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:41:27.030745   21889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 00:41:27.031235   21889 out.go:298] Setting JSON to false
	I0717 00:41:27.032173   21889 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1429,"bootTime":1721175458,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:41:27.032232   21889 start.go:139] virtualization: kvm guest
	I0717 00:41:27.034517   21889 out.go:177] * [functional-023523] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 00:41:27.036121   21889 notify.go:220] Checking for updates...
	I0717 00:41:27.036127   21889 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 00:41:27.037568   21889 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:41:27.039316   21889 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 00:41:27.040713   21889 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 00:41:27.041987   21889 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 00:41:27.043351   21889 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:41:27.045095   21889 config.go:182] Loaded profile config "functional-023523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:41:27.045491   21889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:41:27.045539   21889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:41:27.064028   21889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39345
	I0717 00:41:27.064466   21889 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:41:27.065046   21889 main.go:141] libmachine: Using API Version  1
	I0717 00:41:27.065071   21889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:41:27.065394   21889 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:41:27.066257   21889 main.go:141] libmachine: (functional-023523) Calling .DriverName
	I0717 00:41:27.066573   21889 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:41:27.067012   21889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:41:27.067057   21889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:41:27.085989   21889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44133
	I0717 00:41:27.086408   21889 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:41:27.086972   21889 main.go:141] libmachine: Using API Version  1
	I0717 00:41:27.086992   21889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:41:27.087271   21889 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:41:27.087517   21889 main.go:141] libmachine: (functional-023523) Calling .DriverName
	I0717 00:41:27.124666   21889 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 00:41:27.126141   21889 start.go:297] selected driver: kvm2
	I0717 00:41:27.126175   21889 start.go:901] validating driver "kvm2" against &{Name:functional-023523 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:functional-023523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Moun
t:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:41:27.126289   21889 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:41:27.127552   21889 cni.go:84] Creating CNI manager for ""
	I0717 00:41:27.127572   21889 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 00:41:27.127649   21889 start.go:340] cluster config:
	{Name:functional-023523 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-023523 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:41:27.130063   21889 out.go:177] * dry-run validation complete!
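	The "Last Start" log above ends with "dry-run validation complete!", i.e. it records a start that only validated the existing functional-023523 profile (kvm2 driver, CRI-O runtime, Kubernetes v1.30.2) without mutating the VM. A roughly equivalent invocation, assuming the same profile and the MINIKUBE_BIN binary shown above (the test harness may pass additional flags):

	# validate the saved profile configuration without starting or changing the machine
	out/minikube-linux-amd64 start -p functional-023523 --dry-run --alsologtostderr -v=1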
	
	
	==> CRI-O <==
	Jul 17 00:43:27 functional-023523 crio[4976]: time="2024-07-17 00:43:27.166934193Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721177007166908747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:250732,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=186493fd-1576-4aac-ba34-63f3837cbdc5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:43:27 functional-023523 crio[4976]: time="2024-07-17 00:43:27.167728724Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e16cb844-78ac-4612-baf8-39c217d26d2d name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:43:27 functional-023523 crio[4976]: time="2024-07-17 00:43:27.167780341Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e16cb844-78ac-4612-baf8-39c217d26d2d name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:43:27 functional-023523 crio[4976]: time="2024-07-17 00:43:27.168186214Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c9a606d5bb6109ebf52fd768b64f24be44e9bf7e39fd1211d356fd601a42bb72,PodSandboxId:d4670a074e59fb031e6c17958a10c2f11bf143ff8d30dfd8470d7dd7dae80125,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1721176902994895189,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-779776cb65-ckndh,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 816e6cd2-2ed7-497e-a215-9661100fb415,},Annotations:map[string]string{io.kubernetes.container.has
h: 3dbb7c68,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6576513c4a31dcaea18db3e48507d497d345680ba3d782ea1177a95d703216,PodSandboxId:c0208ee3e0dee180aae97ed02b8352c2313bf0447604d63f7b910a13a5b11b59,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1721176895964847704,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-b5fc48f67-fd6gh,io.kubernetes
.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: f75595fa-b139-4964-9671-9abd33562d43,},Annotations:map[string]string{io.kubernetes.container.hash: a6f8f295,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:115c97017c755803c1eb4b329bd91de58af8b6f58fb9ecf2cb216ab2ba03bdca,PodSandboxId:8b6fdb77f0dccceed5017d744b3bb38ceb8c27d63b449cca838d64ae8d434f84,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1721176891809775025,Labels:map[string]string{io.k
ubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2305f04-cb6a-472a-828d-4da88721a51c,},Annotations:map[string]string{io.kubernetes.container.hash: 8c15aed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e07db8dcac4db3a6cba643b818a3a06c1336ef768526708476b85ec78773c5,PodSandboxId:ec7dd7c34aba81194af28a607d9cba43d76ed4a59d64152155290d09873e1867,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1721176879039701246,Labels:map[string]string{io.kubern
etes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6d85cfcfd8-cff5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82159e3d-9613-465f-936c-7b8d0ab66aad,},Annotations:map[string]string{io.kubernetes.container.hash: e4a93296,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f24e81bbd8185613a4ed35760f00768d8e279bf5b0828ce3490b323feb0eefbb,PodSandboxId:bec280a555aa86bea2e01cc1865bb3b69fa37840bdf5e230245d2dea4457c5a9,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1721176878953982764,Labels:map[string]string{i
o.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-57b4589c47-t5ssg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2f67a68e-2a30-4bcc-89c7-cd3c2059ac27,},Annotations:map[string]string{io.kubernetes.container.hash: 79d3bd47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0f3d584245e99add69cd983bedc10e9ed3a3a32c650232ee2f40ef8cc294c3e,PodSandboxId:41d184b7b75f9221f55c968f851418b0c093d39708bd1923c5b8755d16c2e24b,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1721176875189187841,Labels:map[string]
string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-64454c8b5c-l95r4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 160e20ce-943b-4253-9ed5-c7eced8fd388,},Annotations:map[string]string{io.kubernetes.container.hash: e8c28480,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1911e668601d42a4216fc8178cc05652e9b97ea735929e3f0fcc129c194bb95f,PodSandboxId:dfeff8a1adea2050e2759592f655d186d1733036497a91c793a5d476d129b84d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:
CONTAINER_RUNNING,CreatedAt:1721176797594944735,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a737f7de401d8e766eed8680117d019a,},Annotations:map[string]string{io.kubernetes.container.hash: 12562423,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc2516f66d8d07fcce188402ec254604a86851e265ccc82fab073fc9657fb4e8,PodSandboxId:636356dd1a6c5b6f6644668fd6d1734f0d6d47643192effd6222a22494517aff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1721176772871270247,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjbkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bdfe3f-8ffd-481b-957b-5614c55db709,},Annotations:map[string]string{io.kubernetes.container.hash: 1473a04b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acdc589ecd79cc98d5f3cae8f57b47a03938cc7212671bb3e05f2cf37f28f792,PodSandboxId:a0f603dfea18d0566d0505b5d72f290badf9b4a9aa66368ed2b7e4b2829dbae2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721176772887
025146,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95fd0f7-52f6-4dfc-aaa2-cee480c08370,},Annotations:map[string]string{io.kubernetes.container.hash: 5dda6837,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3306368814bbe3d95dee6d3b3acf799a9c5da78df71175df3023d28c4d09a8dd,PodSandboxId:d07129c110eaa51dcf734671153ebd13f1538b215f44805e2e6f3d693c4ead05,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721176772864231234,Labels:map[string
]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jmdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d291c533-2504-4d88-a868-aebd13fc2e0e,},Annotations:map[string]string{io.kubernetes.container.hash: 5e43a08a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f46722747ae61c016808f072be409082c74213644e7ce9053b90f17e4a4f4c05,PodSandboxId:dfeff8a1adea2050e2759592f655d186d1733036497a91c793a5d476d129b84d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721176772806479295,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a737f7de401d8e766eed8680117d019a,},Annotations:map[string]string{io.kubernetes.container.hash: 12562423,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d7472a2bf1e530f31c5f3b38ba95281fe3834746b6b885997d7859fcadcad5d,PodSandboxId:f826269ceaebc0585f63d107be8021a0389be90f9ad1d3db0b3b617c47dc62c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721176767351150020,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f3152cf85e2441b1f6e75150cc6fcb1,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1742d4101b2e909654b62157da7b68fa00186b4f0ee0eaaff36e61b4b66438,PodSandboxId:9baf2c1d74d8fd80202b57393a1cff9381c1a573614ae7951c3d394a6f1ad1da,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721176764451298959,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3361171e0e0bb74952d2e5ceb4abf9c2,},Annotations:map[string]string{io.kubernetes.container.hash: 3989091a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804eb9eab47e5cc0cda9b131547f3205dedd6daf322bb11a55afe58785f3e6b7,PodSandboxId:ca9630971a3f62517e09b290d11196c2fb7bbf422b2dee9f5794c7190ad06ae7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721176764075151378,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 504a8f5b62466e419fdb4fc4dbe4e6d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:050a6ef5633b023ad48bd34c5a4b766b48a9226dac24ace1c0a75c81def686fd,PodSandboxId:d07129c110eaa51dcf734671153ebd13f1538b215f44805e2e6f3d693c4ead05,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721176759599774206,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jmdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d291c533-2504-4d88-a868-aebd13fc2e0e,},Annotations:map[string]string{io.kubernetes.container.hash: 5e43a08a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6085c3bb04b664b38068ba6fa846689cbd98bebc6d716429c8a0d972c63ad89,PodSandboxId:13f70319954b60aa491cf43b2f5b98d3918c6e28c27de0eed142ac9dd135375b,Metadata:&ContainerMetad
ata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721176722824105863,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjbkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bdfe3f-8ffd-481b-957b-5614c55db709,},Annotations:map[string]string{io.kubernetes.container.hash: 1473a04b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d38a2c8af50a4b2ca3f3874a937953bc2d4f9127253e1b8d2b7dbabb7ff755c3,PodSandboxId:25623fe79adb85b196a7087f806bdfd2ac1305fcd3b4ee6539b95287fa5fa8c4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attem
pt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721176722831570760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95fd0f7-52f6-4dfc-aaa2-cee480c08370,},Annotations:map[string]string{io.kubernetes.container.hash: 5dda6837,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27035f1ad13efd1b2a239015c9be60acafcdd8b9a2eba07f4d1eda8dce63b0e6,PodSandboxId:aa007c9888abc3b89308750d1164965fc58ca16fc2525e479af30acad4ab06b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Imag
e:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721176718976007236,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3361171e0e0bb74952d2e5ceb4abf9c2,},Annotations:map[string]string{io.kubernetes.container.hash: 3989091a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19304579b202ca1ad9ac19a30589de69489748b38c0c369874fc47ec2e009ba3,PodSandboxId:22a99b9a3d26fc3b7ae33f5f6f4c7cea096bb84bcbff724369310ceb7b966338,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb
96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721176718977278923,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 504a8f5b62466e419fdb4fc4dbe4e6d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6626d24fd162d867d1047572fde888420f168a64a164d9c777eb8add6e073ddd,PodSandboxId:f63528fe9acd8c11df088a050bc3a33a71ca95295df48097379d818cac12c255,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e
9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721176718952915756,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f3152cf85e2441b1f6e75150cc6fcb1,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e16cb844-78ac-4612-baf8-39c217d26d2d name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:43:27 functional-023523 crio[4976]: time="2024-07-17 00:43:27.213713560Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fd222150-27c1-44be-a41b-363dcf023cc7 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:43:27 functional-023523 crio[4976]: time="2024-07-17 00:43:27.213796784Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd222150-27c1-44be-a41b-363dcf023cc7 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:43:27 functional-023523 crio[4976]: time="2024-07-17 00:43:27.223175640Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=16cef087-0808-408a-a82d-c044134695ff name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:43:27 functional-023523 crio[4976]: time="2024-07-17 00:43:27.223952694Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721177007223930356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:250732,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=16cef087-0808-408a-a82d-c044134695ff name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:43:27 functional-023523 crio[4976]: time="2024-07-17 00:43:27.224447921Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=640653e3-f3d2-4166-9097-2579caf027f2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:43:27 functional-023523 crio[4976]: time="2024-07-17 00:43:27.224574855Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=640653e3-f3d2-4166-9097-2579caf027f2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:43:27 functional-023523 crio[4976]: time="2024-07-17 00:43:27.224959977Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c9a606d5bb6109ebf52fd768b64f24be44e9bf7e39fd1211d356fd601a42bb72,PodSandboxId:d4670a074e59fb031e6c17958a10c2f11bf143ff8d30dfd8470d7dd7dae80125,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1721176902994895189,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-779776cb65-ckndh,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 816e6cd2-2ed7-497e-a215-9661100fb415,},Annotations:map[string]string{io.kubernetes.container.has
h: 3dbb7c68,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6576513c4a31dcaea18db3e48507d497d345680ba3d782ea1177a95d703216,PodSandboxId:c0208ee3e0dee180aae97ed02b8352c2313bf0447604d63f7b910a13a5b11b59,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1721176895964847704,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-b5fc48f67-fd6gh,io.kubernetes
.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: f75595fa-b139-4964-9671-9abd33562d43,},Annotations:map[string]string{io.kubernetes.container.hash: a6f8f295,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:115c97017c755803c1eb4b329bd91de58af8b6f58fb9ecf2cb216ab2ba03bdca,PodSandboxId:8b6fdb77f0dccceed5017d744b3bb38ceb8c27d63b449cca838d64ae8d434f84,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1721176891809775025,Labels:map[string]string{io.k
ubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2305f04-cb6a-472a-828d-4da88721a51c,},Annotations:map[string]string{io.kubernetes.container.hash: 8c15aed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e07db8dcac4db3a6cba643b818a3a06c1336ef768526708476b85ec78773c5,PodSandboxId:ec7dd7c34aba81194af28a607d9cba43d76ed4a59d64152155290d09873e1867,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1721176879039701246,Labels:map[string]string{io.kubern
etes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6d85cfcfd8-cff5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82159e3d-9613-465f-936c-7b8d0ab66aad,},Annotations:map[string]string{io.kubernetes.container.hash: e4a93296,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f24e81bbd8185613a4ed35760f00768d8e279bf5b0828ce3490b323feb0eefbb,PodSandboxId:bec280a555aa86bea2e01cc1865bb3b69fa37840bdf5e230245d2dea4457c5a9,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1721176878953982764,Labels:map[string]string{i
o.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-57b4589c47-t5ssg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2f67a68e-2a30-4bcc-89c7-cd3c2059ac27,},Annotations:map[string]string{io.kubernetes.container.hash: 79d3bd47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0f3d584245e99add69cd983bedc10e9ed3a3a32c650232ee2f40ef8cc294c3e,PodSandboxId:41d184b7b75f9221f55c968f851418b0c093d39708bd1923c5b8755d16c2e24b,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1721176875189187841,Labels:map[string]
string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-64454c8b5c-l95r4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 160e20ce-943b-4253-9ed5-c7eced8fd388,},Annotations:map[string]string{io.kubernetes.container.hash: e8c28480,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1911e668601d42a4216fc8178cc05652e9b97ea735929e3f0fcc129c194bb95f,PodSandboxId:dfeff8a1adea2050e2759592f655d186d1733036497a91c793a5d476d129b84d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:
CONTAINER_RUNNING,CreatedAt:1721176797594944735,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a737f7de401d8e766eed8680117d019a,},Annotations:map[string]string{io.kubernetes.container.hash: 12562423,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc2516f66d8d07fcce188402ec254604a86851e265ccc82fab073fc9657fb4e8,PodSandboxId:636356dd1a6c5b6f6644668fd6d1734f0d6d47643192effd6222a22494517aff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1721176772871270247,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjbkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bdfe3f-8ffd-481b-957b-5614c55db709,},Annotations:map[string]string{io.kubernetes.container.hash: 1473a04b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acdc589ecd79cc98d5f3cae8f57b47a03938cc7212671bb3e05f2cf37f28f792,PodSandboxId:a0f603dfea18d0566d0505b5d72f290badf9b4a9aa66368ed2b7e4b2829dbae2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721176772887
025146,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95fd0f7-52f6-4dfc-aaa2-cee480c08370,},Annotations:map[string]string{io.kubernetes.container.hash: 5dda6837,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3306368814bbe3d95dee6d3b3acf799a9c5da78df71175df3023d28c4d09a8dd,PodSandboxId:d07129c110eaa51dcf734671153ebd13f1538b215f44805e2e6f3d693c4ead05,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721176772864231234,Labels:map[string
]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jmdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d291c533-2504-4d88-a868-aebd13fc2e0e,},Annotations:map[string]string{io.kubernetes.container.hash: 5e43a08a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f46722747ae61c016808f072be409082c74213644e7ce9053b90f17e4a4f4c05,PodSandboxId:dfeff8a1adea2050e2759592f655d186d1733036497a91c793a5d476d129b84d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721176772806479295,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a737f7de401d8e766eed8680117d019a,},Annotations:map[string]string{io.kubernetes.container.hash: 12562423,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d7472a2bf1e530f31c5f3b38ba95281fe3834746b6b885997d7859fcadcad5d,PodSandboxId:f826269ceaebc0585f63d107be8021a0389be90f9ad1d3db0b3b617c47dc62c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721176767351150020,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f3152cf85e2441b1f6e75150cc6fcb1,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1742d4101b2e909654b62157da7b68fa00186b4f0ee0eaaff36e61b4b66438,PodSandboxId:9baf2c1d74d8fd80202b57393a1cff9381c1a573614ae7951c3d394a6f1ad1da,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721176764451298959,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3361171e0e0bb74952d2e5ceb4abf9c2,},Annotations:map[string]string{io.kubernetes.container.hash: 3989091a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804eb9eab47e5cc0cda9b131547f3205dedd6daf322bb11a55afe58785f3e6b7,PodSandboxId:ca9630971a3f62517e09b290d11196c2fb7bbf422b2dee9f5794c7190ad06ae7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721176764075151378,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 504a8f5b62466e419fdb4fc4dbe4e6d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:050a6ef5633b023ad48bd34c5a4b766b48a9226dac24ace1c0a75c81def686fd,PodSandboxId:d07129c110eaa51dcf734671153ebd13f1538b215f44805e2e6f3d693c4ead05,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721176759599774206,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jmdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d291c533-2504-4d88-a868-aebd13fc2e0e,},Annotations:map[string]string{io.kubernetes.container.hash: 5e43a08a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6085c3bb04b664b38068ba6fa846689cbd98bebc6d716429c8a0d972c63ad89,PodSandboxId:13f70319954b60aa491cf43b2f5b98d3918c6e28c27de0eed142ac9dd135375b,Metadata:&ContainerMetad
ata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721176722824105863,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjbkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bdfe3f-8ffd-481b-957b-5614c55db709,},Annotations:map[string]string{io.kubernetes.container.hash: 1473a04b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d38a2c8af50a4b2ca3f3874a937953bc2d4f9127253e1b8d2b7dbabb7ff755c3,PodSandboxId:25623fe79adb85b196a7087f806bdfd2ac1305fcd3b4ee6539b95287fa5fa8c4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attem
pt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721176722831570760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95fd0f7-52f6-4dfc-aaa2-cee480c08370,},Annotations:map[string]string{io.kubernetes.container.hash: 5dda6837,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27035f1ad13efd1b2a239015c9be60acafcdd8b9a2eba07f4d1eda8dce63b0e6,PodSandboxId:aa007c9888abc3b89308750d1164965fc58ca16fc2525e479af30acad4ab06b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Imag
e:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721176718976007236,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3361171e0e0bb74952d2e5ceb4abf9c2,},Annotations:map[string]string{io.kubernetes.container.hash: 3989091a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19304579b202ca1ad9ac19a30589de69489748b38c0c369874fc47ec2e009ba3,PodSandboxId:22a99b9a3d26fc3b7ae33f5f6f4c7cea096bb84bcbff724369310ceb7b966338,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb
96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721176718977278923,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 504a8f5b62466e419fdb4fc4dbe4e6d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6626d24fd162d867d1047572fde888420f168a64a164d9c777eb8add6e073ddd,PodSandboxId:f63528fe9acd8c11df088a050bc3a33a71ca95295df48097379d818cac12c255,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e
9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721176718952915756,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f3152cf85e2441b1f6e75150cc6fcb1,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=640653e3-f3d2-4166-9097-2579caf027f2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:43:27 functional-023523 crio[4976]: time="2024-07-17 00:43:27.259494518Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=23af6f7b-b3eb-443f-b72f-92ba59cacbb2 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:43:27 functional-023523 crio[4976]: time="2024-07-17 00:43:27.259584868Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=23af6f7b-b3eb-443f-b72f-92ba59cacbb2 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:43:27 functional-023523 crio[4976]: time="2024-07-17 00:43:27.261643861Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=872b75be-4bcf-43db-b6e5-2f12fe9e455f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:43:27 functional-023523 crio[4976]: time="2024-07-17 00:43:27.262353507Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721177007262327975,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:250732,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=872b75be-4bcf-43db-b6e5-2f12fe9e455f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:43:27 functional-023523 crio[4976]: time="2024-07-17 00:43:27.263104691Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=137a53bd-10d7-4c61-90db-38a43b17496a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:43:27 functional-023523 crio[4976]: time="2024-07-17 00:43:27.263166441Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=137a53bd-10d7-4c61-90db-38a43b17496a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:43:27 functional-023523 crio[4976]: time="2024-07-17 00:43:27.263739770Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c9a606d5bb6109ebf52fd768b64f24be44e9bf7e39fd1211d356fd601a42bb72,PodSandboxId:d4670a074e59fb031e6c17958a10c2f11bf143ff8d30dfd8470d7dd7dae80125,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1721176902994895189,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-779776cb65-ckndh,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 816e6cd2-2ed7-497e-a215-9661100fb415,},Annotations:map[string]string{io.kubernetes.container.has
h: 3dbb7c68,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6576513c4a31dcaea18db3e48507d497d345680ba3d782ea1177a95d703216,PodSandboxId:c0208ee3e0dee180aae97ed02b8352c2313bf0447604d63f7b910a13a5b11b59,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1721176895964847704,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-b5fc48f67-fd6gh,io.kubernetes
.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: f75595fa-b139-4964-9671-9abd33562d43,},Annotations:map[string]string{io.kubernetes.container.hash: a6f8f295,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:115c97017c755803c1eb4b329bd91de58af8b6f58fb9ecf2cb216ab2ba03bdca,PodSandboxId:8b6fdb77f0dccceed5017d744b3bb38ceb8c27d63b449cca838d64ae8d434f84,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1721176891809775025,Labels:map[string]string{io.k
ubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2305f04-cb6a-472a-828d-4da88721a51c,},Annotations:map[string]string{io.kubernetes.container.hash: 8c15aed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e07db8dcac4db3a6cba643b818a3a06c1336ef768526708476b85ec78773c5,PodSandboxId:ec7dd7c34aba81194af28a607d9cba43d76ed4a59d64152155290d09873e1867,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1721176879039701246,Labels:map[string]string{io.kubern
etes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6d85cfcfd8-cff5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82159e3d-9613-465f-936c-7b8d0ab66aad,},Annotations:map[string]string{io.kubernetes.container.hash: e4a93296,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f24e81bbd8185613a4ed35760f00768d8e279bf5b0828ce3490b323feb0eefbb,PodSandboxId:bec280a555aa86bea2e01cc1865bb3b69fa37840bdf5e230245d2dea4457c5a9,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1721176878953982764,Labels:map[string]string{i
o.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-57b4589c47-t5ssg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2f67a68e-2a30-4bcc-89c7-cd3c2059ac27,},Annotations:map[string]string{io.kubernetes.container.hash: 79d3bd47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0f3d584245e99add69cd983bedc10e9ed3a3a32c650232ee2f40ef8cc294c3e,PodSandboxId:41d184b7b75f9221f55c968f851418b0c093d39708bd1923c5b8755d16c2e24b,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1721176875189187841,Labels:map[string]
string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-64454c8b5c-l95r4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 160e20ce-943b-4253-9ed5-c7eced8fd388,},Annotations:map[string]string{io.kubernetes.container.hash: e8c28480,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1911e668601d42a4216fc8178cc05652e9b97ea735929e3f0fcc129c194bb95f,PodSandboxId:dfeff8a1adea2050e2759592f655d186d1733036497a91c793a5d476d129b84d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:
CONTAINER_RUNNING,CreatedAt:1721176797594944735,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a737f7de401d8e766eed8680117d019a,},Annotations:map[string]string{io.kubernetes.container.hash: 12562423,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc2516f66d8d07fcce188402ec254604a86851e265ccc82fab073fc9657fb4e8,PodSandboxId:636356dd1a6c5b6f6644668fd6d1734f0d6d47643192effd6222a22494517aff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1721176772871270247,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjbkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bdfe3f-8ffd-481b-957b-5614c55db709,},Annotations:map[string]string{io.kubernetes.container.hash: 1473a04b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acdc589ecd79cc98d5f3cae8f57b47a03938cc7212671bb3e05f2cf37f28f792,PodSandboxId:a0f603dfea18d0566d0505b5d72f290badf9b4a9aa66368ed2b7e4b2829dbae2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721176772887
025146,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95fd0f7-52f6-4dfc-aaa2-cee480c08370,},Annotations:map[string]string{io.kubernetes.container.hash: 5dda6837,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3306368814bbe3d95dee6d3b3acf799a9c5da78df71175df3023d28c4d09a8dd,PodSandboxId:d07129c110eaa51dcf734671153ebd13f1538b215f44805e2e6f3d693c4ead05,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721176772864231234,Labels:map[string
]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jmdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d291c533-2504-4d88-a868-aebd13fc2e0e,},Annotations:map[string]string{io.kubernetes.container.hash: 5e43a08a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f46722747ae61c016808f072be409082c74213644e7ce9053b90f17e4a4f4c05,PodSandboxId:dfeff8a1adea2050e2759592f655d186d1733036497a91c793a5d476d129b84d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721176772806479295,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a737f7de401d8e766eed8680117d019a,},Annotations:map[string]string{io.kubernetes.container.hash: 12562423,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d7472a2bf1e530f31c5f3b38ba95281fe3834746b6b885997d7859fcadcad5d,PodSandboxId:f826269ceaebc0585f63d107be8021a0389be90f9ad1d3db0b3b617c47dc62c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721176767351150020,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f3152cf85e2441b1f6e75150cc6fcb1,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1742d4101b2e909654b62157da7b68fa00186b4f0ee0eaaff36e61b4b66438,PodSandboxId:9baf2c1d74d8fd80202b57393a1cff9381c1a573614ae7951c3d394a6f1ad1da,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721176764451298959,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3361171e0e0bb74952d2e5ceb4abf9c2,},Annotations:map[string]string{io.kubernetes.container.hash: 3989091a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804eb9eab47e5cc0cda9b131547f3205dedd6daf322bb11a55afe58785f3e6b7,PodSandboxId:ca9630971a3f62517e09b290d11196c2fb7bbf422b2dee9f5794c7190ad06ae7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721176764075151378,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 504a8f5b62466e419fdb4fc4dbe4e6d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:050a6ef5633b023ad48bd34c5a4b766b48a9226dac24ace1c0a75c81def686fd,PodSandboxId:d07129c110eaa51dcf734671153ebd13f1538b215f44805e2e6f3d693c4ead05,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721176759599774206,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jmdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d291c533-2504-4d88-a868-aebd13fc2e0e,},Annotations:map[string]string{io.kubernetes.container.hash: 5e43a08a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6085c3bb04b664b38068ba6fa846689cbd98bebc6d716429c8a0d972c63ad89,PodSandboxId:13f70319954b60aa491cf43b2f5b98d3918c6e28c27de0eed142ac9dd135375b,Metadata:&ContainerMetad
ata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721176722824105863,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjbkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bdfe3f-8ffd-481b-957b-5614c55db709,},Annotations:map[string]string{io.kubernetes.container.hash: 1473a04b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d38a2c8af50a4b2ca3f3874a937953bc2d4f9127253e1b8d2b7dbabb7ff755c3,PodSandboxId:25623fe79adb85b196a7087f806bdfd2ac1305fcd3b4ee6539b95287fa5fa8c4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attem
pt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721176722831570760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95fd0f7-52f6-4dfc-aaa2-cee480c08370,},Annotations:map[string]string{io.kubernetes.container.hash: 5dda6837,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27035f1ad13efd1b2a239015c9be60acafcdd8b9a2eba07f4d1eda8dce63b0e6,PodSandboxId:aa007c9888abc3b89308750d1164965fc58ca16fc2525e479af30acad4ab06b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Imag
e:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721176718976007236,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3361171e0e0bb74952d2e5ceb4abf9c2,},Annotations:map[string]string{io.kubernetes.container.hash: 3989091a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19304579b202ca1ad9ac19a30589de69489748b38c0c369874fc47ec2e009ba3,PodSandboxId:22a99b9a3d26fc3b7ae33f5f6f4c7cea096bb84bcbff724369310ceb7b966338,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb
96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721176718977278923,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 504a8f5b62466e419fdb4fc4dbe4e6d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6626d24fd162d867d1047572fde888420f168a64a164d9c777eb8add6e073ddd,PodSandboxId:f63528fe9acd8c11df088a050bc3a33a71ca95295df48097379d818cac12c255,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e
9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721176718952915756,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f3152cf85e2441b1f6e75150cc6fcb1,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=137a53bd-10d7-4c61-90db-38a43b17496a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:43:27 functional-023523 crio[4976]: time="2024-07-17 00:43:27.310092494Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=271969eb-261e-4c87-8844-e711eb4cf7c6 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:43:27 functional-023523 crio[4976]: time="2024-07-17 00:43:27.310164722Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=271969eb-261e-4c87-8844-e711eb4cf7c6 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:43:27 functional-023523 crio[4976]: time="2024-07-17 00:43:27.311445277Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=83b29777-5cc2-4170-9b0a-d6b1d76c1c61 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:43:27 functional-023523 crio[4976]: time="2024-07-17 00:43:27.312294107Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721177007312268543,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:250732,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=83b29777-5cc2-4170-9b0a-d6b1d76c1c61 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:43:27 functional-023523 crio[4976]: time="2024-07-17 00:43:27.312888057Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea31e506-7a6e-4967-ab9a-67a4de4efd81 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:43:27 functional-023523 crio[4976]: time="2024-07-17 00:43:27.312944996Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea31e506-7a6e-4967-ab9a-67a4de4efd81 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:43:27 functional-023523 crio[4976]: time="2024-07-17 00:43:27.313324323Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c9a606d5bb6109ebf52fd768b64f24be44e9bf7e39fd1211d356fd601a42bb72,PodSandboxId:d4670a074e59fb031e6c17958a10c2f11bf143ff8d30dfd8470d7dd7dae80125,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1721176902994895189,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-779776cb65-ckndh,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 816e6cd2-2ed7-497e-a215-9661100fb415,},Annotations:map[string]string{io.kubernetes.container.has
h: 3dbb7c68,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6576513c4a31dcaea18db3e48507d497d345680ba3d782ea1177a95d703216,PodSandboxId:c0208ee3e0dee180aae97ed02b8352c2313bf0447604d63f7b910a13a5b11b59,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1721176895964847704,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-b5fc48f67-fd6gh,io.kubernetes
.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: f75595fa-b139-4964-9671-9abd33562d43,},Annotations:map[string]string{io.kubernetes.container.hash: a6f8f295,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:115c97017c755803c1eb4b329bd91de58af8b6f58fb9ecf2cb216ab2ba03bdca,PodSandboxId:8b6fdb77f0dccceed5017d744b3bb38ceb8c27d63b449cca838d64ae8d434f84,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1721176891809775025,Labels:map[string]string{io.k
ubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2305f04-cb6a-472a-828d-4da88721a51c,},Annotations:map[string]string{io.kubernetes.container.hash: 8c15aed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19e07db8dcac4db3a6cba643b818a3a06c1336ef768526708476b85ec78773c5,PodSandboxId:ec7dd7c34aba81194af28a607d9cba43d76ed4a59d64152155290d09873e1867,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1721176879039701246,Labels:map[string]string{io.kubern
etes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6d85cfcfd8-cff5v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82159e3d-9613-465f-936c-7b8d0ab66aad,},Annotations:map[string]string{io.kubernetes.container.hash: e4a93296,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f24e81bbd8185613a4ed35760f00768d8e279bf5b0828ce3490b323feb0eefbb,PodSandboxId:bec280a555aa86bea2e01cc1865bb3b69fa37840bdf5e230245d2dea4457c5a9,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1721176878953982764,Labels:map[string]string{i
o.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-57b4589c47-t5ssg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2f67a68e-2a30-4bcc-89c7-cd3c2059ac27,},Annotations:map[string]string{io.kubernetes.container.hash: 79d3bd47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0f3d584245e99add69cd983bedc10e9ed3a3a32c650232ee2f40ef8cc294c3e,PodSandboxId:41d184b7b75f9221f55c968f851418b0c093d39708bd1923c5b8755d16c2e24b,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1721176875189187841,Labels:map[string]
string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-64454c8b5c-l95r4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 160e20ce-943b-4253-9ed5-c7eced8fd388,},Annotations:map[string]string{io.kubernetes.container.hash: e8c28480,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1911e668601d42a4216fc8178cc05652e9b97ea735929e3f0fcc129c194bb95f,PodSandboxId:dfeff8a1adea2050e2759592f655d186d1733036497a91c793a5d476d129b84d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:
CONTAINER_RUNNING,CreatedAt:1721176797594944735,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a737f7de401d8e766eed8680117d019a,},Annotations:map[string]string{io.kubernetes.container.hash: 12562423,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc2516f66d8d07fcce188402ec254604a86851e265ccc82fab073fc9657fb4e8,PodSandboxId:636356dd1a6c5b6f6644668fd6d1734f0d6d47643192effd6222a22494517aff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1721176772871270247,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjbkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bdfe3f-8ffd-481b-957b-5614c55db709,},Annotations:map[string]string{io.kubernetes.container.hash: 1473a04b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acdc589ecd79cc98d5f3cae8f57b47a03938cc7212671bb3e05f2cf37f28f792,PodSandboxId:a0f603dfea18d0566d0505b5d72f290badf9b4a9aa66368ed2b7e4b2829dbae2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721176772887
025146,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95fd0f7-52f6-4dfc-aaa2-cee480c08370,},Annotations:map[string]string{io.kubernetes.container.hash: 5dda6837,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3306368814bbe3d95dee6d3b3acf799a9c5da78df71175df3023d28c4d09a8dd,PodSandboxId:d07129c110eaa51dcf734671153ebd13f1538b215f44805e2e6f3d693c4ead05,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721176772864231234,Labels:map[string
]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jmdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d291c533-2504-4d88-a868-aebd13fc2e0e,},Annotations:map[string]string{io.kubernetes.container.hash: 5e43a08a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f46722747ae61c016808f072be409082c74213644e7ce9053b90f17e4a4f4c05,PodSandboxId:dfeff8a1adea2050e2759592f655d186d1733036497a91c793a5d476d129b84d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721176772806479295,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a737f7de401d8e766eed8680117d019a,},Annotations:map[string]string{io.kubernetes.container.hash: 12562423,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d7472a2bf1e530f31c5f3b38ba95281fe3834746b6b885997d7859fcadcad5d,PodSandboxId:f826269ceaebc0585f63d107be8021a0389be90f9ad1d3db0b3b617c47dc62c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721176767351150020,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f3152cf85e2441b1f6e75150cc6fcb1,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1742d4101b2e909654b62157da7b68fa00186b4f0ee0eaaff36e61b4b66438,PodSandboxId:9baf2c1d74d8fd80202b57393a1cff9381c1a573614ae7951c3d394a6f1ad1da,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721176764451298959,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3361171e0e0bb74952d2e5ceb4abf9c2,},Annotations:map[string]string{io.kubernetes.container.hash: 3989091a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804eb9eab47e5cc0cda9b131547f3205dedd6daf322bb11a55afe58785f3e6b7,PodSandboxId:ca9630971a3f62517e09b290d11196c2fb7bbf422b2dee9f5794c7190ad06ae7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721176764075151378,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 504a8f5b62466e419fdb4fc4dbe4e6d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:050a6ef5633b023ad48bd34c5a4b766b48a9226dac24ace1c0a75c81def686fd,PodSandboxId:d07129c110eaa51dcf734671153ebd13f1538b215f44805e2e6f3d693c4ead05,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721176759599774206,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jmdcv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d291c533-2504-4d88-a868-aebd13fc2e0e,},Annotations:map[string]string{io.kubernetes.container.hash: 5e43a08a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6085c3bb04b664b38068ba6fa846689cbd98bebc6d716429c8a0d972c63ad89,PodSandboxId:13f70319954b60aa491cf43b2f5b98d3918c6e28c27de0eed142ac9dd135375b,Metadata:&ContainerMetad
ata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721176722824105863,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gjbkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bdfe3f-8ffd-481b-957b-5614c55db709,},Annotations:map[string]string{io.kubernetes.container.hash: 1473a04b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d38a2c8af50a4b2ca3f3874a937953bc2d4f9127253e1b8d2b7dbabb7ff755c3,PodSandboxId:25623fe79adb85b196a7087f806bdfd2ac1305fcd3b4ee6539b95287fa5fa8c4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attem
pt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721176722831570760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a95fd0f7-52f6-4dfc-aaa2-cee480c08370,},Annotations:map[string]string{io.kubernetes.container.hash: 5dda6837,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27035f1ad13efd1b2a239015c9be60acafcdd8b9a2eba07f4d1eda8dce63b0e6,PodSandboxId:aa007c9888abc3b89308750d1164965fc58ca16fc2525e479af30acad4ab06b3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Imag
e:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721176718976007236,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3361171e0e0bb74952d2e5ceb4abf9c2,},Annotations:map[string]string{io.kubernetes.container.hash: 3989091a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19304579b202ca1ad9ac19a30589de69489748b38c0c369874fc47ec2e009ba3,PodSandboxId:22a99b9a3d26fc3b7ae33f5f6f4c7cea096bb84bcbff724369310ceb7b966338,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb
96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721176718977278923,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 504a8f5b62466e419fdb4fc4dbe4e6d8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6626d24fd162d867d1047572fde888420f168a64a164d9c777eb8add6e073ddd,PodSandboxId:f63528fe9acd8c11df088a050bc3a33a71ca95295df48097379d818cac12c255,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e
9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721176718952915756,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-023523,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f3152cf85e2441b1f6e75150cc6fcb1,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea31e506-7a6e-4967-ab9a-67a4de4efd81 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	c9a606d5bb610       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         About a minute ago   Running             kubernetes-dashboard        0                   d4670a074e59f       kubernetes-dashboard-779776cb65-ckndh
	5e6576513c4a3       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   About a minute ago   Running             dashboard-metrics-scraper   0                   c0208ee3e0dee       dashboard-metrics-scraper-b5fc48f67-fd6gh
	115c97017c755       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              About a minute ago   Exited              mount-munger                0                   8b6fdb77f0dcc       busybox-mount
	19e07db8dcac4       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               2 minutes ago        Running             echoserver                  0                   ec7dd7c34aba8       hello-node-6d85cfcfd8-cff5v
	f24e81bbd8185       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               2 minutes ago        Running             echoserver                  0                   bec280a555aa8       hello-node-connect-57b4589c47-t5ssg
	f0f3d584245e9       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb                  2 minutes ago        Running             mysql                       0                   41d184b7b75f9       mysql-64454c8b5c-l95r4
	1911e668601d4       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                                 3 minutes ago        Running             kube-apiserver              2                   dfeff8a1adea2       kube-apiserver-functional-023523
	acdc589ecd79c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 3 minutes ago        Running             storage-provisioner         3                   a0f603dfea18d       storage-provisioner
	fc2516f66d8d0       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                                 3 minutes ago        Running             kube-proxy                  3                   636356dd1a6c5       kube-proxy-gjbkv
	3306368814bbe       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                 3 minutes ago        Running             coredns                     3                   d07129c110eaa       coredns-7db6d8ff4d-jmdcv
	f46722747ae61       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                                 3 minutes ago        Exited              kube-apiserver              1                   dfeff8a1adea2       kube-apiserver-functional-023523
	1d7472a2bf1e5       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                                 4 minutes ago        Running             kube-scheduler              3                   f826269ceaebc       kube-scheduler-functional-023523
	8b1742d4101b2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                 4 minutes ago        Running             etcd                        3                   9baf2c1d74d8f       etcd-functional-023523
	804eb9eab47e5       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                                 4 minutes ago        Running             kube-controller-manager     3                   ca9630971a3f6       kube-controller-manager-functional-023523
	050a6ef5633b0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                 4 minutes ago        Exited              coredns                     2                   d07129c110eaa       coredns-7db6d8ff4d-jmdcv
	d38a2c8af50a4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 4 minutes ago        Exited              storage-provisioner         2                   25623fe79adb8       storage-provisioner
	e6085c3bb04b6       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                                 4 minutes ago        Exited              kube-proxy                  2                   13f70319954b6       kube-proxy-gjbkv
	19304579b202c       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                                 4 minutes ago        Exited              kube-controller-manager     2                   22a99b9a3d26f       kube-controller-manager-functional-023523
	27035f1ad13ef       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                 4 minutes ago        Exited              etcd                        2                   aa007c9888abc       etcd-functional-023523
	6626d24fd162d       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                                 4 minutes ago        Exited              kube-scheduler              2                   f63528fe9acd8       kube-scheduler-functional-023523
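
The table above is the report's snapshot of CRI-O's view of the node at collection time. As a hedged aside for anyone re-triaging this run (assuming the minikube profile functional-023523 is still present and that crictl is available inside the VM, as it normally is on a minikube node), an equivalent listing can be reproduced by hand with:

	minikube -p functional-023523 ssh -- sudo crictl ps -a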
	
	
	==> coredns [050a6ef5633b023ad48bd34c5a4b766b48a9226dac24ace1c0a75c81def686fd] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:33106 - 52605 "HINFO IN 1171328856404454296.4016743988622955149. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010674028s
	
	
	==> coredns [3306368814bbe3d95dee6d3b3acf799a9c5da78df71175df3023d28c4d09a8dd] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: watch of *v1.EndpointSlice ended with: very short watch: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=514": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=514": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=514": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=514": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=514": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=514": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=514": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=514": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=514": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=514": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=514": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=514": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=514": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=514": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=514": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=514": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=514": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=514": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=514": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=514": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=514": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=514": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=514": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=514": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	Name:               functional-023523
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-023523
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=functional-023523
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T00_38_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:38:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-023523
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:43:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:42:04 +0000   Wed, 17 Jul 2024 00:38:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:42:04 +0000   Wed, 17 Jul 2024 00:38:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:42:04 +0000   Wed, 17 Jul 2024 00:38:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:42:04 +0000   Wed, 17 Jul 2024 00:39:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.2
	  Hostname:    functional-023523
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 a97b80658a1749afa8926e7fd9865d95
	  System UUID:                a97b8065-8a17-49af-a892-6e7fd9865d95
	  Boot ID:                    30c0726d-9ff8-4371-b856-4f6d153a47f5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6d85cfcfd8-cff5v                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  default                     hello-node-connect-57b4589c47-t5ssg          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     mysql-64454c8b5c-l95r4                       600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    3m10s
	  kube-system                 coredns-7db6d8ff4d-jmdcv                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m6s
	  kube-system                 etcd-functional-023523                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m19s
	  kube-system                 kube-apiserver-functional-023523             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-controller-manager-functional-023523    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-proxy-gjbkv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-scheduler-functional-023523             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kubernetes-dashboard        dashboard-metrics-scraper-b5fc48f67-fd6gh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kubernetes-dashboard        kubernetes-dashboard-779776cb65-ckndh        0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m54s                  kube-proxy       
	  Normal  Starting                 4m44s                  kube-proxy       
	  Normal  Starting                 5m4s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  5m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m25s (x9 over 5m26s)  kubelet          Node functional-023523 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     5m25s (x7 over 5m26s)  kubelet          Node functional-023523 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m25s (x7 over 5m26s)  kubelet          Node functional-023523 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 5m19s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m19s                  kubelet          Node functional-023523 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m19s                  kubelet          Node functional-023523 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m19s                  kubelet          Node functional-023523 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m18s                  kubelet          Node functional-023523 status is now: NodeReady
	  Normal  RegisteredNode           5m6s                   node-controller  Node functional-023523 event: Registered Node functional-023523 in Controller
	  Normal  Starting                 4m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m49s (x8 over 4m49s)  kubelet          Node functional-023523 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m49s (x8 over 4m49s)  kubelet          Node functional-023523 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m49s (x7 over 4m49s)  kubelet          Node functional-023523 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m32s                  node-controller  Node functional-023523 event: Registered Node functional-023523 in Controller
	  Normal  NodeHasSufficientMemory  3m56s                  kubelet          Node functional-023523 status is now: NodeHasSufficientMemory
	  Normal  Starting                 3m56s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    3m56s                  kubelet          Node functional-023523 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m56s                  kubelet          Node functional-023523 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3m56s                  kubelet          Node functional-023523 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m56s                  kubelet          Node functional-023523 status is now: NodeReady
	  Normal  RegisteredNode           3m25s                  node-controller  Node functional-023523 event: Registered Node functional-023523 in Controller
	
	
	==> dmesg <==
	[  +0.351847] systemd-fstab-generator[2687]: Ignoring "noauto" option for root device
	[  +0.276227] systemd-fstab-generator[2772]: Ignoring "noauto" option for root device
	[  +0.365139] systemd-fstab-generator[2851]: Ignoring "noauto" option for root device
	[  +1.021220] systemd-fstab-generator[3133]: Ignoring "noauto" option for root device
	[  +2.356595] systemd-fstab-generator[3645]: Ignoring "noauto" option for root device
	[  +0.081891] kauditd_printk_skb: 259 callbacks suppressed
	[ +16.843541] kauditd_printk_skb: 51 callbacks suppressed
	[  +3.827821] systemd-fstab-generator[4086]: Ignoring "noauto" option for root device
	[Jul17 00:39] systemd-fstab-generator[4896]: Ignoring "noauto" option for root device
	[  +0.077427] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.061521] systemd-fstab-generator[4908]: Ignoring "noauto" option for root device
	[  +0.158191] systemd-fstab-generator[4922]: Ignoring "noauto" option for root device
	[  +0.129342] systemd-fstab-generator[4934]: Ignoring "noauto" option for root device
	[  +0.260564] systemd-fstab-generator[4962]: Ignoring "noauto" option for root device
	[  +1.318061] systemd-fstab-generator[5426]: Ignoring "noauto" option for root device
	[  +4.574120] kauditd_printk_skb: 182 callbacks suppressed
	[  +7.186297] systemd-fstab-generator[5787]: Ignoring "noauto" option for root device
	[  +0.085636] kauditd_printk_skb: 10 callbacks suppressed
	[  +2.380676] systemd-fstab-generator[6255]: Ignoring "noauto" option for root device
	[ +23.865272] kauditd_printk_skb: 80 callbacks suppressed
	[Jul17 00:40] kauditd_printk_skb: 2 callbacks suppressed
	[Jul17 00:41] kauditd_printk_skb: 26 callbacks suppressed
	[ +12.227315] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.103361] kauditd_printk_skb: 39 callbacks suppressed
	[  +9.467306] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [27035f1ad13efd1b2a239015c9be60acafcdd8b9a2eba07f4d1eda8dce63b0e6] <==
	{"level":"info","ts":"2024-07-17T00:38:40.472125Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T00:38:40.472231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T00:38:40.472345Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 received MsgPreVoteResp from 6c80de388e5020e8 at term 2"}
	{"level":"info","ts":"2024-07-17T00:38:40.472377Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 became candidate at term 3"}
	{"level":"info","ts":"2024-07-17T00:38:40.472494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 received MsgVoteResp from 6c80de388e5020e8 at term 3"}
	{"level":"info","ts":"2024-07-17T00:38:40.472526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 became leader at term 3"}
	{"level":"info","ts":"2024-07-17T00:38:40.472552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6c80de388e5020e8 elected leader 6c80de388e5020e8 at term 3"}
	{"level":"info","ts":"2024-07-17T00:38:40.477617Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"6c80de388e5020e8","local-member-attributes":"{Name:functional-023523 ClientURLs:[https://192.168.39.2:2379]}","request-path":"/0/members/6c80de388e5020e8/attributes","cluster-id":"e20ba2e00cb0e827","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T00:38:40.477921Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T00:38:40.47808Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T00:38:40.478532Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T00:38:40.478562Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T00:38:40.479993Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T00:38:40.482068Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.2:2379"}
	{"level":"info","ts":"2024-07-17T00:39:11.39659Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-17T00:39:11.396669Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-023523","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.2:2380"],"advertise-client-urls":["https://192.168.39.2:2379"]}
	{"level":"warn","ts":"2024-07-17T00:39:11.39674Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T00:39:11.396889Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T00:39:11.406308Z","caller":"v3rpc/watch.go:473","msg":"failed to send watch response to gRPC stream","error":"rpc error: code = Unavailable desc = transport is closing"}
	{"level":"warn","ts":"2024-07-17T00:39:11.50404Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T00:39:11.504194Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-17T00:39:11.504284Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6c80de388e5020e8","current-leader-member-id":"6c80de388e5020e8"}
	{"level":"info","ts":"2024-07-17T00:39:11.507146Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.2:2380"}
	{"level":"info","ts":"2024-07-17T00:39:11.507314Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.2:2380"}
	{"level":"info","ts":"2024-07-17T00:39:11.507348Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-023523","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.2:2380"],"advertise-client-urls":["https://192.168.39.2:2379"]}
	
	
	==> etcd [8b1742d4101b2e909654b62157da7b68fa00186b4f0ee0eaaff36e61b4b66438] <==
	{"level":"info","ts":"2024-07-17T00:41:10.366004Z","caller":"traceutil/trace.go:171","msg":"trace[1376184401] range","detail":"{range_begin:/registry/validatingadmissionpolicies/; range_end:/registry/validatingadmissionpolicies0; response_count:0; response_revision:735; }","duration":"112.338992ms","start":"2024-07-17T00:41:10.253654Z","end":"2024-07-17T00:41:10.365993Z","steps":["trace[1376184401] 'agreement among raft nodes before linearized reading'  (duration: 112.280479ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:41:10.36615Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.803022ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:10820"}
	{"level":"info","ts":"2024-07-17T00:41:10.366165Z","caller":"traceutil/trace.go:171","msg":"trace[807092599] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:735; }","duration":"179.840973ms","start":"2024-07-17T00:41:10.18632Z","end":"2024-07-17T00:41:10.366161Z","steps":["trace[807092599] 'agreement among raft nodes before linearized reading'  (duration: 179.761106ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:41:13.120125Z","caller":"traceutil/trace.go:171","msg":"trace[1375783631] transaction","detail":"{read_only:false; response_revision:738; number_of_response:1; }","duration":"744.528874ms","start":"2024-07-17T00:41:12.375574Z","end":"2024-07-17T00:41:13.120103Z","steps":["trace[1375783631] 'process raft request'  (duration: 744.296632ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:41:13.120294Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T00:41:12.375554Z","time spent":"744.633078ms","remote":"127.0.0.1:36838","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:735 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-07-17T00:41:13.120654Z","caller":"traceutil/trace.go:171","msg":"trace[2063437917] linearizableReadLoop","detail":"{readStateIndex:819; appliedIndex:818; }","duration":"161.538732ms","start":"2024-07-17T00:41:12.959105Z","end":"2024-07-17T00:41:13.120644Z","steps":["trace[2063437917] 'read index received'  (duration: 161.441798ms)","trace[2063437917] 'applied index is now lower than readState.Index'  (duration: 94.587µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T00:41:13.120834Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.720956ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:10820"}
	{"level":"info","ts":"2024-07-17T00:41:13.120859Z","caller":"traceutil/trace.go:171","msg":"trace[1711188845] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:739; }","duration":"161.772763ms","start":"2024-07-17T00:41:12.959079Z","end":"2024-07-17T00:41:13.120852Z","steps":["trace[1711188845] 'agreement among raft nodes before linearized reading'  (duration: 161.618346ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:41:13.121077Z","caller":"traceutil/trace.go:171","msg":"trace[1108066468] transaction","detail":"{read_only:false; response_revision:739; number_of_response:1; }","duration":"269.244205ms","start":"2024-07-17T00:41:12.85182Z","end":"2024-07-17T00:41:13.121064Z","steps":["trace[1108066468] 'process raft request'  (duration: 268.750192ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:41:20.017671Z","caller":"traceutil/trace.go:171","msg":"trace[1574525533] transaction","detail":"{read_only:false; response_revision:761; number_of_response:1; }","duration":"502.911595ms","start":"2024-07-17T00:41:19.514745Z","end":"2024-07-17T00:41:20.017657Z","steps":["trace[1574525533] 'process raft request'  (duration: 501.525662ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:41:20.017796Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T00:41:19.514728Z","time spent":"503.002801ms","remote":"127.0.0.1:37102","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2080,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/default/hello-node-connect\" mod_revision:645 > success:<request_put:<key:\"/registry/deployments/default/hello-node-connect\" value_size:2024 >> failure:<request_range:<key:\"/registry/deployments/default/hello-node-connect\" > >"}
	{"level":"info","ts":"2024-07-17T00:41:20.023342Z","caller":"traceutil/trace.go:171","msg":"trace[1348841073] transaction","detail":"{read_only:false; response_revision:762; number_of_response:1; }","duration":"494.564409ms","start":"2024-07-17T00:41:19.528764Z","end":"2024-07-17T00:41:20.023328Z","steps":["trace[1348841073] 'process raft request'  (duration: 494.30999ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:41:20.023489Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T00:41:19.528713Z","time spent":"494.730179ms","remote":"127.0.0.1:36862","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2712,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/default/hello-node-6d85cfcfd8-cff5v\" mod_revision:722 > success:<request_put:<key:\"/registry/pods/default/hello-node-6d85cfcfd8-cff5v\" value_size:2654 >> failure:<request_range:<key:\"/registry/pods/default/hello-node-6d85cfcfd8-cff5v\" > >"}
	{"level":"info","ts":"2024-07-17T00:41:23.345906Z","caller":"traceutil/trace.go:171","msg":"trace[1461578767] transaction","detail":"{read_only:false; response_revision:770; number_of_response:1; }","duration":"142.673652ms","start":"2024-07-17T00:41:23.203213Z","end":"2024-07-17T00:41:23.345887Z","steps":["trace[1461578767] 'process raft request'  (duration: 142.330871ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:41:41.898013Z","caller":"traceutil/trace.go:171","msg":"trace[2026275669] linearizableReadLoop","detail":"{readStateIndex:950; appliedIndex:949; }","duration":"368.233247ms","start":"2024-07-17T00:41:41.529759Z","end":"2024-07-17T00:41:41.897992Z","steps":["trace[2026275669] 'read index received'  (duration: 368.113992ms)","trace[2026275669] 'applied index is now lower than readState.Index'  (duration: 118.93µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T00:41:41.898175Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"368.399014ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" ","response":"range_response_count:1 size:698"}
	{"level":"info","ts":"2024-07-17T00:41:41.898203Z","caller":"traceutil/trace.go:171","msg":"trace[2024046965] range","detail":"{range_begin:/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:864; }","duration":"368.458687ms","start":"2024-07-17T00:41:41.529736Z","end":"2024-07-17T00:41:41.898194Z","steps":["trace[2024046965] 'agreement among raft nodes before linearized reading'  (duration: 368.342131ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:41:41.898231Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T00:41:41.529723Z","time spent":"368.499902ms","remote":"127.0.0.1:36838","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":1,"response size":721,"request content":"key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" "}
	{"level":"info","ts":"2024-07-17T00:41:41.89857Z","caller":"traceutil/trace.go:171","msg":"trace[413837292] transaction","detail":"{read_only:false; response_revision:864; number_of_response:1; }","duration":"449.754708ms","start":"2024-07-17T00:41:41.448807Z","end":"2024-07-17T00:41:41.898561Z","steps":["trace[413837292] 'process raft request'  (duration: 449.105015ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:41:41.898658Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T00:41:41.44879Z","time spent":"449.824822ms","remote":"127.0.0.1:36838","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:861 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-07-17T00:41:48.26875Z","caller":"traceutil/trace.go:171","msg":"trace[1570835815] linearizableReadLoop","detail":"{readStateIndex:965; appliedIndex:964; }","duration":"127.409924ms","start":"2024-07-17T00:41:48.141318Z","end":"2024-07-17T00:41:48.268728Z","steps":["trace[1570835815] 'read index received'  (duration: 127.237677ms)","trace[1570835815] 'applied index is now lower than readState.Index'  (duration: 171.466µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:41:48.269027Z","caller":"traceutil/trace.go:171","msg":"trace[1259291054] transaction","detail":"{read_only:false; response_revision:878; number_of_response:1; }","duration":"343.076119ms","start":"2024-07-17T00:41:47.925927Z","end":"2024-07-17T00:41:48.269003Z","steps":["trace[1259291054] 'process raft request'  (duration: 342.653667ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:41:48.269161Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T00:41:47.925913Z","time spent":"343.173698ms","remote":"127.0.0.1:36838","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:877 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-07-17T00:41:48.270352Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.927915ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:5 size:14181"}
	{"level":"info","ts":"2024-07-17T00:41:48.270582Z","caller":"traceutil/trace.go:171","msg":"trace[335773703] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:5; response_revision:878; }","duration":"129.301086ms","start":"2024-07-17T00:41:48.141268Z","end":"2024-07-17T00:41:48.270569Z","steps":["trace[335773703] 'agreement among raft nodes before linearized reading'  (duration: 127.658366ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:43:27 up 5 min,  0 users,  load average: 0.23, 0.63, 0.35
	Linux functional-023523 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1911e668601d42a4216fc8178cc05652e9b97ea735929e3f0fcc129c194bb95f] <==
	I0717 00:40:17.156494       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0717 00:40:19.182060       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.194.205"}
	I0717 00:40:25.950185       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.144.244"}
	I0717 00:41:13.124847       1 trace.go:236] Trace[1275988062]: "Update" accept:application/json, */*,audit-id:99e07cde-036b-4a7f-b025-51af44ffa27b,client:192.168.39.2,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (17-Jul-2024 00:41:12.373) (total time: 750ms):
	Trace[1275988062]: ["GuaranteedUpdate etcd3" audit-id:99e07cde-036b-4a7f-b025-51af44ffa27b,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 750ms (00:41:12.374)
	Trace[1275988062]:  ---"Txn call completed" 749ms (00:41:13.124)]
	Trace[1275988062]: [750.883614ms] [750.883614ms] END
	I0717 00:41:20.018820       1 trace.go:236] Trace[1432714753]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:7db73a1b-0d59-48a3-a7e0-ea5bc3c558a4,client:192.168.39.2,api-group:apps,api-version:v1,name:hello-node-connect,subresource:status,namespace:default,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/default/deployments/hello-node-connect/status,user-agent:kube-controller-manager/v1.30.2 (linux/amd64) kubernetes/3968350/system:serviceaccount:kube-system:deployment-controller,verb:PUT (17-Jul-2024 00:41:19.512) (total time: 506ms):
	Trace[1432714753]: ["GuaranteedUpdate etcd3" audit-id:7db73a1b-0d59-48a3-a7e0-ea5bc3c558a4,key:/deployments/default/hello-node-connect,type:*apps.Deployment,resource:deployments.apps 505ms (00:41:19.512)
	Trace[1432714753]:  ---"Txn call completed" 504ms (00:41:20.018)]
	Trace[1432714753]: [506.086825ms] [506.086825ms] END
	I0717 00:41:20.023965       1 trace.go:236] Trace[1964761751]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:21d71a83-25ad-49da-becc-168db6a4b485,client:192.168.39.2,api-group:,api-version:v1,name:hello-node-6d85cfcfd8-cff5v,subresource:status,namespace:default,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/default/pods/hello-node-6d85cfcfd8-cff5v/status,user-agent:kubelet/v1.30.2 (linux/amd64) kubernetes/3968350,verb:PATCH (17-Jul-2024 00:41:19.512) (total time: 511ms):
	Trace[1964761751]: ["GuaranteedUpdate etcd3" audit-id:21d71a83-25ad-49da-becc-168db6a4b485,key:/pods/default/hello-node-6d85cfcfd8-cff5v,type:*core.Pod,resource:pods 511ms (00:41:19.512)
	Trace[1964761751]:  ---"Txn call completed" 507ms (00:41:20.023)]
	Trace[1964761751]: ---"Object stored in database" 508ms (00:41:20.023)
	Trace[1964761751]: [511.875563ms] [511.875563ms] END
	E0717 00:41:22.287694       1 conn.go:339] Error on socket receive: read tcp 192.168.39.2:8441->192.168.39.1:34368: use of closed network connection
	E0717 00:41:23.470657       1 conn.go:339] Error on socket receive: read tcp 192.168.39.2:8441->192.168.39.1:34376: use of closed network connection
	E0717 00:41:25.691637       1 conn.go:339] Error on socket receive: read tcp 192.168.39.2:8441->192.168.39.1:34414: use of closed network connection
	I0717 00:41:28.416059       1 controller.go:615] quota admission added evaluator for: namespaces
	I0717 00:41:28.442955       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 00:41:28.531550       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 00:41:28.575059       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 00:41:28.787794       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.131.179"}
	I0717 00:41:28.823048       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.206.86"}
	
	
	==> kube-apiserver [f46722747ae61c016808f072be409082c74213644e7ce9053b90f17e4a4f4c05] <==
	I0717 00:39:33.161009       1 options.go:221] external host was not specified, using 192.168.39.2
	I0717 00:39:33.162007       1 server.go:148] Version: v1.30.2
	I0717 00:39:33.162047       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0717 00:39:33.162488       1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
	
	==> kube-controller-manager [19304579b202ca1ad9ac19a30589de69489748b38c0c369874fc47ec2e009ba3] <==
	I0717 00:38:55.069355       1 shared_informer.go:320] Caches are synced for job
	I0717 00:38:55.073502       1 shared_informer.go:320] Caches are synced for ephemeral
	I0717 00:38:55.074496       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0717 00:38:55.076186       1 shared_informer.go:320] Caches are synced for taint
	I0717 00:38:55.076795       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0717 00:38:55.077244       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-023523"
	I0717 00:38:55.077325       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0717 00:38:55.078268       1 shared_informer.go:320] Caches are synced for attach detach
	I0717 00:38:55.080734       1 shared_informer.go:320] Caches are synced for endpoint
	I0717 00:38:55.100350       1 shared_informer.go:320] Caches are synced for daemon sets
	I0717 00:38:55.105520       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0717 00:38:55.105617       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.435µs"
	I0717 00:38:55.111755       1 shared_informer.go:320] Caches are synced for disruption
	I0717 00:38:55.115060       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0717 00:38:55.115127       1 shared_informer.go:320] Caches are synced for persistent volume
	I0717 00:38:55.115250       1 shared_informer.go:320] Caches are synced for stateful set
	I0717 00:38:55.126947       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0717 00:38:55.129271       1 shared_informer.go:320] Caches are synced for GC
	I0717 00:38:55.154212       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 00:38:55.183430       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0717 00:38:55.183639       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 00:38:55.215601       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0717 00:38:55.606153       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 00:38:55.664479       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 00:38:55.664562       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [804eb9eab47e5cc0cda9b131547f3205dedd6daf322bb11a55afe58785f3e6b7] <==
	I0717 00:41:19.512075       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-57b4589c47" duration="36.966µs"
	I0717 00:41:20.031282       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-6d85cfcfd8" duration="5.893803ms"
	I0717 00:41:20.032595       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-6d85cfcfd8" duration="213.199µs"
	I0717 00:41:28.573757       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="75.577543ms"
	E0717 00:41:28.573811       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0717 00:41:28.581705       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="49.687487ms"
	E0717 00:41:28.581750       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0717 00:41:28.592619       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="18.738196ms"
	E0717 00:41:28.592664       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0717 00:41:28.598668       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="16.759647ms"
	E0717 00:41:28.598717       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0717 00:41:28.609285       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="16.588536ms"
	E0717 00:41:28.609347       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0717 00:41:28.612758       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="14.01615ms"
	E0717 00:41:28.612812       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0717 00:41:28.662469       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="53.087799ms"
	I0717 00:41:28.708024       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="95.185385ms"
	I0717 00:41:28.744722       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="36.640153ms"
	I0717 00:41:28.745317       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="215.174µs"
	I0717 00:41:28.755536       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="92.998781ms"
	I0717 00:41:28.755610       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="32.603µs"
	I0717 00:41:36.593246       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="12.274502ms"
	I0717 00:41:36.594982       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="118.023µs"
	I0717 00:41:43.629588       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="12.694373ms"
	I0717 00:41:43.629732       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="73.954µs"
	
	
	==> kube-proxy [e6085c3bb04b664b38068ba6fa846689cbd98bebc6d716429c8a0d972c63ad89] <==
	I0717 00:38:43.025347       1 server_linux.go:69] "Using iptables proxy"
	I0717 00:38:43.034711       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.2"]
	I0717 00:38:43.072480       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 00:38:43.072527       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 00:38:43.072543       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:38:43.079542       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:38:43.079717       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:38:43.079751       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:38:43.083847       1 config.go:192] "Starting service config controller"
	I0717 00:38:43.083876       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:38:43.083898       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:38:43.083908       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:38:43.084269       1 config.go:319] "Starting node config controller"
	I0717 00:38:43.084299       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 00:38:43.184959       1 shared_informer.go:320] Caches are synced for node config
	I0717 00:38:43.185067       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:38:43.185086       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [fc2516f66d8d07fcce188402ec254604a86851e265ccc82fab073fc9657fb4e8] <==
	W0717 00:39:33.652051       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
	W0717 00:39:34.668674       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=514": dial tcp 192.168.39.2:8441: connect: connection refused
	E0717 00:39:34.668733       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=514": dial tcp 192.168.39.2:8441: connect: connection refused
	W0717 00:39:34.777708       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=514": dial tcp 192.168.39.2:8441: connect: connection refused
	E0717 00:39:34.777770       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=514": dial tcp 192.168.39.2:8441: connect: connection refused
	W0717 00:39:35.196151       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-023523&resourceVersion=531": dial tcp 192.168.39.2:8441: connect: connection refused
	E0717 00:39:35.196209       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-023523&resourceVersion=531": dial tcp 192.168.39.2:8441: connect: connection refused
	W0717 00:39:37.364301       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=514": dial tcp 192.168.39.2:8441: connect: connection refused
	E0717 00:39:37.364420       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=514": dial tcp 192.168.39.2:8441: connect: connection refused
	W0717 00:39:37.392917       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=514": dial tcp 192.168.39.2:8441: connect: connection refused
	E0717 00:39:37.393016       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=514": dial tcp 192.168.39.2:8441: connect: connection refused
	W0717 00:39:37.480765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-023523&resourceVersion=531": dial tcp 192.168.39.2:8441: connect: connection refused
	E0717 00:39:37.480830       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-023523&resourceVersion=531": dial tcp 192.168.39.2:8441: connect: connection refused
	W0717 00:39:42.589618       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=514": dial tcp 192.168.39.2:8441: connect: connection refused
	E0717 00:39:42.589686       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=514": dial tcp 192.168.39.2:8441: connect: connection refused
	W0717 00:39:42.833684       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-023523&resourceVersion=531": dial tcp 192.168.39.2:8441: connect: connection refused
	E0717 00:39:42.833725       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-023523&resourceVersion=531": dial tcp 192.168.39.2:8441: connect: connection refused
	W0717 00:39:43.425095       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=514": dial tcp 192.168.39.2:8441: connect: connection refused
	E0717 00:39:43.425159       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=514": dial tcp 192.168.39.2:8441: connect: connection refused
	W0717 00:39:49.449967       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-023523&resourceVersion=531": dial tcp 192.168.39.2:8441: connect: connection refused
	E0717 00:39:49.450029       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-023523&resourceVersion=531": dial tcp 192.168.39.2:8441: connect: connection refused
	W0717 00:39:53.374488       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=514": dial tcp 192.168.39.2:8441: connect: connection refused
	E0717 00:39:53.374554       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=514": dial tcp 192.168.39.2:8441: connect: connection refused
	W0717 00:39:55.153183       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=514": dial tcp 192.168.39.2:8441: connect: connection refused
	E0717 00:39:55.153245       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=514": dial tcp 192.168.39.2:8441: connect: connection refused
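For context on the component log above: the "%!s(MISSING)" fragments are present in the captured output itself, not damage to this report. The URL-encoded label selector (%21 = "!", %2F = "/", %2C = ",") has evidently been passed through a Printf-style formatter with no arguments, so "%21", "%2F" and "%2C" are parsed as format verbs with missing operands. A minimal Go sketch, not taken from the minikube or client-go sources, that reproduces the effect:

	package main

	import "fmt"

	func main() {
		// "%21" is read as verb 's' with width 21, "%2F" as verb 'F' with width 2,
		// and "%2C" as verb 'C' with width 2; with no matching arguments each is
		// rendered as %!s(MISSING), %!F(MISSING) and %!C(MISSING) respectively.
		// (go vet flags the missing arguments, which is exactly the point.)
		fmt.Printf("labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name\n")
	}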
	
	
	==> kube-scheduler [1d7472a2bf1e530f31c5f3b38ba95281fe3834746b6b885997d7859fcadcad5d] <==
	W0717 00:39:29.792828       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 00:39:29.792855       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 00:39:29.792904       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:39:29.792932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:39:29.793203       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 00:39:29.793232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 00:39:29.793457       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 00:39:29.795502       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 00:39:29.795751       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 00:39:29.795845       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 00:39:29.795980       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:39:29.796082       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:39:29.796285       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 00:39:29.796318       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 00:39:29.796548       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 00:39:29.796581       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 00:39:29.796707       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 00:39:29.796802       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 00:39:29.797000       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 00:39:29.799479       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 00:39:29.799576       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 00:39:29.799611       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 00:39:29.799518       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 00:39:29.799724       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0717 00:39:30.745919       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [6626d24fd162d867d1047572fde888420f168a64a164d9c777eb8add6e073ddd] <==
	I0717 00:38:39.720731       1 serving.go:380] Generated self-signed cert in-memory
	W0717 00:38:41.757807       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 00:38:41.757851       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 00:38:41.757863       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 00:38:41.757869       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 00:38:41.785891       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0717 00:38:41.785930       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:38:41.787719       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0717 00:38:41.787855       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 00:38:41.787886       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 00:38:41.788226       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 00:38:41.888443       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0717 00:39:11.403018       1 run.go:74] "command failed" err="finished without leader elect"
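The "Unable to get configmap/extension-apiserver-authentication" warning near the top of this scheduler log includes its own suggested fix. Filled in with illustrative values (the rolebinding name and the kube-system:kube-scheduler service account are placeholders, not values taken from this cluster), the command would look roughly like:

	kubectl create rolebinding extension-apiserver-authentication-reader \
	  --namespace=kube-system \
	  --role=extension-apiserver-authentication-reader \
	  --serviceaccount=kube-system:kube-scheduler

In this run the scheduler simply continued without the authentication configuration, as the "Continuing without authentication configuration" line above shows.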
	
	
	==> kubelet <==
	Jul 17 00:41:27 functional-023523 kubelet[5794]: I0717 00:41:27.216618    5794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/e2305f04-cb6a-472a-828d-4da88721a51c-test-volume\") pod \"busybox-mount\" (UID: \"e2305f04-cb6a-472a-828d-4da88721a51c\") " pod="default/busybox-mount"
	Jul 17 00:41:28 functional-023523 kubelet[5794]: I0717 00:41:28.643527    5794 topology_manager.go:215] "Topology Admit Handler" podUID="f75595fa-b139-4964-9671-9abd33562d43" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-b5fc48f67-fd6gh"
	Jul 17 00:41:28 functional-023523 kubelet[5794]: I0717 00:41:28.677883    5794 topology_manager.go:215] "Topology Admit Handler" podUID="816e6cd2-2ed7-497e-a215-9661100fb415" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-779776cb65-ckndh"
	Jul 17 00:41:28 functional-023523 kubelet[5794]: I0717 00:41:28.730749    5794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcmfm\" (UniqueName: \"kubernetes.io/projected/816e6cd2-2ed7-497e-a215-9661100fb415-kube-api-access-dcmfm\") pod \"kubernetes-dashboard-779776cb65-ckndh\" (UID: \"816e6cd2-2ed7-497e-a215-9661100fb415\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-ckndh"
	Jul 17 00:41:28 functional-023523 kubelet[5794]: I0717 00:41:28.730986    5794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s7kj\" (UniqueName: \"kubernetes.io/projected/f75595fa-b139-4964-9671-9abd33562d43-kube-api-access-9s7kj\") pod \"dashboard-metrics-scraper-b5fc48f67-fd6gh\" (UID: \"f75595fa-b139-4964-9671-9abd33562d43\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-fd6gh"
	Jul 17 00:41:28 functional-023523 kubelet[5794]: I0717 00:41:28.731099    5794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f75595fa-b139-4964-9671-9abd33562d43-tmp-volume\") pod \"dashboard-metrics-scraper-b5fc48f67-fd6gh\" (UID: \"f75595fa-b139-4964-9671-9abd33562d43\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-fd6gh"
	Jul 17 00:41:28 functional-023523 kubelet[5794]: I0717 00:41:28.731294    5794 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/816e6cd2-2ed7-497e-a215-9661100fb415-tmp-volume\") pod \"kubernetes-dashboard-779776cb65-ckndh\" (UID: \"816e6cd2-2ed7-497e-a215-9661100fb415\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-ckndh"
	Jul 17 00:41:31 functional-023523 kubelet[5794]: E0717 00:41:31.662269    5794 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:41:31 functional-023523 kubelet[5794]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:41:31 functional-023523 kubelet[5794]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:41:31 functional-023523 kubelet[5794]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:41:31 functional-023523 kubelet[5794]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:41:33 functional-023523 kubelet[5794]: I0717 00:41:33.768067    5794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwh86\" (UniqueName: \"kubernetes.io/projected/e2305f04-cb6a-472a-828d-4da88721a51c-kube-api-access-xwh86\") pod \"e2305f04-cb6a-472a-828d-4da88721a51c\" (UID: \"e2305f04-cb6a-472a-828d-4da88721a51c\") "
	Jul 17 00:41:33 functional-023523 kubelet[5794]: I0717 00:41:33.768111    5794 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/e2305f04-cb6a-472a-828d-4da88721a51c-test-volume\") pod \"e2305f04-cb6a-472a-828d-4da88721a51c\" (UID: \"e2305f04-cb6a-472a-828d-4da88721a51c\") "
	Jul 17 00:41:33 functional-023523 kubelet[5794]: I0717 00:41:33.768256    5794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2305f04-cb6a-472a-828d-4da88721a51c-test-volume" (OuterVolumeSpecName: "test-volume") pod "e2305f04-cb6a-472a-828d-4da88721a51c" (UID: "e2305f04-cb6a-472a-828d-4da88721a51c"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jul 17 00:41:33 functional-023523 kubelet[5794]: I0717 00:41:33.770621    5794 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2305f04-cb6a-472a-828d-4da88721a51c-kube-api-access-xwh86" (OuterVolumeSpecName: "kube-api-access-xwh86") pod "e2305f04-cb6a-472a-828d-4da88721a51c" (UID: "e2305f04-cb6a-472a-828d-4da88721a51c"). InnerVolumeSpecName "kube-api-access-xwh86". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 00:41:33 functional-023523 kubelet[5794]: I0717 00:41:33.868571    5794 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-xwh86\" (UniqueName: \"kubernetes.io/projected/e2305f04-cb6a-472a-828d-4da88721a51c-kube-api-access-xwh86\") on node \"functional-023523\" DevicePath \"\""
	Jul 17 00:41:33 functional-023523 kubelet[5794]: I0717 00:41:33.868631    5794 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/e2305f04-cb6a-472a-828d-4da88721a51c-test-volume\") on node \"functional-023523\" DevicePath \"\""
	Jul 17 00:41:34 functional-023523 kubelet[5794]: I0717 00:41:34.552469    5794 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b6fdb77f0dccceed5017d744b3bb38ceb8c27d63b449cca838d64ae8d434f84"
	Jul 17 00:41:43 functional-023523 kubelet[5794]: I0717 00:41:43.616507    5794 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-fd6gh" podStartSLOduration=8.946646885 podStartE2EDuration="15.616492288s" podCreationTimestamp="2024-07-17 00:41:28 +0000 UTC" firstStartedPulling="2024-07-17 00:41:29.280173977 +0000 UTC m=+117.868946942" lastFinishedPulling="2024-07-17 00:41:35.95001939 +0000 UTC m=+124.538792345" observedRunningTime="2024-07-17 00:41:36.579685899 +0000 UTC m=+125.168458874" watchObservedRunningTime="2024-07-17 00:41:43.616492288 +0000 UTC m=+132.205265283"
	Jul 17 00:42:31 functional-023523 kubelet[5794]: E0717 00:42:31.645276    5794 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:42:31 functional-023523 kubelet[5794]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:42:31 functional-023523 kubelet[5794]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:42:31 functional-023523 kubelet[5794]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:42:31 functional-023523 kubelet[5794]:  > table="nat" chain="KUBE-KUBELET-CANARY"
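The repeated "Could not set up iptables canary" errors in this kubelet log come from the ip6tables probe: the node's kernel has no ip6tables "nat" table available, so the kubelet cannot create its KUBE-KUBELET-CANARY chain there. This is a kubelet self-check and is most likely incidental to the PersistentVolumeClaim failure under test. On a stock Linux kernel where the legacy table is provided by the ip6table_nat module, a check would look like the following (shown for illustration only, not something this test run performed):

	sudo modprobe ip6table_nat
	sudo ip6tables -t nat -L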
	
	
	==> kubernetes-dashboard [c9a606d5bb6109ebf52fd768b64f24be44e9bf7e39fd1211d356fd601a42bb72] <==
	2024/07/17 00:41:43 Starting overwatch
	2024/07/17 00:41:43 Using namespace: kubernetes-dashboard
	2024/07/17 00:41:43 Using in-cluster config to connect to apiserver
	2024/07/17 00:41:43 Using secret token for csrf signing
	2024/07/17 00:41:43 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/07/17 00:41:43 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/07/17 00:41:43 Successful initial request to the apiserver, version: v1.30.2
	2024/07/17 00:41:43 Generating JWE encryption key
	2024/07/17 00:41:43 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/07/17 00:41:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/07/17 00:41:43 Initializing JWE encryption key from synchronized object
	2024/07/17 00:41:43 Creating in-cluster Sidecar client
	2024/07/17 00:41:43 Successful request to sidecar
	2024/07/17 00:41:43 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [acdc589ecd79cc98d5f3cae8f57b47a03938cc7212671bb3e05f2cf37f28f792] <==
	I0717 00:39:33.172045       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 00:39:33.202985       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 00:39:33.203300       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0717 00:39:36.659587       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0717 00:39:40.918458       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0717 00:39:44.513918       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0717 00:39:47.565804       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0717 00:39:50.585493       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0717 00:39:54.235635       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0717 00:39:56.394524       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	I0717 00:39:59.552255       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 00:39:59.552686       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-023523_26c6ec05-cbca-4cf1-ad63-9586a9d8ab61!
	I0717 00:39:59.553493       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1df2a85b-a509-4a69-8fc3-f00018e468cb", APIVersion:"v1", ResourceVersion:"570", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-023523_26c6ec05-cbca-4cf1-ad63-9586a9d8ab61 became leader
	I0717 00:39:59.653818       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-023523_26c6ec05-cbca-4cf1-ad63-9586a9d8ab61!
	I0717 00:40:23.656504       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0717 00:40:23.659285       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"64c49e7f-d009-4750-9a6f-592d6dc98eeb", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0717 00:40:23.656717       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    54c61d70-e0d2-4681-b816-c8c37a16893a 380 0 2024-07-17 00:38:22 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-07-17 00:38:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-64c49e7f-d009-4750-9a6f-592d6dc98eeb &PersistentVolumeClaim{ObjectMeta:{myclaim  default  64c49e7f-d009-4750-9a6f-592d6dc98eeb 657 0 2024-07-17 00:40:23 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-07-17 00:40:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-07-17 00:40:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0717 00:40:23.660347       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-64c49e7f-d009-4750-9a6f-592d6dc98eeb" provisioned
	I0717 00:40:23.660483       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0717 00:40:23.660498       1 volume_store.go:212] Trying to save persistentvolume "pvc-64c49e7f-d009-4750-9a6f-592d6dc98eeb"
	I0717 00:40:23.719476       1 volume_store.go:219] persistentvolume "pvc-64c49e7f-d009-4750-9a6f-592d6dc98eeb" saved
	I0717 00:40:23.721890       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"64c49e7f-d009-4750-9a6f-592d6dc98eeb", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-64c49e7f-d009-4750-9a6f-592d6dc98eeb
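Taken together, the provisioner lines above record the claim that was provisioned: PVC "myclaim" in namespace "default", ReadWriteOnce, 500Mi, defaulted to the "standard" hostpath StorageClass, and written out as volume pvc-64c49e7f-d009-4750-9a6f-592d6dc98eeb under /tmp/hostpath-provisioner/default/myclaim. A minimal manifest equivalent to that claim (reconstructed from the last-applied-configuration annotation in the log above, not copied from the test's testdata) would be:

	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	  namespace: default
	spec:
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 500Mi
	  volumeMode: Filesystem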
	
	
	==> storage-provisioner [d38a2c8af50a4b2ca3f3874a937953bc2d4f9127253e1b8d2b7dbabb7ff755c3] <==
	I0717 00:38:42.956622       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 00:38:42.976272       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 00:38:42.976337       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 00:39:00.378295       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 00:39:00.378488       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-023523_5c2d7644-0a8a-49fb-a331-317b840a5ca7!
	I0717 00:39:00.379067       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1df2a85b-a509-4a69-8fc3-f00018e468cb", APIVersion:"v1", ResourceVersion:"505", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-023523_5c2d7644-0a8a-49fb-a331-317b840a5ca7 became leader
	I0717 00:39:00.479473       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-023523_5c2d7644-0a8a-49fb-a331-317b840a5ca7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-023523 -n functional-023523
helpers_test.go:261: (dbg) Run:  kubectl --context functional-023523 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-023523 describe pod busybox-mount sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-023523 describe pod busybox-mount sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-023523/192.168.39.2
	Start Time:       Wed, 17 Jul 2024 00:41:27 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://115c97017c755803c1eb4b329bd91de58af8b6f58fb9ecf2cb216ab2ba03bdca
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 17 Jul 2024 00:41:31 +0000
	      Finished:     Wed, 17 Jul 2024 00:41:31 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xwh86 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-xwh86:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2m1s  default-scheduler  Successfully assigned default/busybox-mount to functional-023523
	  Normal  Pulling    2m1s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     117s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 4.034s (4.035s including waiting). Image size: 4631262 bytes.
	  Normal  Created    117s  kubelet            Created container mount-munger
	  Normal  Started    117s  kubelet            Started container mount-munger
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Image:        docker.io/nginx
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4mndx (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-4mndx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                    From               Message
	  ----     ------            ----                   ----               -------
	  Warning  FailedScheduling  2m33s (x2 over 2m35s)  default-scheduler  0/1 nodes are available: persistentvolume "pvc-64c49e7f-d009-4750-9a6f-592d6dc98eeb" not found. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  2m23s                  default-scheduler  0/1 nodes are available: 1 node(s) unavailable due to one or more pvc(s) bound to non-existent pv(s). preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
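	These two FailedScheduling events are the heart of the failure: sp-pod references PVC "myclaim", the claim points at PV "pvc-64c49e7f-d009-4750-9a6f-592d6dc98eeb", but the scheduler cannot find that PV, so the pod never leaves Pending. A typical way to confirm the mismatch by hand (ordinary kubectl commands, not part of the recorded test run) would be:

	kubectl --context functional-023523 -n default get pvc myclaim
	kubectl --context functional-023523 get pv pvc-64c49e7f-d009-4750-9a6f-592d6dc98eeb
	kubectl --context functional-023523 -n default describe pod sp-pod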

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (190.14s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 node stop m02 -v=7 --alsologtostderr
E0717 00:50:17.179772   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
E0717 00:50:44.866891   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-029113 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.452712394s)

                                                
                                                
-- stdout --
	* Stopping node "ha-029113-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 00:49:45.413126   27820 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:49:45.413423   27820 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:49:45.413436   27820 out.go:304] Setting ErrFile to fd 2...
	I0717 00:49:45.413443   27820 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:49:45.413689   27820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 00:49:45.413955   27820 mustload.go:65] Loading cluster: ha-029113
	I0717 00:49:45.414319   27820 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:49:45.414337   27820 stop.go:39] StopHost: ha-029113-m02
	I0717 00:49:45.414830   27820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:49:45.414882   27820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:49:45.430234   27820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40987
	I0717 00:49:45.430745   27820 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:49:45.431350   27820 main.go:141] libmachine: Using API Version  1
	I0717 00:49:45.431379   27820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:49:45.431725   27820 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:49:45.434197   27820 out.go:177] * Stopping node "ha-029113-m02"  ...
	I0717 00:49:45.435539   27820 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0717 00:49:45.435583   27820 main.go:141] libmachine: (ha-029113-m02) Calling .DriverName
	I0717 00:49:45.435774   27820 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0717 00:49:45.435809   27820 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	I0717 00:49:45.438528   27820 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:49:45.438963   27820 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:49:45.439002   27820 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:49:45.439163   27820 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHPort
	I0717 00:49:45.439326   27820 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:49:45.439471   27820 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHUsername
	I0717 00:49:45.439615   27820 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02/id_rsa Username:docker}
	I0717 00:49:45.530543   27820 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0717 00:49:45.584221   27820 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0717 00:49:45.637929   27820 main.go:141] libmachine: Stopping "ha-029113-m02"...
	I0717 00:49:45.637954   27820 main.go:141] libmachine: (ha-029113-m02) Calling .GetState
	I0717 00:49:45.639418   27820 main.go:141] libmachine: (ha-029113-m02) Calling .Stop
	I0717 00:49:45.642957   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 0/120
	I0717 00:49:46.644801   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 1/120
	I0717 00:49:47.646676   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 2/120
	I0717 00:49:48.649000   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 3/120
	I0717 00:49:49.650600   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 4/120
	I0717 00:49:50.652175   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 5/120
	I0717 00:49:51.653455   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 6/120
	I0717 00:49:52.654591   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 7/120
	I0717 00:49:53.655909   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 8/120
	I0717 00:49:54.656992   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 9/120
	I0717 00:49:55.659179   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 10/120
	I0717 00:49:56.660365   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 11/120
	I0717 00:49:57.661480   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 12/120
	I0717 00:49:58.663127   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 13/120
	I0717 00:49:59.664237   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 14/120
	I0717 00:50:00.665849   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 15/120
	I0717 00:50:01.667080   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 16/120
	I0717 00:50:02.668790   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 17/120
	I0717 00:50:03.670601   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 18/120
	I0717 00:50:04.671922   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 19/120
	I0717 00:50:05.674001   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 20/120
	I0717 00:50:06.675842   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 21/120
	I0717 00:50:07.677055   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 22/120
	I0717 00:50:08.678365   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 23/120
	I0717 00:50:09.680561   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 24/120
	I0717 00:50:10.682375   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 25/120
	I0717 00:50:11.683863   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 26/120
	I0717 00:50:12.685183   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 27/120
	I0717 00:50:13.686459   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 28/120
	I0717 00:50:14.687758   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 29/120
	I0717 00:50:15.689004   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 30/120
	I0717 00:50:16.690262   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 31/120
	I0717 00:50:17.691745   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 32/120
	I0717 00:50:18.693179   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 33/120
	I0717 00:50:19.694594   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 34/120
	I0717 00:50:20.696545   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 35/120
	I0717 00:50:21.698011   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 36/120
	I0717 00:50:22.699285   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 37/120
	I0717 00:50:23.701073   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 38/120
	I0717 00:50:24.702271   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 39/120
	I0717 00:50:25.704153   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 40/120
	I0717 00:50:26.705444   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 41/120
	I0717 00:50:27.707088   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 42/120
	I0717 00:50:28.708816   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 43/120
	I0717 00:50:29.710149   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 44/120
	I0717 00:50:30.711726   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 45/120
	I0717 00:50:31.712983   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 46/120
	I0717 00:50:32.715107   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 47/120
	I0717 00:50:33.717065   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 48/120
	I0717 00:50:34.718281   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 49/120
	I0717 00:50:35.720101   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 50/120
	I0717 00:50:36.721384   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 51/120
	I0717 00:50:37.723022   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 52/120
	I0717 00:50:38.724811   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 53/120
	I0717 00:50:39.726591   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 54/120
	I0717 00:50:40.728152   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 55/120
	I0717 00:50:41.729388   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 56/120
	I0717 00:50:42.730995   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 57/120
	I0717 00:50:43.733223   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 58/120
	I0717 00:50:44.734562   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 59/120
	I0717 00:50:45.736734   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 60/120
	I0717 00:50:46.737962   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 61/120
	I0717 00:50:47.739202   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 62/120
	I0717 00:50:48.740903   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 63/120
	I0717 00:50:49.742136   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 64/120
	I0717 00:50:50.743773   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 65/120
	I0717 00:50:51.745158   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 66/120
	I0717 00:50:52.746618   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 67/120
	I0717 00:50:53.747954   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 68/120
	I0717 00:50:54.749187   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 69/120
	I0717 00:50:55.751103   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 70/120
	I0717 00:50:56.752882   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 71/120
	I0717 00:50:57.754026   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 72/120
	I0717 00:50:58.755281   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 73/120
	I0717 00:50:59.756462   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 74/120
	I0717 00:51:00.758219   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 75/120
	I0717 00:51:01.759578   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 76/120
	I0717 00:51:02.760955   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 77/120
	I0717 00:51:03.762116   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 78/120
	I0717 00:51:04.763465   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 79/120
	I0717 00:51:05.765576   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 80/120
	I0717 00:51:06.767078   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 81/120
	I0717 00:51:07.769022   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 82/120
	I0717 00:51:08.770364   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 83/120
	I0717 00:51:09.771512   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 84/120
	I0717 00:51:10.773189   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 85/120
	I0717 00:51:11.774602   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 86/120
	I0717 00:51:12.775868   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 87/120
	I0717 00:51:13.777502   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 88/120
	I0717 00:51:14.778827   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 89/120
	I0717 00:51:15.781071   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 90/120
	I0717 00:51:16.782338   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 91/120
	I0717 00:51:17.783571   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 92/120
	I0717 00:51:18.785093   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 93/120
	I0717 00:51:19.786465   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 94/120
	I0717 00:51:20.788193   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 95/120
	I0717 00:51:21.790122   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 96/120
	I0717 00:51:22.791378   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 97/120
	I0717 00:51:23.792684   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 98/120
	I0717 00:51:24.793954   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 99/120
	I0717 00:51:25.796068   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 100/120
	I0717 00:51:26.797387   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 101/120
	I0717 00:51:27.798731   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 102/120
	I0717 00:51:28.800047   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 103/120
	I0717 00:51:29.801511   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 104/120
	I0717 00:51:30.803015   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 105/120
	I0717 00:51:31.804226   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 106/120
	I0717 00:51:32.805570   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 107/120
	I0717 00:51:33.807119   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 108/120
	I0717 00:51:34.809057   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 109/120
	I0717 00:51:35.810749   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 110/120
	I0717 00:51:36.811950   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 111/120
	I0717 00:51:37.813446   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 112/120
	I0717 00:51:38.815224   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 113/120
	I0717 00:51:39.816411   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 114/120
	I0717 00:51:40.818333   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 115/120
	I0717 00:51:41.820319   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 116/120
	I0717 00:51:42.822191   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 117/120
	I0717 00:51:43.823516   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 118/120
	I0717 00:51:44.824998   27820 main.go:141] libmachine: (ha-029113-m02) Waiting for machine to stop 119/120
	I0717 00:51:45.825871   27820 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0717 00:51:45.826007   27820 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-029113 node stop m02 -v=7 --alsologtostderr": exit status 30
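The stderr above shows the kvm2 driver calling Stop and then waiting through all 120 iterations of its stop loop (about two minutes) without the domain ever leaving the "Running" state, which is why the stop command exits with status 30. For illustration only (not something the test harness does), the underlying libvirt domain could be inspected and, if necessary, forced off with:

	virsh list --all
	virsh destroy ha-029113-m02

Depending on the host setup these may need sudo or an explicit "-c qemu:///system" connection URI.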
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-029113 status -v=7 --alsologtostderr: exit status 3 (18.975723944s)

                                                
                                                
-- stdout --
	ha-029113
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-029113-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-029113-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-029113-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 00:51:45.869021   28262 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:51:45.869150   28262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:51:45.869161   28262 out.go:304] Setting ErrFile to fd 2...
	I0717 00:51:45.869168   28262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:51:45.869351   28262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 00:51:45.869566   28262 out.go:298] Setting JSON to false
	I0717 00:51:45.869605   28262 mustload.go:65] Loading cluster: ha-029113
	I0717 00:51:45.869643   28262 notify.go:220] Checking for updates...
	I0717 00:51:45.870136   28262 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:51:45.870156   28262 status.go:255] checking status of ha-029113 ...
	I0717 00:51:45.870620   28262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:51:45.870685   28262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:51:45.888588   28262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40135
	I0717 00:51:45.889121   28262 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:51:45.889657   28262 main.go:141] libmachine: Using API Version  1
	I0717 00:51:45.889682   28262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:51:45.890099   28262 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:51:45.890314   28262 main.go:141] libmachine: (ha-029113) Calling .GetState
	I0717 00:51:45.891972   28262 status.go:330] ha-029113 host status = "Running" (err=<nil>)
	I0717 00:51:45.891986   28262 host.go:66] Checking if "ha-029113" exists ...
	I0717 00:51:45.892281   28262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:51:45.892341   28262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:51:45.906749   28262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37531
	I0717 00:51:45.907147   28262 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:51:45.907657   28262 main.go:141] libmachine: Using API Version  1
	I0717 00:51:45.907691   28262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:51:45.907977   28262 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:51:45.908159   28262 main.go:141] libmachine: (ha-029113) Calling .GetIP
	I0717 00:51:45.910898   28262 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:51:45.911283   28262 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:51:45.911315   28262 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:51:45.911442   28262 host.go:66] Checking if "ha-029113" exists ...
	I0717 00:51:45.911736   28262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:51:45.911778   28262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:51:45.925568   28262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38557
	I0717 00:51:45.925954   28262 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:51:45.926408   28262 main.go:141] libmachine: Using API Version  1
	I0717 00:51:45.926423   28262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:51:45.926745   28262 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:51:45.926925   28262 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:51:45.927105   28262 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:51:45.927130   28262 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:51:45.929738   28262 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:51:45.930099   28262 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:51:45.930134   28262 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:51:45.930231   28262 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:51:45.930397   28262 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:51:45.930574   28262 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:51:45.930690   28262 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:51:46.017011   28262 ssh_runner.go:195] Run: systemctl --version
	I0717 00:51:46.024441   28262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:51:46.040494   28262 kubeconfig.go:125] found "ha-029113" server: "https://192.168.39.254:8443"
	I0717 00:51:46.040517   28262 api_server.go:166] Checking apiserver status ...
	I0717 00:51:46.040545   28262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:51:46.055452   28262 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup
	W0717 00:51:46.064294   28262 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:51:46.064334   28262 ssh_runner.go:195] Run: ls
	I0717 00:51:46.068706   28262 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:51:46.072842   28262 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:51:46.072859   28262 status.go:422] ha-029113 apiserver status = Running (err=<nil>)
	I0717 00:51:46.072868   28262 status.go:257] ha-029113 status: &{Name:ha-029113 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:51:46.072881   28262 status.go:255] checking status of ha-029113-m02 ...
	I0717 00:51:46.073145   28262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:51:46.073177   28262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:51:46.087650   28262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38199
	I0717 00:51:46.088096   28262 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:51:46.088685   28262 main.go:141] libmachine: Using API Version  1
	I0717 00:51:46.088710   28262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:51:46.089058   28262 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:51:46.089243   28262 main.go:141] libmachine: (ha-029113-m02) Calling .GetState
	I0717 00:51:46.090753   28262 status.go:330] ha-029113-m02 host status = "Running" (err=<nil>)
	I0717 00:51:46.090770   28262 host.go:66] Checking if "ha-029113-m02" exists ...
	I0717 00:51:46.091123   28262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:51:46.091163   28262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:51:46.105506   28262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34103
	I0717 00:51:46.105870   28262 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:51:46.106324   28262 main.go:141] libmachine: Using API Version  1
	I0717 00:51:46.106343   28262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:51:46.106642   28262 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:51:46.106888   28262 main.go:141] libmachine: (ha-029113-m02) Calling .GetIP
	I0717 00:51:46.109829   28262 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:51:46.110180   28262 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:51:46.110207   28262 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:51:46.110291   28262 host.go:66] Checking if "ha-029113-m02" exists ...
	I0717 00:51:46.110699   28262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:51:46.110753   28262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:51:46.125135   28262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41857
	I0717 00:51:46.125565   28262 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:51:46.126025   28262 main.go:141] libmachine: Using API Version  1
	I0717 00:51:46.126047   28262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:51:46.126379   28262 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:51:46.126535   28262 main.go:141] libmachine: (ha-029113-m02) Calling .DriverName
	I0717 00:51:46.126759   28262 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:51:46.126789   28262 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	I0717 00:51:46.129066   28262 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:51:46.129429   28262 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:51:46.129453   28262 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:51:46.129556   28262 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHPort
	I0717 00:51:46.129708   28262 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:51:46.129821   28262 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHUsername
	I0717 00:51:46.129949   28262 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02/id_rsa Username:docker}
	W0717 00:52:04.442743   28262 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.166:22: connect: no route to host
	W0717 00:52:04.442825   28262 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.166:22: connect: no route to host
	E0717 00:52:04.442855   28262 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.166:22: connect: no route to host
	I0717 00:52:04.442862   28262 status.go:257] ha-029113-m02 status: &{Name:ha-029113-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0717 00:52:04.442879   28262 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.166:22: connect: no route to host
	I0717 00:52:04.442887   28262 status.go:255] checking status of ha-029113-m03 ...
	I0717 00:52:04.443208   28262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:04.443245   28262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:04.457654   28262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36001
	I0717 00:52:04.458136   28262 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:04.458617   28262 main.go:141] libmachine: Using API Version  1
	I0717 00:52:04.458639   28262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:04.458917   28262 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:04.459095   28262 main.go:141] libmachine: (ha-029113-m03) Calling .GetState
	I0717 00:52:04.460587   28262 status.go:330] ha-029113-m03 host status = "Running" (err=<nil>)
	I0717 00:52:04.460603   28262 host.go:66] Checking if "ha-029113-m03" exists ...
	I0717 00:52:04.460898   28262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:04.460940   28262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:04.475152   28262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33591
	I0717 00:52:04.475529   28262 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:04.475950   28262 main.go:141] libmachine: Using API Version  1
	I0717 00:52:04.475972   28262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:04.476285   28262 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:04.476447   28262 main.go:141] libmachine: (ha-029113-m03) Calling .GetIP
	I0717 00:52:04.479002   28262 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:52:04.479351   28262 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:52:04.479373   28262 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:52:04.479544   28262 host.go:66] Checking if "ha-029113-m03" exists ...
	I0717 00:52:04.479823   28262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:04.479852   28262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:04.493823   28262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45823
	I0717 00:52:04.494212   28262 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:04.494679   28262 main.go:141] libmachine: Using API Version  1
	I0717 00:52:04.494708   28262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:04.495047   28262 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:04.495223   28262 main.go:141] libmachine: (ha-029113-m03) Calling .DriverName
	I0717 00:52:04.495414   28262 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:52:04.495435   28262 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	I0717 00:52:04.497815   28262 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:52:04.498243   28262 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:52:04.498267   28262 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:52:04.498418   28262 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHPort
	I0717 00:52:04.498590   28262 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:52:04.498729   28262 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHUsername
	I0717 00:52:04.498848   28262 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/id_rsa Username:docker}
	I0717 00:52:04.583392   28262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:52:04.600340   28262 kubeconfig.go:125] found "ha-029113" server: "https://192.168.39.254:8443"
	I0717 00:52:04.600363   28262 api_server.go:166] Checking apiserver status ...
	I0717 00:52:04.600390   28262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:52:04.616095   28262 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1575/cgroup
	W0717 00:52:04.626060   28262 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1575/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:52:04.626109   28262 ssh_runner.go:195] Run: ls
	I0717 00:52:04.630723   28262 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:52:04.637095   28262 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:52:04.637114   28262 status.go:422] ha-029113-m03 apiserver status = Running (err=<nil>)
	I0717 00:52:04.637121   28262 status.go:257] ha-029113-m03 status: &{Name:ha-029113-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:52:04.637134   28262 status.go:255] checking status of ha-029113-m04 ...
	I0717 00:52:04.637432   28262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:04.637469   28262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:04.652623   28262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43067
	I0717 00:52:04.652993   28262 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:04.653451   28262 main.go:141] libmachine: Using API Version  1
	I0717 00:52:04.653471   28262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:04.653787   28262 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:04.653960   28262 main.go:141] libmachine: (ha-029113-m04) Calling .GetState
	I0717 00:52:04.655528   28262 status.go:330] ha-029113-m04 host status = "Running" (err=<nil>)
	I0717 00:52:04.655560   28262 host.go:66] Checking if "ha-029113-m04" exists ...
	I0717 00:52:04.656073   28262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:04.656120   28262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:04.670030   28262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46303
	I0717 00:52:04.670498   28262 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:04.670938   28262 main.go:141] libmachine: Using API Version  1
	I0717 00:52:04.670963   28262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:04.671338   28262 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:04.671502   28262 main.go:141] libmachine: (ha-029113-m04) Calling .GetIP
	I0717 00:52:04.674529   28262 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:52:04.675123   28262 main.go:141] libmachine: (ha-029113-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:a4:ba", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:48:45 +0000 UTC Type:0 Mac:52:54:00:be:a4:ba Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-029113-m04 Clientid:01:52:54:00:be:a4:ba}
	I0717 00:52:04.675163   28262 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:52:04.675318   28262 host.go:66] Checking if "ha-029113-m04" exists ...
	I0717 00:52:04.675709   28262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:04.675784   28262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:04.690894   28262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32899
	I0717 00:52:04.691292   28262 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:04.691734   28262 main.go:141] libmachine: Using API Version  1
	I0717 00:52:04.691754   28262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:04.692030   28262 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:04.692207   28262 main.go:141] libmachine: (ha-029113-m04) Calling .DriverName
	I0717 00:52:04.692402   28262 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:52:04.692418   28262 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHHostname
	I0717 00:52:04.694903   28262 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:52:04.695272   28262 main.go:141] libmachine: (ha-029113-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:a4:ba", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:48:45 +0000 UTC Type:0 Mac:52:54:00:be:a4:ba Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-029113-m04 Clientid:01:52:54:00:be:a4:ba}
	I0717 00:52:04.695294   28262 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:52:04.695433   28262 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHPort
	I0717 00:52:04.695606   28262 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHKeyPath
	I0717 00:52:04.695757   28262 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHUsername
	I0717 00:52:04.695890   28262 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m04/id_rsa Username:docker}
	I0717 00:52:04.783546   28262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:52:04.802035   28262 status.go:257] ha-029113-m04 status: &{Name:ha-029113-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-029113 status -v=7 --alsologtostderr" : exit status 3
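The per-node fields in the status output above come from the probes visible in the stderr: a disk check on /var over SSH, a kubelet check via systemd, and an apiserver health check against the HA virtual IP. A rough, hypothetical shell equivalent of those probes for this run (node IPs, the docker SSH user, key paths, and the VIP 192.168.39.254 are taken from the log; minikube itself uses its own SSH plumbing and client certificates):

	# Storage capacity of /var over SSH; for m02 (192.168.39.166) the dial fails with
	# "no route to host", which is what marks the host as Error in the status output.
	ssh -i .minikube/machines/ha-029113-m02/id_rsa docker@192.168.39.166 "df -h /var | awk 'NR==2{print \$5}'"

	# Kubelet state on a reachable node (exit code 0 means Running).
	ssh -i .minikube/machines/ha-029113/id_rsa docker@192.168.39.95 "sudo systemctl is-active --quiet service kubelet"

	# Apiserver health behind the load-balancer VIP (the test saw HTTP 200 / "ok").
	curl -k https://192.168.39.254:8443/healthz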
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-029113 -n ha-029113
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-029113 logs -n 25: (1.402020605s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-029113 cp ha-029113-m03:/home/docker/cp-test.txt                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile695400083/001/cp-test_ha-029113-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-029113 cp ha-029113-m03:/home/docker/cp-test.txt                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113:/home/docker/cp-test_ha-029113-m03_ha-029113.txt                      |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n ha-029113 sudo cat                                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | /home/docker/cp-test_ha-029113-m03_ha-029113.txt                                |           |         |         |                     |                     |
	| cp      | ha-029113 cp ha-029113-m03:/home/docker/cp-test.txt                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m02:/home/docker/cp-test_ha-029113-m03_ha-029113-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n ha-029113-m02 sudo cat                                         | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | /home/docker/cp-test_ha-029113-m03_ha-029113-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-029113 cp ha-029113-m03:/home/docker/cp-test.txt                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m04:/home/docker/cp-test_ha-029113-m03_ha-029113-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n ha-029113-m04 sudo cat                                         | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | /home/docker/cp-test_ha-029113-m03_ha-029113-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-029113 cp testdata/cp-test.txt                                               | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-029113 cp ha-029113-m04:/home/docker/cp-test.txt                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile695400083/001/cp-test_ha-029113-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-029113 cp ha-029113-m04:/home/docker/cp-test.txt                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113:/home/docker/cp-test_ha-029113-m04_ha-029113.txt                      |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n ha-029113 sudo cat                                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | /home/docker/cp-test_ha-029113-m04_ha-029113.txt                                |           |         |         |                     |                     |
	| cp      | ha-029113 cp ha-029113-m04:/home/docker/cp-test.txt                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m02:/home/docker/cp-test_ha-029113-m04_ha-029113-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n ha-029113-m02 sudo cat                                         | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | /home/docker/cp-test_ha-029113-m04_ha-029113-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-029113 cp ha-029113-m04:/home/docker/cp-test.txt                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m03:/home/docker/cp-test_ha-029113-m04_ha-029113-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n ha-029113-m03 sudo cat                                         | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | /home/docker/cp-test_ha-029113-m04_ha-029113-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-029113 node stop m02 -v=7                                                    | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:43:29
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:43:29.629545   23443 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:43:29.629978   23443 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:43:29.629990   23443 out.go:304] Setting ErrFile to fd 2...
	I0717 00:43:29.629995   23443 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:43:29.630222   23443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 00:43:29.630815   23443 out.go:298] Setting JSON to false
	I0717 00:43:29.631669   23443 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1552,"bootTime":1721175458,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:43:29.631721   23443 start.go:139] virtualization: kvm guest
	I0717 00:43:29.633685   23443 out.go:177] * [ha-029113] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 00:43:29.634964   23443 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 00:43:29.635030   23443 notify.go:220] Checking for updates...
	I0717 00:43:29.637312   23443 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:43:29.638523   23443 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 00:43:29.639779   23443 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 00:43:29.640930   23443 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 00:43:29.642067   23443 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:43:29.643437   23443 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:43:29.676662   23443 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 00:43:29.677927   23443 start.go:297] selected driver: kvm2
	I0717 00:43:29.677944   23443 start.go:901] validating driver "kvm2" against <nil>
	I0717 00:43:29.677955   23443 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:43:29.678643   23443 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:43:29.678723   23443 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19264-3908/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 00:43:29.692865   23443 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 00:43:29.692924   23443 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 00:43:29.693150   23443 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:43:29.693214   23443 cni.go:84] Creating CNI manager for ""
	I0717 00:43:29.693229   23443 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0717 00:43:29.693237   23443 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 00:43:29.693307   23443 start.go:340] cluster config:
	{Name:ha-029113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-029113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:43:29.693410   23443 iso.go:125] acquiring lock: {Name:mk1e382ea32906eee794c9b90164432d2562b456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:43:29.695024   23443 out.go:177] * Starting "ha-029113" primary control-plane node in "ha-029113" cluster
	I0717 00:43:29.696289   23443 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:43:29.696321   23443 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 00:43:29.696333   23443 cache.go:56] Caching tarball of preloaded images
	I0717 00:43:29.696403   23443 preload.go:172] Found /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 00:43:29.696417   23443 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:43:29.696734   23443 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/config.json ...
	I0717 00:43:29.696756   23443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/config.json: {Name:mk1c70be09fae3a15c6dd239577cad4b9c0c123e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:43:29.696900   23443 start.go:360] acquireMachinesLock for ha-029113: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 00:43:29.696933   23443 start.go:364] duration metric: took 18.392µs to acquireMachinesLock for "ha-029113"
	I0717 00:43:29.696955   23443 start.go:93] Provisioning new machine with config: &{Name:ha-029113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-029113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:43:29.697014   23443 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 00:43:29.699183   23443 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 00:43:29.699293   23443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:43:29.699332   23443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:43:29.712954   23443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40209
	I0717 00:43:29.713304   23443 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:43:29.713743   23443 main.go:141] libmachine: Using API Version  1
	I0717 00:43:29.713764   23443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:43:29.714016   23443 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:43:29.714197   23443 main.go:141] libmachine: (ha-029113) Calling .GetMachineName
	I0717 00:43:29.714312   23443 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:43:29.714431   23443 start.go:159] libmachine.API.Create for "ha-029113" (driver="kvm2")
	I0717 00:43:29.714457   23443 client.go:168] LocalClient.Create starting
	I0717 00:43:29.714479   23443 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem
	I0717 00:43:29.714505   23443 main.go:141] libmachine: Decoding PEM data...
	I0717 00:43:29.714524   23443 main.go:141] libmachine: Parsing certificate...
	I0717 00:43:29.714602   23443 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem
	I0717 00:43:29.714630   23443 main.go:141] libmachine: Decoding PEM data...
	I0717 00:43:29.714644   23443 main.go:141] libmachine: Parsing certificate...
	I0717 00:43:29.714660   23443 main.go:141] libmachine: Running pre-create checks...
	I0717 00:43:29.714668   23443 main.go:141] libmachine: (ha-029113) Calling .PreCreateCheck
	I0717 00:43:29.715013   23443 main.go:141] libmachine: (ha-029113) Calling .GetConfigRaw
	I0717 00:43:29.715344   23443 main.go:141] libmachine: Creating machine...
	I0717 00:43:29.715356   23443 main.go:141] libmachine: (ha-029113) Calling .Create
	I0717 00:43:29.715468   23443 main.go:141] libmachine: (ha-029113) Creating KVM machine...
	I0717 00:43:29.716586   23443 main.go:141] libmachine: (ha-029113) DBG | found existing default KVM network
	I0717 00:43:29.717158   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:29.717044   23466 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0717 00:43:29.717184   23443 main.go:141] libmachine: (ha-029113) DBG | created network xml: 
	I0717 00:43:29.717196   23443 main.go:141] libmachine: (ha-029113) DBG | <network>
	I0717 00:43:29.717207   23443 main.go:141] libmachine: (ha-029113) DBG |   <name>mk-ha-029113</name>
	I0717 00:43:29.717215   23443 main.go:141] libmachine: (ha-029113) DBG |   <dns enable='no'/>
	I0717 00:43:29.717225   23443 main.go:141] libmachine: (ha-029113) DBG |   
	I0717 00:43:29.717236   23443 main.go:141] libmachine: (ha-029113) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0717 00:43:29.717244   23443 main.go:141] libmachine: (ha-029113) DBG |     <dhcp>
	I0717 00:43:29.717251   23443 main.go:141] libmachine: (ha-029113) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0717 00:43:29.717258   23443 main.go:141] libmachine: (ha-029113) DBG |     </dhcp>
	I0717 00:43:29.717283   23443 main.go:141] libmachine: (ha-029113) DBG |   </ip>
	I0717 00:43:29.717305   23443 main.go:141] libmachine: (ha-029113) DBG |   
	I0717 00:43:29.717316   23443 main.go:141] libmachine: (ha-029113) DBG | </network>
	I0717 00:43:29.717326   23443 main.go:141] libmachine: (ha-029113) DBG | 
	I0717 00:43:29.722037   23443 main.go:141] libmachine: (ha-029113) DBG | trying to create private KVM network mk-ha-029113 192.168.39.0/24...
	I0717 00:43:29.783430   23443 main.go:141] libmachine: (ha-029113) DBG | private KVM network mk-ha-029113 192.168.39.0/24 created
	I0717 00:43:29.783474   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:29.783394   23466 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 00:43:29.783485   23443 main.go:141] libmachine: (ha-029113) Setting up store path in /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113 ...
	I0717 00:43:29.783502   23443 main.go:141] libmachine: (ha-029113) Building disk image from file:///home/jenkins/minikube-integration/19264-3908/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 00:43:29.783528   23443 main.go:141] libmachine: (ha-029113) Downloading /home/jenkins/minikube-integration/19264-3908/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19264-3908/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 00:43:30.013619   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:30.013452   23466 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa...
	I0717 00:43:30.283548   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:30.283435   23466 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/ha-029113.rawdisk...
	I0717 00:43:30.283588   23443 main.go:141] libmachine: (ha-029113) DBG | Writing magic tar header
	I0717 00:43:30.283610   23443 main.go:141] libmachine: (ha-029113) DBG | Writing SSH key tar header
	I0717 00:43:30.283620   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:30.283558   23466 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113 ...
	I0717 00:43:30.283736   23443 main.go:141] libmachine: (ha-029113) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113
	I0717 00:43:30.283771   23443 main.go:141] libmachine: (ha-029113) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube/machines
	I0717 00:43:30.283787   23443 main.go:141] libmachine: (ha-029113) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113 (perms=drwx------)
	I0717 00:43:30.283808   23443 main.go:141] libmachine: (ha-029113) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 00:43:30.283830   23443 main.go:141] libmachine: (ha-029113) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908
	I0717 00:43:30.283852   23443 main.go:141] libmachine: (ha-029113) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube/machines (perms=drwxr-xr-x)
	I0717 00:43:30.283870   23443 main.go:141] libmachine: (ha-029113) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube (perms=drwxr-xr-x)
	I0717 00:43:30.283884   23443 main.go:141] libmachine: (ha-029113) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908 (perms=drwxrwxr-x)
	I0717 00:43:30.283900   23443 main.go:141] libmachine: (ha-029113) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 00:43:30.283913   23443 main.go:141] libmachine: (ha-029113) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 00:43:30.283926   23443 main.go:141] libmachine: (ha-029113) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 00:43:30.283938   23443 main.go:141] libmachine: (ha-029113) Creating domain...
	I0717 00:43:30.283947   23443 main.go:141] libmachine: (ha-029113) DBG | Checking permissions on dir: /home/jenkins
	I0717 00:43:30.283962   23443 main.go:141] libmachine: (ha-029113) DBG | Checking permissions on dir: /home
	I0717 00:43:30.283972   23443 main.go:141] libmachine: (ha-029113) DBG | Skipping /home - not owner
	I0717 00:43:30.284856   23443 main.go:141] libmachine: (ha-029113) define libvirt domain using xml: 
	I0717 00:43:30.284873   23443 main.go:141] libmachine: (ha-029113) <domain type='kvm'>
	I0717 00:43:30.284880   23443 main.go:141] libmachine: (ha-029113)   <name>ha-029113</name>
	I0717 00:43:30.284887   23443 main.go:141] libmachine: (ha-029113)   <memory unit='MiB'>2200</memory>
	I0717 00:43:30.284897   23443 main.go:141] libmachine: (ha-029113)   <vcpu>2</vcpu>
	I0717 00:43:30.284907   23443 main.go:141] libmachine: (ha-029113)   <features>
	I0717 00:43:30.284914   23443 main.go:141] libmachine: (ha-029113)     <acpi/>
	I0717 00:43:30.284923   23443 main.go:141] libmachine: (ha-029113)     <apic/>
	I0717 00:43:30.284928   23443 main.go:141] libmachine: (ha-029113)     <pae/>
	I0717 00:43:30.284937   23443 main.go:141] libmachine: (ha-029113)     
	I0717 00:43:30.284945   23443 main.go:141] libmachine: (ha-029113)   </features>
	I0717 00:43:30.284949   23443 main.go:141] libmachine: (ha-029113)   <cpu mode='host-passthrough'>
	I0717 00:43:30.284956   23443 main.go:141] libmachine: (ha-029113)   
	I0717 00:43:30.284963   23443 main.go:141] libmachine: (ha-029113)   </cpu>
	I0717 00:43:30.284989   23443 main.go:141] libmachine: (ha-029113)   <os>
	I0717 00:43:30.285013   23443 main.go:141] libmachine: (ha-029113)     <type>hvm</type>
	I0717 00:43:30.285024   23443 main.go:141] libmachine: (ha-029113)     <boot dev='cdrom'/>
	I0717 00:43:30.285037   23443 main.go:141] libmachine: (ha-029113)     <boot dev='hd'/>
	I0717 00:43:30.285063   23443 main.go:141] libmachine: (ha-029113)     <bootmenu enable='no'/>
	I0717 00:43:30.285082   23443 main.go:141] libmachine: (ha-029113)   </os>
	I0717 00:43:30.285094   23443 main.go:141] libmachine: (ha-029113)   <devices>
	I0717 00:43:30.285106   23443 main.go:141] libmachine: (ha-029113)     <disk type='file' device='cdrom'>
	I0717 00:43:30.285123   23443 main.go:141] libmachine: (ha-029113)       <source file='/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/boot2docker.iso'/>
	I0717 00:43:30.285134   23443 main.go:141] libmachine: (ha-029113)       <target dev='hdc' bus='scsi'/>
	I0717 00:43:30.285146   23443 main.go:141] libmachine: (ha-029113)       <readonly/>
	I0717 00:43:30.285160   23443 main.go:141] libmachine: (ha-029113)     </disk>
	I0717 00:43:30.285173   23443 main.go:141] libmachine: (ha-029113)     <disk type='file' device='disk'>
	I0717 00:43:30.285186   23443 main.go:141] libmachine: (ha-029113)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 00:43:30.285209   23443 main.go:141] libmachine: (ha-029113)       <source file='/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/ha-029113.rawdisk'/>
	I0717 00:43:30.285219   23443 main.go:141] libmachine: (ha-029113)       <target dev='hda' bus='virtio'/>
	I0717 00:43:30.285236   23443 main.go:141] libmachine: (ha-029113)     </disk>
	I0717 00:43:30.285252   23443 main.go:141] libmachine: (ha-029113)     <interface type='network'>
	I0717 00:43:30.285265   23443 main.go:141] libmachine: (ha-029113)       <source network='mk-ha-029113'/>
	I0717 00:43:30.285275   23443 main.go:141] libmachine: (ha-029113)       <model type='virtio'/>
	I0717 00:43:30.285284   23443 main.go:141] libmachine: (ha-029113)     </interface>
	I0717 00:43:30.285289   23443 main.go:141] libmachine: (ha-029113)     <interface type='network'>
	I0717 00:43:30.285294   23443 main.go:141] libmachine: (ha-029113)       <source network='default'/>
	I0717 00:43:30.285303   23443 main.go:141] libmachine: (ha-029113)       <model type='virtio'/>
	I0717 00:43:30.285315   23443 main.go:141] libmachine: (ha-029113)     </interface>
	I0717 00:43:30.285328   23443 main.go:141] libmachine: (ha-029113)     <serial type='pty'>
	I0717 00:43:30.285339   23443 main.go:141] libmachine: (ha-029113)       <target port='0'/>
	I0717 00:43:30.285349   23443 main.go:141] libmachine: (ha-029113)     </serial>
	I0717 00:43:30.285361   23443 main.go:141] libmachine: (ha-029113)     <console type='pty'>
	I0717 00:43:30.285371   23443 main.go:141] libmachine: (ha-029113)       <target type='serial' port='0'/>
	I0717 00:43:30.285382   23443 main.go:141] libmachine: (ha-029113)     </console>
	I0717 00:43:30.285392   23443 main.go:141] libmachine: (ha-029113)     <rng model='virtio'>
	I0717 00:43:30.285408   23443 main.go:141] libmachine: (ha-029113)       <backend model='random'>/dev/random</backend>
	I0717 00:43:30.285420   23443 main.go:141] libmachine: (ha-029113)     </rng>
	I0717 00:43:30.285428   23443 main.go:141] libmachine: (ha-029113)     
	I0717 00:43:30.285437   23443 main.go:141] libmachine: (ha-029113)     
	I0717 00:43:30.285445   23443 main.go:141] libmachine: (ha-029113)   </devices>
	I0717 00:43:30.285454   23443 main.go:141] libmachine: (ha-029113) </domain>
	I0717 00:43:30.285463   23443 main.go:141] libmachine: (ha-029113) 
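The XML above is the domain definition handed to libvirt for the ha-029113 machine. As an illustrative sketch only (assuming the libvirt Go bindings at libvirt.org/go/libvirt; the file name and error handling are placeholders, not the kvm2 driver's actual code), defining and booting a domain from such a document looks roughly like this:

// Sketch: define and start a KVM domain from an XML document using the
// libvirt Go bindings. Assumes libvirt.org/go/libvirt is available and a
// local qemu:///system daemon is running.
package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("ha-029113.xml") // a domain document like the one logged above
	if err != nil {
		log.Fatal(err)
	}

	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define the persistent domain, then boot it (equivalent to `virsh define` + `virsh start`).
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("domain defined and started")
}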
	I0717 00:43:30.289368   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:65:21:b3 in network default
	I0717 00:43:30.289864   23443 main.go:141] libmachine: (ha-029113) Ensuring networks are active...
	I0717 00:43:30.289894   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:30.290659   23443 main.go:141] libmachine: (ha-029113) Ensuring network default is active
	I0717 00:43:30.290955   23443 main.go:141] libmachine: (ha-029113) Ensuring network mk-ha-029113 is active
	I0717 00:43:30.291367   23443 main.go:141] libmachine: (ha-029113) Getting domain xml...
	I0717 00:43:30.291994   23443 main.go:141] libmachine: (ha-029113) Creating domain...
	I0717 00:43:31.452349   23443 main.go:141] libmachine: (ha-029113) Waiting to get IP...
	I0717 00:43:31.453202   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:31.453570   23443 main.go:141] libmachine: (ha-029113) DBG | unable to find current IP address of domain ha-029113 in network mk-ha-029113
	I0717 00:43:31.453615   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:31.453562   23466 retry.go:31] will retry after 251.741638ms: waiting for machine to come up
	I0717 00:43:31.706967   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:31.707410   23443 main.go:141] libmachine: (ha-029113) DBG | unable to find current IP address of domain ha-029113 in network mk-ha-029113
	I0717 00:43:31.707440   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:31.707366   23466 retry.go:31] will retry after 295.804163ms: waiting for machine to come up
	I0717 00:43:32.004697   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:32.005111   23443 main.go:141] libmachine: (ha-029113) DBG | unable to find current IP address of domain ha-029113 in network mk-ha-029113
	I0717 00:43:32.005146   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:32.005081   23466 retry.go:31] will retry after 353.624289ms: waiting for machine to come up
	I0717 00:43:32.360538   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:32.360981   23443 main.go:141] libmachine: (ha-029113) DBG | unable to find current IP address of domain ha-029113 in network mk-ha-029113
	I0717 00:43:32.361019   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:32.360949   23466 retry.go:31] will retry after 608.253018ms: waiting for machine to come up
	I0717 00:43:32.970606   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:32.971060   23443 main.go:141] libmachine: (ha-029113) DBG | unable to find current IP address of domain ha-029113 in network mk-ha-029113
	I0717 00:43:32.971080   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:32.971017   23466 retry.go:31] will retry after 543.533236ms: waiting for machine to come up
	I0717 00:43:33.515677   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:33.516113   23443 main.go:141] libmachine: (ha-029113) DBG | unable to find current IP address of domain ha-029113 in network mk-ha-029113
	I0717 00:43:33.516135   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:33.516069   23466 retry.go:31] will retry after 696.415589ms: waiting for machine to come up
	I0717 00:43:34.213929   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:34.214271   23443 main.go:141] libmachine: (ha-029113) DBG | unable to find current IP address of domain ha-029113 in network mk-ha-029113
	I0717 00:43:34.214300   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:34.214233   23466 retry.go:31] will retry after 1.080255731s: waiting for machine to come up
	I0717 00:43:35.295986   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:35.296445   23443 main.go:141] libmachine: (ha-029113) DBG | unable to find current IP address of domain ha-029113 in network mk-ha-029113
	I0717 00:43:35.296474   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:35.296400   23466 retry.go:31] will retry after 1.222285687s: waiting for machine to come up
	I0717 00:43:36.520660   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:36.520986   23443 main.go:141] libmachine: (ha-029113) DBG | unable to find current IP address of domain ha-029113 in network mk-ha-029113
	I0717 00:43:36.521007   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:36.520942   23466 retry.go:31] will retry after 1.580634952s: waiting for machine to come up
	I0717 00:43:38.103829   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:38.104184   23443 main.go:141] libmachine: (ha-029113) DBG | unable to find current IP address of domain ha-029113 in network mk-ha-029113
	I0717 00:43:38.104211   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:38.104144   23466 retry.go:31] will retry after 1.42041846s: waiting for machine to come up
	I0717 00:43:39.526530   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:39.526916   23443 main.go:141] libmachine: (ha-029113) DBG | unable to find current IP address of domain ha-029113 in network mk-ha-029113
	I0717 00:43:39.526938   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:39.526872   23466 retry.go:31] will retry after 2.750366058s: waiting for machine to come up
	I0717 00:43:42.280613   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:42.281014   23443 main.go:141] libmachine: (ha-029113) DBG | unable to find current IP address of domain ha-029113 in network mk-ha-029113
	I0717 00:43:42.281036   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:42.280965   23466 retry.go:31] will retry after 2.193861337s: waiting for machine to come up
	I0717 00:43:44.477108   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:44.477528   23443 main.go:141] libmachine: (ha-029113) DBG | unable to find current IP address of domain ha-029113 in network mk-ha-029113
	I0717 00:43:44.477556   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:44.477486   23466 retry.go:31] will retry after 4.450517174s: waiting for machine to come up
	I0717 00:43:48.932343   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:48.932710   23443 main.go:141] libmachine: (ha-029113) Found IP for machine: 192.168.39.95
	I0717 00:43:48.932738   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has current primary IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:48.932748   23443 main.go:141] libmachine: (ha-029113) Reserving static IP address...
	I0717 00:43:48.933048   23443 main.go:141] libmachine: (ha-029113) DBG | unable to find host DHCP lease matching {name: "ha-029113", mac: "52:54:00:04:d5:10", ip: "192.168.39.95"} in network mk-ha-029113
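The "will retry after ..." lines above follow a poll-with-growing-backoff pattern while waiting for the DHCP lease to appear. A self-contained Go sketch of that pattern (the hasIPAddress callback and the timing constants are illustrative, not minikube's implementation):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls hasIPAddress until it reports an address or the timeout
// expires, sleeping a jittered, roughly doubling interval between attempts,
// similar to the retry intervals in the log above.
func waitForIP(timeout time.Duration, hasIPAddress func() (string, bool)) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := hasIPAddress(); ok {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 4*time.Second {
			backoff *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(30*time.Second, func() (string, bool) {
		attempts++
		return "192.168.39.95", attempts > 3 // pretend the lease shows up on the 4th poll
	})
	fmt.Println(ip, err)
}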
	I0717 00:43:49.000960   23443 main.go:141] libmachine: (ha-029113) DBG | Getting to WaitForSSH function...
	I0717 00:43:49.000990   23443 main.go:141] libmachine: (ha-029113) Reserved static IP address: 192.168.39.95
	I0717 00:43:49.001004   23443 main.go:141] libmachine: (ha-029113) Waiting for SSH to be available...
	I0717 00:43:49.003222   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.003581   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:minikube Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:49.003610   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.003711   23443 main.go:141] libmachine: (ha-029113) DBG | Using SSH client type: external
	I0717 00:43:49.003738   23443 main.go:141] libmachine: (ha-029113) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa (-rw-------)
	I0717 00:43:49.003801   23443 main.go:141] libmachine: (ha-029113) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 00:43:49.003827   23443 main.go:141] libmachine: (ha-029113) DBG | About to run SSH command:
	I0717 00:43:49.003841   23443 main.go:141] libmachine: (ha-029113) DBG | exit 0
	I0717 00:43:49.122787   23443 main.go:141] libmachine: (ha-029113) DBG | SSH cmd err, output: <nil>: 
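The `exit 0` probe above is how SSH availability is confirmed before provisioning continues; the log shows the driver shelling out to /usr/bin/ssh for it. A minimal sketch of the same check using golang.org/x/crypto/ssh (key path and host-key policy are placeholders for illustration):

package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	pemBytes, err := os.ReadFile("id_rsa") // the machine's private key
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(pemBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", "192.168.39.95:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Run the same trivial command the log uses to prove the SSH daemon is up.
	if err := session.Run("exit 0"); err != nil {
		log.Fatal(err)
	}
	log.Println("SSH is available")
}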
	I0717 00:43:49.123082   23443 main.go:141] libmachine: (ha-029113) KVM machine creation complete!
	I0717 00:43:49.123382   23443 main.go:141] libmachine: (ha-029113) Calling .GetConfigRaw
	I0717 00:43:49.123867   23443 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:43:49.124050   23443 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:43:49.124224   23443 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 00:43:49.124237   23443 main.go:141] libmachine: (ha-029113) Calling .GetState
	I0717 00:43:49.125436   23443 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 00:43:49.125451   23443 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 00:43:49.125458   23443 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 00:43:49.125466   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:43:49.127516   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.127838   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:49.127863   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.128020   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:43:49.128182   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:49.128327   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:49.128442   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:43:49.128595   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:43:49.128801   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0717 00:43:49.128813   23443 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 00:43:49.225819   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:43:49.225846   23443 main.go:141] libmachine: Detecting the provisioner...
	I0717 00:43:49.225853   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:43:49.228489   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.228847   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:49.228884   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.228985   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:43:49.229168   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:49.229332   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:49.229488   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:43:49.229640   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:43:49.229857   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0717 00:43:49.229869   23443 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 00:43:49.327057   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 00:43:49.327117   23443 main.go:141] libmachine: found compatible host: buildroot
	I0717 00:43:49.327124   23443 main.go:141] libmachine: Provisioning with buildroot...
	I0717 00:43:49.327131   23443 main.go:141] libmachine: (ha-029113) Calling .GetMachineName
	I0717 00:43:49.327402   23443 buildroot.go:166] provisioning hostname "ha-029113"
	I0717 00:43:49.327424   23443 main.go:141] libmachine: (ha-029113) Calling .GetMachineName
	I0717 00:43:49.327598   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:43:49.330014   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.330293   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:49.330316   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.330473   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:43:49.330644   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:49.330799   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:49.330893   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:43:49.331039   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:43:49.331199   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0717 00:43:49.331210   23443 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-029113 && echo "ha-029113" | sudo tee /etc/hostname
	I0717 00:43:49.440387   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-029113
	
	I0717 00:43:49.440417   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:43:49.443377   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.443768   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:49.443795   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.443922   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:43:49.444082   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:49.444246   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:49.444378   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:43:49.444538   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:43:49.444761   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0717 00:43:49.444778   23443 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-029113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-029113/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-029113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 00:43:49.551255   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:43:49.551292   23443 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 00:43:49.551332   23443 buildroot.go:174] setting up certificates
	I0717 00:43:49.551347   23443 provision.go:84] configureAuth start
	I0717 00:43:49.551363   23443 main.go:141] libmachine: (ha-029113) Calling .GetMachineName
	I0717 00:43:49.551614   23443 main.go:141] libmachine: (ha-029113) Calling .GetIP
	I0717 00:43:49.553979   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.554336   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:49.554356   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.554521   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:43:49.556388   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.556655   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:49.556672   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.556784   23443 provision.go:143] copyHostCerts
	I0717 00:43:49.556822   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 00:43:49.556868   23443 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 00:43:49.556885   23443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 00:43:49.556962   23443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 00:43:49.557078   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 00:43:49.557104   23443 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 00:43:49.557110   23443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 00:43:49.557149   23443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 00:43:49.557222   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 00:43:49.557246   23443 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 00:43:49.557254   23443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 00:43:49.557284   23443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 00:43:49.557360   23443 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.ha-029113 san=[127.0.0.1 192.168.39.95 ha-029113 localhost minikube]
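The server certificate above is issued against the local CA with both IP and DNS SANs. A compact crypto/x509 sketch of that step (illustrative only; it assumes a PEM-encoded RSA CA key in PKCS#1 form, and file names and validity are placeholders):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumption: RSA key in PKCS#1 form
	if err != nil {
		log.Fatal(err)
	}

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-029113"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs: the IPs and hostnames listed in the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.95")},
		DNSNames:    []string{"ha-029113", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}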
	I0717 00:43:49.682206   23443 provision.go:177] copyRemoteCerts
	I0717 00:43:49.682256   23443 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:43:49.682277   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:43:49.684463   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.684771   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:49.684791   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.684987   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:43:49.685185   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:49.685330   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:43:49.685462   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:43:49.764376   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 00:43:49.764434   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 00:43:49.788963   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 00:43:49.789032   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0717 00:43:49.811677   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 00:43:49.811767   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 00:43:49.834236   23443 provision.go:87] duration metric: took 282.873795ms to configureAuth
	I0717 00:43:49.834259   23443 buildroot.go:189] setting minikube options for container-runtime
	I0717 00:43:49.834405   23443 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:43:49.834466   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:43:49.836925   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.837234   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:49.837270   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.837433   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:43:49.837598   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:49.837767   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:49.837874   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:43:49.838017   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:43:49.838176   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0717 00:43:49.838193   23443 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 00:43:50.091148   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 00:43:50.091195   23443 main.go:141] libmachine: Checking connection to Docker...
	I0717 00:43:50.091205   23443 main.go:141] libmachine: (ha-029113) Calling .GetURL
	I0717 00:43:50.092384   23443 main.go:141] libmachine: (ha-029113) DBG | Using libvirt version 6000000
	I0717 00:43:50.094200   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:50.094518   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:50.094567   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:50.094723   23443 main.go:141] libmachine: Docker is up and running!
	I0717 00:43:50.094736   23443 main.go:141] libmachine: Reticulating splines...
	I0717 00:43:50.094743   23443 client.go:171] duration metric: took 20.380279073s to LocalClient.Create
	I0717 00:43:50.094772   23443 start.go:167] duration metric: took 20.380340167s to libmachine.API.Create "ha-029113"
	I0717 00:43:50.094784   23443 start.go:293] postStartSetup for "ha-029113" (driver="kvm2")
	I0717 00:43:50.094798   23443 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 00:43:50.094817   23443 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:43:50.095041   23443 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 00:43:50.095063   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:43:50.096900   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:50.097192   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:50.097217   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:50.097334   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:43:50.097500   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:50.097665   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:43:50.097781   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:43:50.176944   23443 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 00:43:50.181306   23443 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 00:43:50.181335   23443 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 00:43:50.181410   23443 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 00:43:50.181479   23443 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 00:43:50.181488   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> /etc/ssl/certs/112592.pem
	I0717 00:43:50.181568   23443 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 00:43:50.191227   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 00:43:50.220997   23443 start.go:296] duration metric: took 126.177076ms for postStartSetup
	I0717 00:43:50.221057   23443 main.go:141] libmachine: (ha-029113) Calling .GetConfigRaw
	I0717 00:43:50.221589   23443 main.go:141] libmachine: (ha-029113) Calling .GetIP
	I0717 00:43:50.223904   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:50.224228   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:50.224247   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:50.224461   23443 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/config.json ...
	I0717 00:43:50.224644   23443 start.go:128] duration metric: took 20.527614062s to createHost
	I0717 00:43:50.224666   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:43:50.226756   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:50.227035   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:50.227065   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:50.227197   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:43:50.227359   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:50.227510   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:50.227616   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:43:50.227762   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:43:50.227915   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0717 00:43:50.227926   23443 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 00:43:50.323134   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721177030.294684965
	
	I0717 00:43:50.323157   23443 fix.go:216] guest clock: 1721177030.294684965
	I0717 00:43:50.323164   23443 fix.go:229] Guest: 2024-07-17 00:43:50.294684965 +0000 UTC Remote: 2024-07-17 00:43:50.22465597 +0000 UTC m=+20.626931124 (delta=70.028995ms)
	I0717 00:43:50.323181   23443 fix.go:200] guest clock delta is within tolerance: 70.028995ms
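The clock check above parses `date +%s.%N` from the guest and compares it against the host-side timestamp. A small Go sketch of the same comparison, using the values from the log (the 2-second tolerance is an assumed threshold for illustration, not necessarily the one used):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Parsed from `date +%s.%N` on the guest, as in the log above.
	guest := time.Unix(1721177030, 294684965)
	// Host-side timestamp taken just before the command returned.
	remote := time.Date(2024, 7, 17, 0, 43, 50, 224655970, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold for this sketch
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
}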
	I0717 00:43:50.323185   23443 start.go:83] releasing machines lock for "ha-029113", held for 20.626243015s
	I0717 00:43:50.323202   23443 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:43:50.323438   23443 main.go:141] libmachine: (ha-029113) Calling .GetIP
	I0717 00:43:50.325943   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:50.326247   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:50.326270   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:50.326424   23443 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:43:50.326971   23443 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:43:50.327114   23443 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:43:50.327206   23443 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 00:43:50.327251   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:43:50.327310   23443 ssh_runner.go:195] Run: cat /version.json
	I0717 00:43:50.327329   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:43:50.329532   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:50.329612   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:50.329868   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:50.329892   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:50.329921   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:50.329935   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:50.330012   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:43:50.330194   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:50.330223   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:43:50.330360   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:50.330362   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:43:50.330566   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:43:50.330568   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:43:50.330691   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:43:50.403490   23443 ssh_runner.go:195] Run: systemctl --version
	I0717 00:43:50.432889   23443 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 00:43:50.591008   23443 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 00:43:50.597593   23443 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 00:43:50.597675   23443 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:43:50.613254   23443 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 00:43:50.613277   23443 start.go:495] detecting cgroup driver to use...
	I0717 00:43:50.613329   23443 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 00:43:50.629634   23443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 00:43:50.642915   23443 docker.go:217] disabling cri-docker service (if available) ...
	I0717 00:43:50.642960   23443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 00:43:50.655986   23443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 00:43:50.669044   23443 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 00:43:50.787054   23443 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 00:43:50.929244   23443 docker.go:233] disabling docker service ...
	I0717 00:43:50.929296   23443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 00:43:50.943183   23443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 00:43:50.956184   23443 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 00:43:51.091625   23443 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 00:43:51.205309   23443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 00:43:51.220248   23443 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 00:43:51.239678   23443 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 00:43:51.239741   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:43:51.251038   23443 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 00:43:51.251098   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:43:51.262896   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:43:51.273907   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:43:51.284540   23443 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 00:43:51.295275   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:43:51.307215   23443 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:43:51.325827   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:43:51.337698   23443 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 00:43:51.348529   23443 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 00:43:51.348583   23443 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 00:43:51.363155   23443 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 00:43:51.374462   23443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:43:51.494995   23443 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 00:43:51.627737   23443 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 00:43:51.627820   23443 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 00:43:51.632591   23443 start.go:563] Will wait 60s for crictl version
	I0717 00:43:51.632647   23443 ssh_runner.go:195] Run: which crictl
	I0717 00:43:51.636364   23443 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 00:43:51.679301   23443 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 00:43:51.679382   23443 ssh_runner.go:195] Run: crio --version
	I0717 00:43:51.707621   23443 ssh_runner.go:195] Run: crio --version
	I0717 00:43:51.738137   23443 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 00:43:51.739528   23443 main.go:141] libmachine: (ha-029113) Calling .GetIP
	I0717 00:43:51.742125   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:51.742461   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:51.742485   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:51.742721   23443 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 00:43:51.746846   23443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:43:51.759830   23443 kubeadm.go:883] updating cluster {Name:ha-029113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-029113 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 00:43:51.759923   23443 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:43:51.759959   23443 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:43:51.791556   23443 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 00:43:51.791627   23443 ssh_runner.go:195] Run: which lz4
	I0717 00:43:51.795469   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0717 00:43:51.795576   23443 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 00:43:51.799673   23443 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 00:43:51.799699   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 00:43:53.192514   23443 crio.go:462] duration metric: took 1.396967984s to copy over tarball
	I0717 00:43:53.192594   23443 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 00:43:55.283467   23443 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.09084036s)
	I0717 00:43:55.283502   23443 crio.go:469] duration metric: took 2.090961191s to extract the tarball
	I0717 00:43:55.283512   23443 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 00:43:55.320520   23443 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:43:55.362789   23443 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 00:43:55.362814   23443 cache_images.go:84] Images are preloaded, skipping loading
	I0717 00:43:55.362822   23443 kubeadm.go:934] updating node { 192.168.39.95 8443 v1.30.2 crio true true} ...
	I0717 00:43:55.362950   23443 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-029113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-029113 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 00:43:55.363039   23443 ssh_runner.go:195] Run: crio config
	I0717 00:43:55.413791   23443 cni.go:84] Creating CNI manager for ""
	I0717 00:43:55.413813   23443 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0717 00:43:55.413824   23443 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 00:43:55.413851   23443 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.95 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-029113 NodeName:ha-029113 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 00:43:55.414008   23443 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.95
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-029113"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.95"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
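Configs like the one above are rendered from the kubeadm options logged earlier. A toy text/template sketch showing the general approach (the struct and template cover only a few of the fields and are illustrative, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// Only a handful of the options from the log are modelled here.
type kubeadmOpts struct {
	AdvertiseAddress  string
	APIServerPort     int
	ClusterName       string
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress:  "192.168.39.95",
		APIServerPort:     8443,
		ClusterName:       "mk",
		KubernetesVersion: "v1.30.2",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}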
	
	I0717 00:43:55.414037   23443 kube-vip.go:115] generating kube-vip config ...
	I0717 00:43:55.414091   23443 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 00:43:55.430120   23443 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 00:43:55.430234   23443 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
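The manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp at 00:43:55.496473) so the kubelet runs kube-vip as a static pod that announces the HA virtual IP 192.168.39.254 on eth0. A quick, hedged Go sketch of a sanity check that the generated manifest parses as a Pod; the on-node path and the gopkg.in/yaml.v3 dependency are assumptions made for illustration.

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Location where minikube drops the generated static pod manifest on the node.
	data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		panic(err)
	}
	var manifest struct {
		Kind     string `yaml:"kind"`
		Metadata struct {
			Name string `yaml:"name"`
		} `yaml:"metadata"`
	}
	if err := yaml.Unmarshal(data, &manifest); err != nil {
		panic(err)
	}
	fmt.Printf("parsed %s/%s\n", manifest.Kind, manifest.Metadata.Name) // expect Pod/kube-vip
}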
	I0717 00:43:55.430303   23443 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 00:43:55.439877   23443 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 00:43:55.439931   23443 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0717 00:43:55.448975   23443 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0717 00:43:55.464948   23443 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 00:43:55.480422   23443 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0717 00:43:55.496473   23443 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0717 00:43:55.513844   23443 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 00:43:55.518038   23443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:43:55.530981   23443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:43:55.656193   23443 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:43:55.672985   23443 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113 for IP: 192.168.39.95
	I0717 00:43:55.673006   23443 certs.go:194] generating shared ca certs ...
	I0717 00:43:55.673026   23443 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:43:55.673195   23443 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 00:43:55.673247   23443 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 00:43:55.673261   23443 certs.go:256] generating profile certs ...
	I0717 00:43:55.673318   23443 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.key
	I0717 00:43:55.673336   23443 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.crt with IP's: []
	I0717 00:43:55.804202   23443 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.crt ...
	I0717 00:43:55.804230   23443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.crt: {Name:mkaad8f228a6769c319165d4356d6d5b16d56f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:43:55.804396   23443 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.key ...
	I0717 00:43:55.804410   23443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.key: {Name:mkb1b523099783e05b4d547548032d6d46313696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:43:55.804508   23443 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.8898c4fa
	I0717 00:43:55.804526   23443 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.8898c4fa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.95 192.168.39.254]
	I0717 00:43:56.060272   23443 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.8898c4fa ...
	I0717 00:43:56.060300   23443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.8898c4fa: {Name:mk1cada8fdbc736c986089a0c0ad728ff94f64e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:43:56.060469   23443 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.8898c4fa ...
	I0717 00:43:56.060490   23443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.8898c4fa: {Name:mk99ce3174b978eb325285f1a4d20c9add85d0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:43:56.060579   23443 certs.go:381] copying /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.8898c4fa -> /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt
	I0717 00:43:56.060663   23443 certs.go:385] copying /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.8898c4fa -> /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key
	I0717 00:43:56.060714   23443 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.key
	I0717 00:43:56.060730   23443 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.crt with IP's: []
	I0717 00:43:56.226632   23443 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.crt ...
	I0717 00:43:56.226678   23443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.crt: {Name:mkd34e7f758ab0a3926b993b1f8abc99e6f69e10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:43:56.226822   23443 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.key ...
	I0717 00:43:56.226833   23443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.key: {Name:mke656fc7c4f8fcbd8e910a166a066c5be919b98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
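The certs.go/crypto.go lines above generate the profile's "minikube-user", "minikube", and "aggregator" certificates, each signed by the shared minikube CA, with the apiserver cert carrying the IP SANs listed at 00:43:55.804526. A minimal sketch of the same idea using the standard library's crypto/x509; it creates a throwaway CA for illustration, whereas minikube reuses the CA stored under ~/.minikube, and it is not minikube's actual code path.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA standing in for minikubeCA.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Leaf certificate signed by the CA, carrying the IP SANs seen in the log.
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("192.168.39.95"), net.ParseIP("192.168.39.254"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER}))
}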
	I0717 00:43:56.226899   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 00:43:56.226926   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 00:43:56.226946   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 00:43:56.226960   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 00:43:56.226970   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 00:43:56.226984   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 00:43:56.226996   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 00:43:56.227006   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 00:43:56.227048   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 00:43:56.227079   23443 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 00:43:56.227088   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 00:43:56.227108   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 00:43:56.227130   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 00:43:56.227150   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 00:43:56.227185   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 00:43:56.227209   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem -> /usr/share/ca-certificates/11259.pem
	I0717 00:43:56.227223   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> /usr/share/ca-certificates/112592.pem
	I0717 00:43:56.227235   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:43:56.227757   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 00:43:56.253230   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 00:43:56.276650   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 00:43:56.302929   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 00:43:56.328994   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 00:43:56.352938   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 00:43:56.375986   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 00:43:56.399130   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 00:43:56.424959   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 00:43:56.457813   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 00:43:56.483719   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 00:43:56.510891   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 00:43:56.527362   23443 ssh_runner.go:195] Run: openssl version
	I0717 00:43:56.533004   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 00:43:56.543902   23443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 00:43:56.548734   23443 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 00:43:56.548782   23443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 00:43:56.554857   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 00:43:56.566468   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 00:43:56.578154   23443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 00:43:56.582940   23443 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 00:43:56.582997   23443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 00:43:56.588670   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 00:43:56.599501   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 00:43:56.609938   23443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:43:56.614241   23443 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:43:56.614290   23443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:43:56.619757   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 00:43:56.630726   23443 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 00:43:56.635248   23443 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 00:43:56.635308   23443 kubeadm.go:392] StartCluster: {Name:ha-029113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-029113 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:43:56.635418   23443 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 00:43:56.635488   23443 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 00:43:56.681831   23443 cri.go:89] found id: ""
	I0717 00:43:56.681897   23443 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 00:43:56.692128   23443 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 00:43:56.704885   23443 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 00:43:56.716080   23443 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 00:43:56.716100   23443 kubeadm.go:157] found existing configuration files:
	
	I0717 00:43:56.716147   23443 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 00:43:56.725477   23443 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 00:43:56.725541   23443 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 00:43:56.735884   23443 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 00:43:56.745410   23443 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 00:43:56.745460   23443 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 00:43:56.754798   23443 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 00:43:56.763615   23443 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 00:43:56.763668   23443 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 00:43:56.772871   23443 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 00:43:56.781620   23443 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 00:43:56.781668   23443 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 00:43:56.790667   23443 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 00:43:56.890978   23443 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 00:43:56.891081   23443 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 00:43:57.019005   23443 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 00:43:57.019160   23443 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 00:43:57.019320   23443 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 00:43:57.248022   23443 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 00:43:57.347304   23443 out.go:204]   - Generating certificates and keys ...
	I0717 00:43:57.347402   23443 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 00:43:57.347495   23443 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 00:43:57.347565   23443 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 00:43:57.454502   23443 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 00:43:57.512789   23443 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 00:43:57.603687   23443 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 00:43:57.721136   23443 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 00:43:57.721275   23443 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-029113 localhost] and IPs [192.168.39.95 127.0.0.1 ::1]
	I0717 00:43:57.867674   23443 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 00:43:57.867819   23443 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-029113 localhost] and IPs [192.168.39.95 127.0.0.1 ::1]
	I0717 00:43:58.019368   23443 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 00:43:58.215990   23443 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 00:43:58.306221   23443 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 00:43:58.306316   23443 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 00:43:58.385599   23443 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 00:43:58.716664   23443 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 00:43:59.138773   23443 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 00:43:59.443407   23443 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 00:43:59.523429   23443 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 00:43:59.523961   23443 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 00:43:59.526339   23443 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 00:43:59.529334   23443 out.go:204]   - Booting up control plane ...
	I0717 00:43:59.529447   23443 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 00:43:59.529556   23443 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 00:43:59.529647   23443 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 00:43:59.544877   23443 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 00:43:59.545872   23443 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 00:43:59.545934   23443 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 00:43:59.667902   23443 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 00:43:59.668006   23443 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 00:44:00.669499   23443 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002393794s
	I0717 00:44:00.669627   23443 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 00:44:06.460981   23443 kubeadm.go:310] [api-check] The API server is healthy after 5.795437316s
	I0717 00:44:06.474308   23443 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 00:44:06.491501   23443 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 00:44:07.015909   23443 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 00:44:07.016365   23443 kubeadm.go:310] [mark-control-plane] Marking the node ha-029113 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 00:44:07.030474   23443 kubeadm.go:310] [bootstrap-token] Using token: obton2.k2oggi6v8c13i9u1
	I0717 00:44:07.032016   23443 out.go:204]   - Configuring RBAC rules ...
	I0717 00:44:07.032136   23443 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 00:44:07.047364   23443 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 00:44:07.059970   23443 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 00:44:07.063616   23443 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 00:44:07.066683   23443 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 00:44:07.069722   23443 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 00:44:07.085350   23443 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 00:44:07.328276   23443 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 00:44:07.869315   23443 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 00:44:07.870449   23443 kubeadm.go:310] 
	I0717 00:44:07.870530   23443 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 00:44:07.870567   23443 kubeadm.go:310] 
	I0717 00:44:07.870649   23443 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 00:44:07.870661   23443 kubeadm.go:310] 
	I0717 00:44:07.870694   23443 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 00:44:07.870771   23443 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 00:44:07.870857   23443 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 00:44:07.870878   23443 kubeadm.go:310] 
	I0717 00:44:07.870955   23443 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 00:44:07.870965   23443 kubeadm.go:310] 
	I0717 00:44:07.871037   23443 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 00:44:07.871046   23443 kubeadm.go:310] 
	I0717 00:44:07.871101   23443 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 00:44:07.871219   23443 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 00:44:07.871323   23443 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 00:44:07.871332   23443 kubeadm.go:310] 
	I0717 00:44:07.871433   23443 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 00:44:07.871546   23443 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 00:44:07.871558   23443 kubeadm.go:310] 
	I0717 00:44:07.871699   23443 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token obton2.k2oggi6v8c13i9u1 \
	I0717 00:44:07.871850   23443 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c106fa53fe39228066a4d6d0a3d1523262a277fcc4b4de2aef480ed92843f134 \
	I0717 00:44:07.871885   23443 kubeadm.go:310] 	--control-plane 
	I0717 00:44:07.871895   23443 kubeadm.go:310] 
	I0717 00:44:07.872010   23443 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 00:44:07.872026   23443 kubeadm.go:310] 
	I0717 00:44:07.872114   23443 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token obton2.k2oggi6v8c13i9u1 \
	I0717 00:44:07.872234   23443 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c106fa53fe39228066a4d6d0a3d1523262a277fcc4b4de2aef480ed92843f134 
	I0717 00:44:07.872598   23443 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
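The --discovery-token-ca-cert-hash printed in the join commands above is a SHA-256 pin of the cluster CA's public key (its DER-encoded SubjectPublicKeyInfo). A small Go sketch that recomputes it from the CA certificate; the path is the one used on the minikube node and is assumed here for illustration.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Cluster CA as laid down by minikube before kubeadm init.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm pins the SHA-256 of the DER-encoded SubjectPublicKeyInfo.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}

Running this against the CA on the node should reproduce the sha256:c106fa53... value shown in the join commands.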
	I0717 00:44:07.872631   23443 cni.go:84] Creating CNI manager for ""
	I0717 00:44:07.872640   23443 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0717 00:44:07.874487   23443 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 00:44:07.875819   23443 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 00:44:07.881431   23443 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0717 00:44:07.881446   23443 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 00:44:07.900692   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 00:44:08.266080   23443 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 00:44:08.266166   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:08.266166   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-029113 minikube.k8s.io/updated_at=2024_07_17T00_44_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185 minikube.k8s.io/name=ha-029113 minikube.k8s.io/primary=true
	I0717 00:44:08.296833   23443 ops.go:34] apiserver oom_adj: -16
	I0717 00:44:08.400872   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:08.901716   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:09.401209   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:09.901018   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:10.401927   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:10.901326   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:11.401637   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:11.901433   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:12.401566   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:12.901145   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:13.401527   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:13.901707   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:14.401171   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:14.901442   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:15.401081   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:15.901648   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:16.400943   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:16.901912   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:17.401787   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:17.901606   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:18.400906   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:18.901072   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:19.401069   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:19.900893   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:20.401221   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:20.579996   23443 kubeadm.go:1113] duration metric: took 12.313890698s to wait for elevateKubeSystemPrivileges
	I0717 00:44:20.580025   23443 kubeadm.go:394] duration metric: took 23.944721508s to StartCluster
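The repeated "kubectl get sa default" runs above are minikube polling until the default service account exists, the elevateKubeSystemPrivileges wait that the duration metric reports as 12.3s. A hedged client-go sketch of an equivalent wait loop; the kubeconfig path, timeout, and poll interval are illustrative rather than minikube's actual values.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// On the node minikube talks to the cluster via /var/lib/minikube/kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	for {
		// The default service account appearing signals that kube-controller-manager
		// has finished bootstrapping the namespace.
		if _, err := client.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{}); err == nil {
			fmt.Println("default service account is present")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for default service account")
		case <-time.After(500 * time.Millisecond):
		}
	}
}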
	I0717 00:44:20.580071   23443 settings.go:142] acquiring lock: {Name:mk284e98550e98148329d8d8958a44beebf5635c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:44:20.580158   23443 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 00:44:20.580921   23443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:44:20.581135   23443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 00:44:20.581164   23443 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:44:20.581191   23443 start.go:241] waiting for startup goroutines ...
	I0717 00:44:20.581195   23443 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 00:44:20.581284   23443 addons.go:69] Setting storage-provisioner=true in profile "ha-029113"
	I0717 00:44:20.581320   23443 addons.go:234] Setting addon storage-provisioner=true in "ha-029113"
	I0717 00:44:20.581334   23443 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:44:20.581384   23443 addons.go:69] Setting default-storageclass=true in profile "ha-029113"
	I0717 00:44:20.581390   23443 host.go:66] Checking if "ha-029113" exists ...
	I0717 00:44:20.581422   23443 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-029113"
	I0717 00:44:20.581900   23443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:44:20.581928   23443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:44:20.581934   23443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:44:20.581959   23443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:44:20.597289   23443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39129
	I0717 00:44:20.597294   23443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35225
	I0717 00:44:20.597847   23443 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:44:20.597855   23443 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:44:20.598376   23443 main.go:141] libmachine: Using API Version  1
	I0717 00:44:20.598395   23443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:44:20.598564   23443 main.go:141] libmachine: Using API Version  1
	I0717 00:44:20.598585   23443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:44:20.598775   23443 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:44:20.598931   23443 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:44:20.599006   23443 main.go:141] libmachine: (ha-029113) Calling .GetState
	I0717 00:44:20.599491   23443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:44:20.599535   23443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:44:20.601206   23443 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 00:44:20.601426   23443 kapi.go:59] client config for ha-029113: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.crt", KeyFile:"/home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.key", CAFile:"/home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d01f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 00:44:20.601991   23443 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 00:44:20.602098   23443 addons.go:234] Setting addon default-storageclass=true in "ha-029113"
	I0717 00:44:20.602127   23443 host.go:66] Checking if "ha-029113" exists ...
	I0717 00:44:20.602358   23443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:44:20.602386   23443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:44:20.614727   23443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41181
	I0717 00:44:20.615263   23443 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:44:20.615830   23443 main.go:141] libmachine: Using API Version  1
	I0717 00:44:20.615855   23443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:44:20.616213   23443 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:44:20.616416   23443 main.go:141] libmachine: (ha-029113) Calling .GetState
	I0717 00:44:20.616795   23443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43229
	I0717 00:44:20.617330   23443 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:44:20.617801   23443 main.go:141] libmachine: Using API Version  1
	I0717 00:44:20.617818   23443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:44:20.618259   23443 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:44:20.618262   23443 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:44:20.618860   23443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:44:20.618899   23443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:44:20.620026   23443 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 00:44:20.621619   23443 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:44:20.621643   23443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 00:44:20.621669   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:44:20.624841   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:44:20.625333   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:44:20.625357   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:44:20.625516   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:44:20.625713   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:44:20.625875   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:44:20.626037   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:44:20.634274   23443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46303
	I0717 00:44:20.634621   23443 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:44:20.635059   23443 main.go:141] libmachine: Using API Version  1
	I0717 00:44:20.635076   23443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:44:20.635431   23443 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:44:20.635599   23443 main.go:141] libmachine: (ha-029113) Calling .GetState
	I0717 00:44:20.636995   23443 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:44:20.637191   23443 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 00:44:20.637206   23443 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 00:44:20.637229   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:44:20.640007   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:44:20.640333   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:44:20.640353   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:44:20.640524   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:44:20.640691   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:44:20.640820   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:44:20.640943   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:44:20.792736   23443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:44:20.796311   23443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 00:44:20.796951   23443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
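The bash pipeline above edits CoreDNS's Corefile so that host.minikube.internal resolves to the host-only gateway 192.168.39.1 (confirmed by the "host record injected" line that follows). A hedged client-go sketch of the same ConfigMap edit; the kubeconfig path and the string manipulation are illustrative and do not mirror minikube's sed-based implementation exactly.

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	hosts := "        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }\n"
	corefile := cm.Data["Corefile"]
	if !strings.Contains(corefile, "host.minikube.internal") {
		// Insert the hosts block just before the forward plugin, as the sed above does.
		cm.Data["Corefile"] = strings.Replace(corefile, "forward .", hosts+"        forward .", 1)
		if _, err := client.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	fmt.Println("CoreDNS Corefile contains a host.minikube.internal entry")
}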
	I0717 00:44:21.485321   23443 main.go:141] libmachine: Making call to close driver server
	I0717 00:44:21.485341   23443 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0717 00:44:21.485351   23443 main.go:141] libmachine: (ha-029113) Calling .Close
	I0717 00:44:21.485432   23443 main.go:141] libmachine: Making call to close driver server
	I0717 00:44:21.485451   23443 main.go:141] libmachine: (ha-029113) Calling .Close
	I0717 00:44:21.485622   23443 main.go:141] libmachine: (ha-029113) DBG | Closing plugin on server side
	I0717 00:44:21.485663   23443 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:44:21.485665   23443 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:44:21.485670   23443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:44:21.485669   23443 main.go:141] libmachine: (ha-029113) DBG | Closing plugin on server side
	I0717 00:44:21.485674   23443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:44:21.485677   23443 main.go:141] libmachine: Making call to close driver server
	I0717 00:44:21.485684   23443 main.go:141] libmachine: Making call to close driver server
	I0717 00:44:21.485694   23443 main.go:141] libmachine: (ha-029113) Calling .Close
	I0717 00:44:21.485685   23443 main.go:141] libmachine: (ha-029113) Calling .Close
	I0717 00:44:21.485953   23443 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:44:21.485967   23443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:44:21.486100   23443 main.go:141] libmachine: (ha-029113) DBG | Closing plugin on server side
	I0717 00:44:21.486103   23443 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0717 00:44:21.486121   23443 round_trippers.go:469] Request Headers:
	I0717 00:44:21.486133   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:44:21.486147   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:44:21.486160   23443 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:44:21.486187   23443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:44:21.498880   23443 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0717 00:44:21.499627   23443 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0717 00:44:21.499646   23443 round_trippers.go:469] Request Headers:
	I0717 00:44:21.499657   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:44:21.499665   23443 round_trippers.go:473]     Content-Type: application/json
	I0717 00:44:21.499674   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:44:21.502894   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:44:21.503135   23443 main.go:141] libmachine: Making call to close driver server
	I0717 00:44:21.503156   23443 main.go:141] libmachine: (ha-029113) Calling .Close
	I0717 00:44:21.503406   23443 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:44:21.503463   23443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:44:21.503433   23443 main.go:141] libmachine: (ha-029113) DBG | Closing plugin on server side
	I0717 00:44:21.505002   23443 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 00:44:21.506268   23443 addons.go:510] duration metric: took 925.075935ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0717 00:44:21.506300   23443 start.go:246] waiting for cluster config update ...
	I0717 00:44:21.506313   23443 start.go:255] writing updated cluster config ...
	I0717 00:44:21.507911   23443 out.go:177] 
	I0717 00:44:21.509205   23443 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:44:21.509268   23443 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/config.json ...
	I0717 00:44:21.510815   23443 out.go:177] * Starting "ha-029113-m02" control-plane node in "ha-029113" cluster
	I0717 00:44:21.512134   23443 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:44:21.512152   23443 cache.go:56] Caching tarball of preloaded images
	I0717 00:44:21.512247   23443 preload.go:172] Found /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 00:44:21.512260   23443 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:44:21.512317   23443 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/config.json ...
	I0717 00:44:21.512452   23443 start.go:360] acquireMachinesLock for ha-029113-m02: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 00:44:21.512490   23443 start.go:364] duration metric: took 20.915µs to acquireMachinesLock for "ha-029113-m02"
	I0717 00:44:21.512512   23443 start.go:93] Provisioning new machine with config: &{Name:ha-029113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.2 ClusterName:ha-029113 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:44:21.512578   23443 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0717 00:44:21.513984   23443 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 00:44:21.514056   23443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:44:21.514083   23443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:44:21.528451   23443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40571
	I0717 00:44:21.528886   23443 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:44:21.529301   23443 main.go:141] libmachine: Using API Version  1
	I0717 00:44:21.529313   23443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:44:21.529577   23443 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:44:21.529751   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetMachineName
	I0717 00:44:21.529917   23443 main.go:141] libmachine: (ha-029113-m02) Calling .DriverName
	I0717 00:44:21.530055   23443 start.go:159] libmachine.API.Create for "ha-029113" (driver="kvm2")
	I0717 00:44:21.530084   23443 client.go:168] LocalClient.Create starting
	I0717 00:44:21.530116   23443 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem
	I0717 00:44:21.530153   23443 main.go:141] libmachine: Decoding PEM data...
	I0717 00:44:21.530173   23443 main.go:141] libmachine: Parsing certificate...
	I0717 00:44:21.530248   23443 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem
	I0717 00:44:21.530276   23443 main.go:141] libmachine: Decoding PEM data...
	I0717 00:44:21.530294   23443 main.go:141] libmachine: Parsing certificate...
	I0717 00:44:21.530320   23443 main.go:141] libmachine: Running pre-create checks...
	I0717 00:44:21.530331   23443 main.go:141] libmachine: (ha-029113-m02) Calling .PreCreateCheck
	I0717 00:44:21.530479   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetConfigRaw
	I0717 00:44:21.530835   23443 main.go:141] libmachine: Creating machine...
	I0717 00:44:21.530849   23443 main.go:141] libmachine: (ha-029113-m02) Calling .Create
	I0717 00:44:21.531028   23443 main.go:141] libmachine: (ha-029113-m02) Creating KVM machine...
	I0717 00:44:21.532150   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found existing default KVM network
	I0717 00:44:21.532268   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found existing private KVM network mk-ha-029113
	I0717 00:44:21.532406   23443 main.go:141] libmachine: (ha-029113-m02) Setting up store path in /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02 ...
	I0717 00:44:21.532429   23443 main.go:141] libmachine: (ha-029113-m02) Building disk image from file:///home/jenkins/minikube-integration/19264-3908/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 00:44:21.532470   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:21.532393   23838 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 00:44:21.532543   23443 main.go:141] libmachine: (ha-029113-m02) Downloading /home/jenkins/minikube-integration/19264-3908/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19264-3908/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 00:44:21.765492   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:21.765372   23838 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02/id_rsa...
	I0717 00:44:21.922150   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:21.922049   23838 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02/ha-029113-m02.rawdisk...
	I0717 00:44:21.922172   23443 main.go:141] libmachine: (ha-029113-m02) DBG | Writing magic tar header
	I0717 00:44:21.922181   23443 main.go:141] libmachine: (ha-029113-m02) DBG | Writing SSH key tar header
	I0717 00:44:21.922240   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:21.922175   23838 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02 ...
	I0717 00:44:21.922295   23443 main.go:141] libmachine: (ha-029113-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02
	I0717 00:44:21.922312   23443 main.go:141] libmachine: (ha-029113-m02) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02 (perms=drwx------)
	I0717 00:44:21.922339   23443 main.go:141] libmachine: (ha-029113-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube/machines
	I0717 00:44:21.922354   23443 main.go:141] libmachine: (ha-029113-m02) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube/machines (perms=drwxr-xr-x)
	I0717 00:44:21.922366   23443 main.go:141] libmachine: (ha-029113-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 00:44:21.922378   23443 main.go:141] libmachine: (ha-029113-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908
	I0717 00:44:21.922386   23443 main.go:141] libmachine: (ha-029113-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 00:44:21.922395   23443 main.go:141] libmachine: (ha-029113-m02) DBG | Checking permissions on dir: /home/jenkins
	I0717 00:44:21.922400   23443 main.go:141] libmachine: (ha-029113-m02) DBG | Checking permissions on dir: /home
	I0717 00:44:21.922412   23443 main.go:141] libmachine: (ha-029113-m02) DBG | Skipping /home - not owner
	I0717 00:44:21.922435   23443 main.go:141] libmachine: (ha-029113-m02) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube (perms=drwxr-xr-x)
	I0717 00:44:21.922457   23443 main.go:141] libmachine: (ha-029113-m02) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908 (perms=drwxrwxr-x)
	I0717 00:44:21.922477   23443 main.go:141] libmachine: (ha-029113-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 00:44:21.922493   23443 main.go:141] libmachine: (ha-029113-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 00:44:21.922509   23443 main.go:141] libmachine: (ha-029113-m02) Creating domain...
	I0717 00:44:21.923553   23443 main.go:141] libmachine: (ha-029113-m02) define libvirt domain using xml: 
	I0717 00:44:21.923570   23443 main.go:141] libmachine: (ha-029113-m02) <domain type='kvm'>
	I0717 00:44:21.923580   23443 main.go:141] libmachine: (ha-029113-m02)   <name>ha-029113-m02</name>
	I0717 00:44:21.923588   23443 main.go:141] libmachine: (ha-029113-m02)   <memory unit='MiB'>2200</memory>
	I0717 00:44:21.923599   23443 main.go:141] libmachine: (ha-029113-m02)   <vcpu>2</vcpu>
	I0717 00:44:21.923609   23443 main.go:141] libmachine: (ha-029113-m02)   <features>
	I0717 00:44:21.923618   23443 main.go:141] libmachine: (ha-029113-m02)     <acpi/>
	I0717 00:44:21.923628   23443 main.go:141] libmachine: (ha-029113-m02)     <apic/>
	I0717 00:44:21.923637   23443 main.go:141] libmachine: (ha-029113-m02)     <pae/>
	I0717 00:44:21.923647   23443 main.go:141] libmachine: (ha-029113-m02)     
	I0717 00:44:21.923653   23443 main.go:141] libmachine: (ha-029113-m02)   </features>
	I0717 00:44:21.923663   23443 main.go:141] libmachine: (ha-029113-m02)   <cpu mode='host-passthrough'>
	I0717 00:44:21.923690   23443 main.go:141] libmachine: (ha-029113-m02)   
	I0717 00:44:21.923711   23443 main.go:141] libmachine: (ha-029113-m02)   </cpu>
	I0717 00:44:21.923721   23443 main.go:141] libmachine: (ha-029113-m02)   <os>
	I0717 00:44:21.923730   23443 main.go:141] libmachine: (ha-029113-m02)     <type>hvm</type>
	I0717 00:44:21.923739   23443 main.go:141] libmachine: (ha-029113-m02)     <boot dev='cdrom'/>
	I0717 00:44:21.923750   23443 main.go:141] libmachine: (ha-029113-m02)     <boot dev='hd'/>
	I0717 00:44:21.923771   23443 main.go:141] libmachine: (ha-029113-m02)     <bootmenu enable='no'/>
	I0717 00:44:21.923784   23443 main.go:141] libmachine: (ha-029113-m02)   </os>
	I0717 00:44:21.923794   23443 main.go:141] libmachine: (ha-029113-m02)   <devices>
	I0717 00:44:21.923804   23443 main.go:141] libmachine: (ha-029113-m02)     <disk type='file' device='cdrom'>
	I0717 00:44:21.923820   23443 main.go:141] libmachine: (ha-029113-m02)       <source file='/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02/boot2docker.iso'/>
	I0717 00:44:21.923831   23443 main.go:141] libmachine: (ha-029113-m02)       <target dev='hdc' bus='scsi'/>
	I0717 00:44:21.923843   23443 main.go:141] libmachine: (ha-029113-m02)       <readonly/>
	I0717 00:44:21.923854   23443 main.go:141] libmachine: (ha-029113-m02)     </disk>
	I0717 00:44:21.923866   23443 main.go:141] libmachine: (ha-029113-m02)     <disk type='file' device='disk'>
	I0717 00:44:21.923877   23443 main.go:141] libmachine: (ha-029113-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 00:44:21.923890   23443 main.go:141] libmachine: (ha-029113-m02)       <source file='/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02/ha-029113-m02.rawdisk'/>
	I0717 00:44:21.923901   23443 main.go:141] libmachine: (ha-029113-m02)       <target dev='hda' bus='virtio'/>
	I0717 00:44:21.923913   23443 main.go:141] libmachine: (ha-029113-m02)     </disk>
	I0717 00:44:21.923923   23443 main.go:141] libmachine: (ha-029113-m02)     <interface type='network'>
	I0717 00:44:21.923932   23443 main.go:141] libmachine: (ha-029113-m02)       <source network='mk-ha-029113'/>
	I0717 00:44:21.923944   23443 main.go:141] libmachine: (ha-029113-m02)       <model type='virtio'/>
	I0717 00:44:21.923955   23443 main.go:141] libmachine: (ha-029113-m02)     </interface>
	I0717 00:44:21.923964   23443 main.go:141] libmachine: (ha-029113-m02)     <interface type='network'>
	I0717 00:44:21.923970   23443 main.go:141] libmachine: (ha-029113-m02)       <source network='default'/>
	I0717 00:44:21.923980   23443 main.go:141] libmachine: (ha-029113-m02)       <model type='virtio'/>
	I0717 00:44:21.923991   23443 main.go:141] libmachine: (ha-029113-m02)     </interface>
	I0717 00:44:21.923999   23443 main.go:141] libmachine: (ha-029113-m02)     <serial type='pty'>
	I0717 00:44:21.924019   23443 main.go:141] libmachine: (ha-029113-m02)       <target port='0'/>
	I0717 00:44:21.924037   23443 main.go:141] libmachine: (ha-029113-m02)     </serial>
	I0717 00:44:21.924050   23443 main.go:141] libmachine: (ha-029113-m02)     <console type='pty'>
	I0717 00:44:21.924061   23443 main.go:141] libmachine: (ha-029113-m02)       <target type='serial' port='0'/>
	I0717 00:44:21.924076   23443 main.go:141] libmachine: (ha-029113-m02)     </console>
	I0717 00:44:21.924087   23443 main.go:141] libmachine: (ha-029113-m02)     <rng model='virtio'>
	I0717 00:44:21.924096   23443 main.go:141] libmachine: (ha-029113-m02)       <backend model='random'>/dev/random</backend>
	I0717 00:44:21.924106   23443 main.go:141] libmachine: (ha-029113-m02)     </rng>
	I0717 00:44:21.924115   23443 main.go:141] libmachine: (ha-029113-m02)     
	I0717 00:44:21.924121   23443 main.go:141] libmachine: (ha-029113-m02)     
	I0717 00:44:21.924128   23443 main.go:141] libmachine: (ha-029113-m02)   </devices>
	I0717 00:44:21.924134   23443 main.go:141] libmachine: (ha-029113-m02) </domain>
	I0717 00:44:21.924140   23443 main.go:141] libmachine: (ha-029113-m02) 
	I0717 00:44:21.930425   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:a0:6d:db in network default
	I0717 00:44:21.930927   23443 main.go:141] libmachine: (ha-029113-m02) Ensuring networks are active...
	I0717 00:44:21.930944   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:21.931531   23443 main.go:141] libmachine: (ha-029113-m02) Ensuring network default is active
	I0717 00:44:21.931817   23443 main.go:141] libmachine: (ha-029113-m02) Ensuring network mk-ha-029113 is active
	I0717 00:44:21.932164   23443 main.go:141] libmachine: (ha-029113-m02) Getting domain xml...
	I0717 00:44:21.932753   23443 main.go:141] libmachine: (ha-029113-m02) Creating domain...
	I0717 00:44:23.126388   23443 main.go:141] libmachine: (ha-029113-m02) Waiting to get IP...
	I0717 00:44:23.127189   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:23.127582   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find current IP address of domain ha-029113-m02 in network mk-ha-029113
	I0717 00:44:23.127605   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:23.127566   23838 retry.go:31] will retry after 306.500754ms: waiting for machine to come up
	I0717 00:44:23.436071   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:23.436493   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find current IP address of domain ha-029113-m02 in network mk-ha-029113
	I0717 00:44:23.436520   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:23.436452   23838 retry.go:31] will retry after 297.727134ms: waiting for machine to come up
	I0717 00:44:23.735908   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:23.736335   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find current IP address of domain ha-029113-m02 in network mk-ha-029113
	I0717 00:44:23.736363   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:23.736293   23838 retry.go:31] will retry after 313.394137ms: waiting for machine to come up
	I0717 00:44:24.051746   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:24.052195   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find current IP address of domain ha-029113-m02 in network mk-ha-029113
	I0717 00:44:24.052223   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:24.052166   23838 retry.go:31] will retry after 561.781093ms: waiting for machine to come up
	I0717 00:44:24.615446   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:24.615952   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find current IP address of domain ha-029113-m02 in network mk-ha-029113
	I0717 00:44:24.615975   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:24.615908   23838 retry.go:31] will retry after 656.549737ms: waiting for machine to come up
	I0717 00:44:25.273656   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:25.273998   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find current IP address of domain ha-029113-m02 in network mk-ha-029113
	I0717 00:44:25.274019   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:25.273966   23838 retry.go:31] will retry after 750.278987ms: waiting for machine to come up
	I0717 00:44:26.025760   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:26.026236   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find current IP address of domain ha-029113-m02 in network mk-ha-029113
	I0717 00:44:26.026257   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:26.026209   23838 retry.go:31] will retry after 963.408722ms: waiting for machine to come up
	I0717 00:44:26.991510   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:26.991951   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find current IP address of domain ha-029113-m02 in network mk-ha-029113
	I0717 00:44:26.992003   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:26.991922   23838 retry.go:31] will retry after 968.074979ms: waiting for machine to come up
	I0717 00:44:27.961278   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:27.961695   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find current IP address of domain ha-029113-m02 in network mk-ha-029113
	I0717 00:44:27.961730   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:27.961649   23838 retry.go:31] will retry after 1.855272264s: waiting for machine to come up
	I0717 00:44:29.819666   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:29.820060   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find current IP address of domain ha-029113-m02 in network mk-ha-029113
	I0717 00:44:29.820104   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:29.820014   23838 retry.go:31] will retry after 1.882719972s: waiting for machine to come up
	I0717 00:44:31.704098   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:31.704494   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find current IP address of domain ha-029113-m02 in network mk-ha-029113
	I0717 00:44:31.704523   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:31.704445   23838 retry.go:31] will retry after 2.138087395s: waiting for machine to come up
	I0717 00:44:33.843885   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:33.844361   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find current IP address of domain ha-029113-m02 in network mk-ha-029113
	I0717 00:44:33.844378   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:33.844328   23838 retry.go:31] will retry after 2.441061484s: waiting for machine to come up
	I0717 00:44:36.288764   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:36.289090   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find current IP address of domain ha-029113-m02 in network mk-ha-029113
	I0717 00:44:36.289114   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:36.289064   23838 retry.go:31] will retry after 2.940582098s: waiting for machine to come up
	I0717 00:44:39.233237   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:39.233595   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find current IP address of domain ha-029113-m02 in network mk-ha-029113
	I0717 00:44:39.233619   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:39.233567   23838 retry.go:31] will retry after 5.314621397s: waiting for machine to come up
	I0717 00:44:44.549835   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:44.550210   23443 main.go:141] libmachine: (ha-029113-m02) Found IP for machine: 192.168.39.166
	I0717 00:44:44.550236   23443 main.go:141] libmachine: (ha-029113-m02) Reserving static IP address...
	I0717 00:44:44.550250   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has current primary IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:44.550599   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find host DHCP lease matching {name: "ha-029113-m02", mac: "52:54:00:57:08:5b", ip: "192.168.39.166"} in network mk-ha-029113
	I0717 00:44:44.619403   23443 main.go:141] libmachine: (ha-029113-m02) Reserved static IP address: 192.168.39.166
	I0717 00:44:44.619427   23443 main.go:141] libmachine: (ha-029113-m02) Waiting for SSH to be available...
	I0717 00:44:44.619436   23443 main.go:141] libmachine: (ha-029113-m02) DBG | Getting to WaitForSSH function...
	I0717 00:44:44.621871   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:44.622215   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:minikube Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:44.622241   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:44.622389   23443 main.go:141] libmachine: (ha-029113-m02) DBG | Using SSH client type: external
	I0717 00:44:44.622414   23443 main.go:141] libmachine: (ha-029113-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02/id_rsa (-rw-------)
	I0717 00:44:44.622442   23443 main.go:141] libmachine: (ha-029113-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.166 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 00:44:44.622455   23443 main.go:141] libmachine: (ha-029113-m02) DBG | About to run SSH command:
	I0717 00:44:44.622467   23443 main.go:141] libmachine: (ha-029113-m02) DBG | exit 0
	I0717 00:44:44.754376   23443 main.go:141] libmachine: (ha-029113-m02) DBG | SSH cmd err, output: <nil>: 
	I0717 00:44:44.754594   23443 main.go:141] libmachine: (ha-029113-m02) KVM machine creation complete!
	I0717 00:44:44.754938   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetConfigRaw
	I0717 00:44:44.755465   23443 main.go:141] libmachine: (ha-029113-m02) Calling .DriverName
	I0717 00:44:44.755620   23443 main.go:141] libmachine: (ha-029113-m02) Calling .DriverName
	I0717 00:44:44.755740   23443 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 00:44:44.755753   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetState
	I0717 00:44:44.757016   23443 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 00:44:44.757026   23443 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 00:44:44.757033   23443 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 00:44:44.757038   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	I0717 00:44:44.759322   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:44.759651   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:44.759678   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:44.759829   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHPort
	I0717 00:44:44.760022   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:44.760202   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:44.760352   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHUsername
	I0717 00:44:44.760520   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:44:44.760749   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.166 22 <nil> <nil>}
	I0717 00:44:44.760761   23443 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 00:44:44.873910   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:44:44.873931   23443 main.go:141] libmachine: Detecting the provisioner...
	I0717 00:44:44.873937   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	I0717 00:44:44.876534   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:44.876879   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:44.876904   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:44.877036   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHPort
	I0717 00:44:44.877219   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:44.877369   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:44.877502   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHUsername
	I0717 00:44:44.877652   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:44:44.877812   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.166 22 <nil> <nil>}
	I0717 00:44:44.877822   23443 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 00:44:44.991371   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 00:44:44.991435   23443 main.go:141] libmachine: found compatible host: buildroot
	I0717 00:44:44.991444   23443 main.go:141] libmachine: Provisioning with buildroot...
	I0717 00:44:44.991456   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetMachineName
	I0717 00:44:44.991672   23443 buildroot.go:166] provisioning hostname "ha-029113-m02"
	I0717 00:44:44.991701   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetMachineName
	I0717 00:44:44.991897   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	I0717 00:44:44.994066   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:44.994457   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:44.994482   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:44.994602   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHPort
	I0717 00:44:44.994757   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:44.994909   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:44.995065   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHUsername
	I0717 00:44:44.995201   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:44:44.995355   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.166 22 <nil> <nil>}
	I0717 00:44:44.995366   23443 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-029113-m02 && echo "ha-029113-m02" | sudo tee /etc/hostname
	I0717 00:44:45.121167   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-029113-m02
	
	I0717 00:44:45.121194   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	I0717 00:44:45.123822   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.124130   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:45.124151   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.124376   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHPort
	I0717 00:44:45.124579   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:45.124736   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:45.124907   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHUsername
	I0717 00:44:45.125056   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:44:45.125227   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.166 22 <nil> <nil>}
	I0717 00:44:45.125248   23443 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-029113-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-029113-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-029113-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 00:44:45.247111   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:44:45.247142   23443 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 00:44:45.247156   23443 buildroot.go:174] setting up certificates
	I0717 00:44:45.247166   23443 provision.go:84] configureAuth start
	I0717 00:44:45.247174   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetMachineName
	I0717 00:44:45.247435   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetIP
	I0717 00:44:45.249911   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.250229   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:45.250248   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.250396   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	I0717 00:44:45.252384   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.252705   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:45.252731   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.252831   23443 provision.go:143] copyHostCerts
	I0717 00:44:45.252867   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 00:44:45.252906   23443 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 00:44:45.252920   23443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 00:44:45.253000   23443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 00:44:45.253079   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 00:44:45.253096   23443 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 00:44:45.253103   23443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 00:44:45.253127   23443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 00:44:45.253170   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 00:44:45.253191   23443 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 00:44:45.253199   23443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 00:44:45.253231   23443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 00:44:45.253298   23443 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.ha-029113-m02 san=[127.0.0.1 192.168.39.166 ha-029113-m02 localhost minikube]
	I0717 00:44:45.367486   23443 provision.go:177] copyRemoteCerts
	I0717 00:44:45.367538   23443 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:44:45.367560   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	I0717 00:44:45.370013   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.370345   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:45.370381   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.370536   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHPort
	I0717 00:44:45.370734   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:45.370903   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHUsername
	I0717 00:44:45.371017   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02/id_rsa Username:docker}
	I0717 00:44:45.461167   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 00:44:45.461229   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 00:44:45.485049   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 00:44:45.485112   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 00:44:45.508303   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 00:44:45.508387   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 00:44:45.531564   23443 provision.go:87] duration metric: took 284.384948ms to configureAuth
	I0717 00:44:45.531592   23443 buildroot.go:189] setting minikube options for container-runtime
	I0717 00:44:45.531797   23443 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:44:45.531875   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	I0717 00:44:45.534512   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.534941   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:45.534970   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.535160   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHPort
	I0717 00:44:45.535346   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:45.535524   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:45.535686   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHUsername
	I0717 00:44:45.535844   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:44:45.536052   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.166 22 <nil> <nil>}
	I0717 00:44:45.536085   23443 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 00:44:45.806422   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 00:44:45.806448   23443 main.go:141] libmachine: Checking connection to Docker...
	I0717 00:44:45.806458   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetURL
	I0717 00:44:45.807725   23443 main.go:141] libmachine: (ha-029113-m02) DBG | Using libvirt version 6000000
	I0717 00:44:45.809981   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.810324   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:45.810348   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.810541   23443 main.go:141] libmachine: Docker is up and running!
	I0717 00:44:45.810569   23443 main.go:141] libmachine: Reticulating splines...
	I0717 00:44:45.810578   23443 client.go:171] duration metric: took 24.280485852s to LocalClient.Create
	I0717 00:44:45.810601   23443 start.go:167] duration metric: took 24.280544833s to libmachine.API.Create "ha-029113"
	I0717 00:44:45.810611   23443 start.go:293] postStartSetup for "ha-029113-m02" (driver="kvm2")
	I0717 00:44:45.810619   23443 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 00:44:45.810635   23443 main.go:141] libmachine: (ha-029113-m02) Calling .DriverName
	I0717 00:44:45.810871   23443 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 00:44:45.810896   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	I0717 00:44:45.813010   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.813352   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:45.813372   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.813564   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHPort
	I0717 00:44:45.813759   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:45.813918   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHUsername
	I0717 00:44:45.814075   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02/id_rsa Username:docker}
	I0717 00:44:45.901434   23443 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 00:44:45.905704   23443 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 00:44:45.905724   23443 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 00:44:45.905775   23443 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 00:44:45.905840   23443 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 00:44:45.905849   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> /etc/ssl/certs/112592.pem
	I0717 00:44:45.905924   23443 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 00:44:45.915672   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 00:44:45.938347   23443 start.go:296] duration metric: took 127.724614ms for postStartSetup
	I0717 00:44:45.938389   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetConfigRaw
	I0717 00:44:45.938915   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetIP
	I0717 00:44:45.941473   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.941818   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:45.941844   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.942090   23443 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/config.json ...
	I0717 00:44:45.942259   23443 start.go:128] duration metric: took 24.429673631s to createHost
	I0717 00:44:45.942279   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	I0717 00:44:45.944493   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.944885   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:45.944909   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.945027   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHPort
	I0717 00:44:45.945193   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:45.945299   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:45.945443   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHUsername
	I0717 00:44:45.945569   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:44:45.945753   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.166 22 <nil> <nil>}
	I0717 00:44:45.945765   23443 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 00:44:46.059255   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721177086.035700096
	
	I0717 00:44:46.059279   23443 fix.go:216] guest clock: 1721177086.035700096
	I0717 00:44:46.059289   23443 fix.go:229] Guest: 2024-07-17 00:44:46.035700096 +0000 UTC Remote: 2024-07-17 00:44:45.942268698 +0000 UTC m=+76.344543852 (delta=93.431398ms)
	I0717 00:44:46.059314   23443 fix.go:200] guest clock delta is within tolerance: 93.431398ms
	I0717 00:44:46.059319   23443 start.go:83] releasing machines lock for "ha-029113-m02", held for 24.546818872s
	I0717 00:44:46.059337   23443 main.go:141] libmachine: (ha-029113-m02) Calling .DriverName
	I0717 00:44:46.059590   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetIP
	I0717 00:44:46.062135   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:46.062416   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:46.062441   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:46.064914   23443 out.go:177] * Found network options:
	I0717 00:44:46.066490   23443 out.go:177]   - NO_PROXY=192.168.39.95
	W0717 00:44:46.067961   23443 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 00:44:46.067994   23443 main.go:141] libmachine: (ha-029113-m02) Calling .DriverName
	I0717 00:44:46.068503   23443 main.go:141] libmachine: (ha-029113-m02) Calling .DriverName
	I0717 00:44:46.068668   23443 main.go:141] libmachine: (ha-029113-m02) Calling .DriverName
	I0717 00:44:46.068765   23443 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 00:44:46.068802   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	W0717 00:44:46.068999   23443 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 00:44:46.069064   23443 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 00:44:46.069085   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	I0717 00:44:46.071597   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:46.071818   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:46.072006   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:46.072031   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:46.072154   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHPort
	I0717 00:44:46.072162   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:46.072181   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:46.072316   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:46.072367   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHPort
	I0717 00:44:46.072469   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHUsername
	I0717 00:44:46.072548   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:46.072858   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHUsername
	I0717 00:44:46.072857   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02/id_rsa Username:docker}
	I0717 00:44:46.073026   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02/id_rsa Username:docker}
	I0717 00:44:46.312405   23443 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 00:44:46.318777   23443 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 00:44:46.318828   23443 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:44:46.334305   23443 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 00:44:46.334321   23443 start.go:495] detecting cgroup driver to use...
	I0717 00:44:46.334378   23443 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 00:44:46.349642   23443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 00:44:46.363703   23443 docker.go:217] disabling cri-docker service (if available) ...
	I0717 00:44:46.363741   23443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 00:44:46.377732   23443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 00:44:46.391523   23443 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 00:44:46.511229   23443 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 00:44:46.672516   23443 docker.go:233] disabling docker service ...
	I0717 00:44:46.672571   23443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 00:44:46.687542   23443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 00:44:46.701406   23443 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 00:44:46.824789   23443 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 00:44:46.940462   23443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 00:44:46.955830   23443 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 00:44:46.974487   23443 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 00:44:46.974541   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:44:46.984766   23443 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 00:44:46.984828   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:44:46.994802   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:44:47.004509   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:44:47.014241   23443 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 00:44:47.024510   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:44:47.034748   23443 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:44:47.051448   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:44:47.061198   23443 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 00:44:47.070140   23443 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 00:44:47.070187   23443 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 00:44:47.083255   23443 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
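
The two commands above cover the usual CNI prerequisites: load br_netfilter when the bridge-nf-call-iptables sysctl is missing, and switch on IPv4 forwarding. A minimal Go sketch of the same checks, assuming root privileges (the snippet is illustrative, not minikube's cruntime code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // modprobe br_netfilter only if the sysctl key is absent, as in the log above.
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v: %s\n", err, out)
            }
        }
        // Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
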
	I0717 00:44:47.092470   23443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:44:47.206987   23443 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 00:44:47.343140   23443 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 00:44:47.343196   23443 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 00:44:47.348111   23443 start.go:563] Will wait 60s for crictl version
	I0717 00:44:47.348154   23443 ssh_runner.go:195] Run: which crictl
	I0717 00:44:47.351750   23443 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 00:44:47.391937   23443 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 00:44:47.392030   23443 ssh_runner.go:195] Run: crio --version
	I0717 00:44:47.418173   23443 ssh_runner.go:195] Run: crio --version
	I0717 00:44:47.450323   23443 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 00:44:47.451753   23443 out.go:177]   - env NO_PROXY=192.168.39.95
	I0717 00:44:47.452947   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetIP
	I0717 00:44:47.455382   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:47.455715   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:47.455745   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:47.455939   23443 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 00:44:47.460382   23443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
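
The /etc/hosts update above is a filter-and-append pattern: drop any stale host.minikube.internal line, then append a fresh tab-separated IP-to-name mapping pointing at the host gateway. A rough Go equivalent (ensureHostsEntry is a hypothetical helper; the real step runs the bash one-liner over SSH):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry removes existing lines for name and appends "ip<TAB>name".
    func ensureHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+name) {
                continue // stale entry for this name
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
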
	I0717 00:44:47.473520   23443 mustload.go:65] Loading cluster: ha-029113
	I0717 00:44:47.473743   23443 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:44:47.474009   23443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:44:47.474044   23443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:44:47.488577   23443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38313
	I0717 00:44:47.488983   23443 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:44:47.489429   23443 main.go:141] libmachine: Using API Version  1
	I0717 00:44:47.489453   23443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:44:47.489783   23443 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:44:47.489987   23443 main.go:141] libmachine: (ha-029113) Calling .GetState
	I0717 00:44:47.491527   23443 host.go:66] Checking if "ha-029113" exists ...
	I0717 00:44:47.491848   23443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:44:47.491884   23443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:44:47.506250   23443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33735
	I0717 00:44:47.506667   23443 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:44:47.507096   23443 main.go:141] libmachine: Using API Version  1
	I0717 00:44:47.507113   23443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:44:47.507387   23443 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:44:47.507554   23443 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:44:47.507703   23443 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113 for IP: 192.168.39.166
	I0717 00:44:47.507715   23443 certs.go:194] generating shared ca certs ...
	I0717 00:44:47.507727   23443 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:44:47.507847   23443 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 00:44:47.507881   23443 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 00:44:47.507889   23443 certs.go:256] generating profile certs ...
	I0717 00:44:47.507963   23443 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.key
	I0717 00:44:47.507984   23443 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.9ce6be2b
	I0717 00:44:47.507997   23443 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.9ce6be2b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.95 192.168.39.166 192.168.39.254]
	I0717 00:44:47.577327   23443 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.9ce6be2b ...
	I0717 00:44:47.577354   23443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.9ce6be2b: {Name:mk3f595e3dd15d8a18c9e4b6cfe842899acd5768 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:44:47.577527   23443 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.9ce6be2b ...
	I0717 00:44:47.577546   23443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.9ce6be2b: {Name:mkb6a95690716dce45479bd0140a631685524c54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:44:47.577638   23443 certs.go:381] copying /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.9ce6be2b -> /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt
	I0717 00:44:47.577799   23443 certs.go:385] copying /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.9ce6be2b -> /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key
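
The apiserver certificate generated above has to carry every address a client might dial: the in-cluster service IPs, localhost, the kube-vip VIP (192.168.39.254) and both control-plane node IPs. A short sketch with Go's crypto/x509 of how those IP SANs end up in a CA-signed serving certificate (error handling elided; this is not minikube's crypto.go):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA standing in for minikubeCA.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Serving certificate whose IP SANs match the list logged above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("192.168.39.95"), net.ParseIP("192.168.39.166"), net.ParseIP("192.168.39.254"),
            },
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
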
	I0717 00:44:47.577965   23443 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.key
	I0717 00:44:47.577983   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 00:44:47.578000   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 00:44:47.578019   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 00:44:47.578037   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 00:44:47.578054   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 00:44:47.578069   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 00:44:47.578084   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 00:44:47.578105   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 00:44:47.578165   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 00:44:47.578205   23443 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 00:44:47.578217   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 00:44:47.578249   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 00:44:47.578277   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 00:44:47.578306   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 00:44:47.578360   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 00:44:47.578407   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> /usr/share/ca-certificates/112592.pem
	I0717 00:44:47.578428   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:44:47.578444   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem -> /usr/share/ca-certificates/11259.pem
	I0717 00:44:47.578486   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:44:47.581366   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:44:47.581763   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:44:47.581793   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:44:47.581925   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:44:47.582099   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:44:47.582232   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:44:47.582369   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:44:47.650909   23443 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0717 00:44:47.655905   23443 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0717 00:44:47.669701   23443 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0717 00:44:47.674392   23443 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0717 00:44:47.685145   23443 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0717 00:44:47.689313   23443 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0717 00:44:47.699759   23443 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0717 00:44:47.703880   23443 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0717 00:44:47.714787   23443 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0717 00:44:47.718807   23443 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0717 00:44:47.730025   23443 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0717 00:44:47.733952   23443 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0717 00:44:47.744715   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 00:44:47.769348   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 00:44:47.791962   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 00:44:47.813987   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 00:44:47.836524   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0717 00:44:47.858849   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 00:44:47.882004   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 00:44:47.905053   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 00:44:47.927456   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 00:44:47.949565   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 00:44:47.971731   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 00:44:47.993759   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0717 00:44:48.011244   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0717 00:44:48.028584   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0717 00:44:48.046431   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0717 00:44:48.063901   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0717 00:44:48.081546   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0717 00:44:48.098796   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0717 00:44:48.116223   23443 ssh_runner.go:195] Run: openssl version
	I0717 00:44:48.121710   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 00:44:48.133256   23443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 00:44:48.137547   23443 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 00:44:48.137590   23443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 00:44:48.143006   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 00:44:48.153361   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 00:44:48.163661   23443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:44:48.167728   23443 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:44:48.167771   23443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:44:48.172985   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 00:44:48.183502   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 00:44:48.193639   23443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 00:44:48.198007   23443 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 00:44:48.198051   23443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 00:44:48.203472   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
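
The openssl/ln pairs above register each CA with the system trust store: compute the OpenSSL subject hash, then symlink /etc/ssl/certs/HASH.0 at the PEM file. A compact Go sketch of that step (linkCACert is a hypothetical helper that shells out to openssl, mirroring the commands in the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCACert asks openssl for the subject hash of a PEM file and links
    // /etc/ssl/certs/<hash>.0 to it, like the ln -fs commands above.
    func linkCACert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // behave like ln -fs: replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
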
	I0717 00:44:48.214228   23443 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 00:44:48.218011   23443 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 00:44:48.218057   23443 kubeadm.go:934] updating node {m02 192.168.39.166 8443 v1.30.2 crio true true} ...
	I0717 00:44:48.218124   23443 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-029113-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.166
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-029113 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 00:44:48.218145   23443 kube-vip.go:115] generating kube-vip config ...
	I0717 00:44:48.218170   23443 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 00:44:48.235840   23443 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 00:44:48.235918   23443 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
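
kube-vip runs as a static pod: the manifest rendered above only needs to land in /etc/kubernetes/manifests (the log copies it there a few lines below via scp), and the kubelet starts it without involving the API server. A minimal sketch of that drop-in step (writeStaticPod is illustrative, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // writeStaticPod drops a rendered manifest into the kubelet's static pod
    // directory; the kubelet picks it up automatically once the file appears.
    func writeStaticPod(manifestDir, name string, rendered []byte) error {
        if err := os.MkdirAll(manifestDir, 0755); err != nil {
            return err
        }
        return os.WriteFile(filepath.Join(manifestDir, name), rendered, 0644)
    }

    func main() {
        kubeVipYAML := []byte("apiVersion: v1\nkind: Pod\n# ...rendered kube-vip manifest...\n")
        if err := writeStaticPod("/etc/kubernetes/manifests", "kube-vip.yaml", kubeVipYAML); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
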
	I0717 00:44:48.235966   23443 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 00:44:48.245971   23443 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0717 00:44:48.246012   23443 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0717 00:44:48.256116   23443 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0717 00:44:48.256148   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 00:44:48.256183   23443 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.30.2/kubeadm
	I0717 00:44:48.256205   23443 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.30.2/kubelet
	I0717 00:44:48.256217   23443 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 00:44:48.260498   23443 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0717 00:44:48.260520   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0717 00:45:30.425423   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 00:45:30.425504   23443 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 00:45:30.432852   23443 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0717 00:45:30.432882   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0717 00:46:16.676403   23443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:46:16.692621   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 00:46:16.692752   23443 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 00:46:16.697375   23443 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0717 00:46:16.697402   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
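
The download URLs above carry a "?checksum=file:...sha256" hint: each kubelet/kubeadm/kubectl binary is checked against its published SHA-256 before being copied onto the node. A self-contained Go sketch of that verify-then-install idea (fetchVerified is a hypothetical helper, not minikube's download.go):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    // fetchVerified downloads binURL to dest and rejects it if its SHA-256
    // does not match the first field of the file served at sumURL.
    func fetchVerified(binURL, sumURL, dest string) error {
        sumResp, err := http.Get(sumURL)
        if err != nil {
            return err
        }
        defer sumResp.Body.Close()
        sumBytes, err := io.ReadAll(sumResp.Body)
        if err != nil {
            return err
        }
        fields := strings.Fields(string(sumBytes))
        if len(fields) == 0 {
            return fmt.Errorf("empty checksum file %s", sumURL)
        }
        want := fields[0]

        binResp, err := http.Get(binURL)
        if err != nil {
            return err
        }
        defer binResp.Body.Close()

        out, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer out.Close()
        h := sha256.New()
        if _, err := io.Copy(io.MultiWriter(out, h), binResp.Body); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != want {
            return fmt.Errorf("checksum mismatch for %s: got %s want %s", binURL, got, want)
        }
        return os.Chmod(dest, 0755)
    }

    func main() {
        base := "https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet"
        if err := fetchVerified(base, base+".sha256", "/tmp/kubelet"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
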
	I0717 00:46:17.071538   23443 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0717 00:46:17.081503   23443 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0717 00:46:17.099148   23443 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 00:46:17.116667   23443 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0717 00:46:17.133894   23443 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 00:46:17.138280   23443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:46:17.151248   23443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:46:17.272941   23443 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:46:17.290512   23443 host.go:66] Checking if "ha-029113" exists ...
	I0717 00:46:17.290911   23443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:46:17.290948   23443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:46:17.306307   23443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35611
	I0717 00:46:17.306772   23443 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:46:17.307306   23443 main.go:141] libmachine: Using API Version  1
	I0717 00:46:17.307333   23443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:46:17.307632   23443 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:46:17.307815   23443 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:46:17.307973   23443 start.go:317] joinCluster: &{Name:ha-029113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-029113 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:46:17.308077   23443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0717 00:46:17.308091   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:46:17.311008   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:46:17.311389   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:46:17.311411   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:46:17.311609   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:46:17.311866   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:46:17.312017   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:46:17.312170   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:46:17.473835   23443 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:46:17.473894   23443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qxsifa.szb8lo03p23cph9a --discovery-token-ca-cert-hash sha256:c106fa53fe39228066a4d6d0a3d1523262a277fcc4b4de2aef480ed92843f134 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-029113-m02 --control-plane --apiserver-advertise-address=192.168.39.166 --apiserver-bind-port=8443"
	I0717 00:46:39.394930   23443 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qxsifa.szb8lo03p23cph9a --discovery-token-ca-cert-hash sha256:c106fa53fe39228066a4d6d0a3d1523262a277fcc4b4de2aef480ed92843f134 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-029113-m02 --control-plane --apiserver-advertise-address=192.168.39.166 --apiserver-bind-port=8443": (21.920994841s)
	I0717 00:46:39.394975   23443 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0717 00:46:39.825420   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-029113-m02 minikube.k8s.io/updated_at=2024_07_17T00_46_39_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185 minikube.k8s.io/name=ha-029113 minikube.k8s.io/primary=false
	I0717 00:46:39.944995   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-029113-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0717 00:46:40.064513   23443 start.go:319] duration metric: took 22.75653534s to joinCluster
	I0717 00:46:40.064615   23443 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:46:40.064937   23443 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:46:40.066305   23443 out.go:177] * Verifying Kubernetes components...
	I0717 00:46:40.067294   23443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:46:40.254167   23443 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:46:40.283487   23443 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 00:46:40.283835   23443 kapi.go:59] client config for ha-029113: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.crt", KeyFile:"/home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.key", CAFile:"/home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d01f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0717 00:46:40.283927   23443 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.95:8443
	I0717 00:46:40.284215   23443 node_ready.go:35] waiting up to 6m0s for node "ha-029113-m02" to be "Ready" ...
	I0717 00:46:40.284345   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:40.284358   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:40.284371   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:40.284375   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:40.298345   23443 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0717 00:46:40.785405   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:40.785428   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:40.785437   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:40.785441   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:40.789173   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:41.285260   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:41.285283   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:41.285293   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:41.285298   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:41.289165   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:41.784508   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:41.784533   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:41.784540   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:41.784546   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:41.787864   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:42.285159   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:42.285186   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:42.285196   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:42.285201   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:42.288243   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:42.288953   23443 node_ready.go:53] node "ha-029113-m02" has status "Ready":"False"
	I0717 00:46:42.784846   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:42.784883   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:42.784897   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:42.784902   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:42.789853   23443 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:46:43.284594   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:43.284618   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:43.284628   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:43.284633   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:43.288162   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:43.785076   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:43.785096   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:43.785105   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:43.785108   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:43.789071   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:44.284682   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:44.284702   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:44.284709   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:44.284714   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:44.288040   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:44.784655   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:44.784675   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:44.784683   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:44.784686   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:44.787807   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:44.788647   23443 node_ready.go:53] node "ha-029113-m02" has status "Ready":"False"
	I0717 00:46:45.285188   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:45.285214   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:45.285222   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:45.285226   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:45.288258   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:45.785228   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:45.785250   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:45.785258   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:45.785262   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:45.788864   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:46.285064   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:46.285086   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:46.285096   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:46.285104   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:46.288877   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:46.785321   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:46.785345   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:46.785356   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:46.785365   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:46.788427   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:46.789072   23443 node_ready.go:53] node "ha-029113-m02" has status "Ready":"False"
	I0717 00:46:47.284430   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:47.284456   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:47.284466   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:47.284471   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:47.287994   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:47.785131   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:47.785152   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:47.785159   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:47.785163   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:47.788266   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:48.285203   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:48.285222   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:48.285229   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:48.285234   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:48.288790   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:48.784460   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:48.784482   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:48.784490   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:48.784495   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:48.787573   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:49.284601   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:49.284622   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:49.284634   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:49.284643   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:49.288480   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:49.289249   23443 node_ready.go:53] node "ha-029113-m02" has status "Ready":"False"
	I0717 00:46:49.785350   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:49.785373   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:49.785384   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:49.785392   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:49.788492   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:50.285416   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:50.285437   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:50.285445   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:50.285450   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:50.288808   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:50.785052   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:50.785072   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:50.785080   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:50.785086   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:50.788606   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:51.285137   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:51.285159   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:51.285167   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:51.285171   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:51.288279   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:51.784648   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:51.784668   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:51.784677   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:51.784682   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:51.787854   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:51.788548   23443 node_ready.go:53] node "ha-029113-m02" has status "Ready":"False"
	I0717 00:46:52.284844   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:52.284865   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:52.284873   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:52.284877   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:52.288326   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:52.784372   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:52.784393   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:52.784404   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:52.784407   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:52.787594   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:53.284770   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:53.284788   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:53.284797   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:53.284800   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:53.287700   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:46:53.784806   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:53.784831   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:53.784843   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:53.784850   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:53.788358   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:53.788974   23443 node_ready.go:53] node "ha-029113-m02" has status "Ready":"False"
	I0717 00:46:54.284992   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:54.285014   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:54.285023   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:54.285028   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:54.288147   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:54.784702   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:54.784724   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:54.784731   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:54.784737   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:54.788084   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:55.285150   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:55.285180   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:55.285190   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:55.285195   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:55.288527   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:55.785452   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:55.785473   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:55.785481   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:55.785486   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:55.788704   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:55.789401   23443 node_ready.go:53] node "ha-029113-m02" has status "Ready":"False"
	I0717 00:46:56.284802   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:56.284821   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:56.284830   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:56.284835   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:56.288441   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:56.784811   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:56.784837   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:56.784848   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:56.784854   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:56.788360   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:57.284771   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:57.284793   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:57.284801   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:57.284805   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:57.288469   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:57.784918   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:57.784943   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:57.784955   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:57.784963   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:57.787851   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:46:58.284624   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:58.284648   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:58.284658   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:58.284664   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:58.287842   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:58.288449   23443 node_ready.go:53] node "ha-029113-m02" has status "Ready":"False"
	I0717 00:46:58.784857   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:58.784876   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:58.784883   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:58.784887   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:58.787484   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:46:59.284489   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:59.284509   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:59.284516   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:59.284520   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:59.287792   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:59.784365   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:59.784395   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:59.784403   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:59.784408   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:59.787927   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:00.285086   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:47:00.285110   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:00.285117   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:00.285121   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:00.288385   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:00.288926   23443 node_ready.go:49] node "ha-029113-m02" has status "Ready":"True"
	I0717 00:47:00.288943   23443 node_ready.go:38] duration metric: took 20.004703741s for node "ha-029113-m02" to be "Ready" ...
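
The polling above boils down to repeated GETs of the Node object until its Ready condition turns True, bounded by the 6m wait. The same loop written against client-go (a sketch; the kubeconfig path and 500ms interval are illustrative, not minikube's exact node_ready.go):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node until its NodeReady condition reports True.
    func waitNodeReady(client kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API errors: keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(waitNodeReady(client, "ha-029113-m02", 6*time.Minute))
    }
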
	I0717 00:47:00.288950   23443 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 00:47:00.289029   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods
	I0717 00:47:00.289037   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:00.289045   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:00.289050   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:00.296020   23443 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:47:00.302225   23443 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-62m67" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:00.302297   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-62m67
	I0717 00:47:00.302309   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:00.302319   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:00.302327   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:00.305104   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:47:00.305672   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:47:00.305685   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:00.305692   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:00.305696   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:00.308163   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:47:00.308719   23443 pod_ready.go:92] pod "coredns-7db6d8ff4d-62m67" in "kube-system" namespace has status "Ready":"True"
	I0717 00:47:00.308733   23443 pod_ready.go:81] duration metric: took 6.486043ms for pod "coredns-7db6d8ff4d-62m67" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:00.308741   23443 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xdlls" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:00.308788   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xdlls
	I0717 00:47:00.308795   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:00.308802   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:00.308805   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:00.311143   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:47:00.311613   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:47:00.311626   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:00.311632   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:00.311636   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:00.313674   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:47:00.314129   23443 pod_ready.go:92] pod "coredns-7db6d8ff4d-xdlls" in "kube-system" namespace has status "Ready":"True"
	I0717 00:47:00.314143   23443 pod_ready.go:81] duration metric: took 5.396922ms for pod "coredns-7db6d8ff4d-xdlls" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:00.314150   23443 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:00.314186   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/etcd-ha-029113
	I0717 00:47:00.314193   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:00.314199   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:00.314204   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:00.316320   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:47:00.316917   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:47:00.316928   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:00.316934   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:00.316937   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:00.319330   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:47:00.319788   23443 pod_ready.go:92] pod "etcd-ha-029113" in "kube-system" namespace has status "Ready":"True"
	I0717 00:47:00.319802   23443 pod_ready.go:81] duration metric: took 5.646782ms for pod "etcd-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:00.319808   23443 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:00.319852   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/etcd-ha-029113-m02
	I0717 00:47:00.319862   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:00.319871   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:00.319878   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:00.322504   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:47:00.323427   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:47:00.323439   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:00.323446   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:00.323450   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:00.325614   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:47:00.326018   23443 pod_ready.go:92] pod "etcd-ha-029113-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:47:00.326035   23443 pod_ready.go:81] duration metric: took 6.219819ms for pod "etcd-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:00.326048   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:00.485438   23443 request.go:629] Waited for 159.341918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-029113
	I0717 00:47:00.485524   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-029113
	I0717 00:47:00.485534   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:00.485542   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:00.485549   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:00.489065   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:00.685968   23443 request.go:629] Waited for 196.009264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:47:00.686028   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:47:00.686046   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:00.686055   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:00.686060   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:00.689388   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:00.689984   23443 pod_ready.go:92] pod "kube-apiserver-ha-029113" in "kube-system" namespace has status "Ready":"True"
	I0717 00:47:00.689999   23443 pod_ready.go:81] duration metric: took 363.94506ms for pod "kube-apiserver-ha-029113" in "kube-system" namespace to be "Ready" ...
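	(Editorial note) The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side token-bucket rate limiter, not from server-side flow control. A minimal sketch of where that limiter is configured, assuming a kubeconfig path; the QPS/Burst values shown are client-go's historical defaults, not minikube's settings:

	package throttling

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// newClientset builds a clientset whose request rate is bounded by the
	// client-side token bucket; requests beyond Burst wait before being sent,
	// producing the "Waited for ... due to client-side throttling" log lines.
	func newClientset(kubeconfig string, qps float32, burst int) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = qps     // sustained requests per second (client-go default: 5)
		cfg.Burst = burst // short-term burst allowance (client-go default: 10)
		return kubernetes.NewForConfig(cfg)
	}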
	I0717 00:47:00.690009   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:00.885313   23443 request.go:629] Waited for 195.246505ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-029113-m02
	I0717 00:47:00.885373   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-029113-m02
	I0717 00:47:00.885378   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:00.885383   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:00.885386   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:00.888552   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:01.085428   23443 request.go:629] Waited for 196.22971ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:47:01.085503   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:47:01.085508   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:01.085516   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:01.085519   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:01.089022   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:01.089673   23443 pod_ready.go:92] pod "kube-apiserver-ha-029113-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:47:01.089690   23443 pod_ready.go:81] duration metric: took 399.675191ms for pod "kube-apiserver-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:01.089699   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:01.285788   23443 request.go:629] Waited for 196.037905ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-029113
	I0717 00:47:01.285850   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-029113
	I0717 00:47:01.285858   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:01.285868   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:01.285875   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:01.288963   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:01.485823   23443 request.go:629] Waited for 196.363674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:47:01.485905   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:47:01.485913   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:01.485923   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:01.485932   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:01.489211   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:01.489725   23443 pod_ready.go:92] pod "kube-controller-manager-ha-029113" in "kube-system" namespace has status "Ready":"True"
	I0717 00:47:01.489750   23443 pod_ready.go:81] duration metric: took 400.046262ms for pod "kube-controller-manager-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:01.489760   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:01.685086   23443 request.go:629] Waited for 195.254717ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-029113-m02
	I0717 00:47:01.685161   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-029113-m02
	I0717 00:47:01.685170   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:01.685178   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:01.685183   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:01.688673   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:01.885694   23443 request.go:629] Waited for 196.329757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:47:01.885755   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:47:01.885760   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:01.885767   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:01.885772   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:01.888957   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:01.889401   23443 pod_ready.go:92] pod "kube-controller-manager-ha-029113-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:47:01.889418   23443 pod_ready.go:81] duration metric: took 399.652066ms for pod "kube-controller-manager-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:01.889427   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2wz5p" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:02.085632   23443 request.go:629] Waited for 196.139901ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2wz5p
	I0717 00:47:02.085691   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2wz5p
	I0717 00:47:02.085698   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:02.085707   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:02.085714   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:02.089129   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:02.286065   23443 request.go:629] Waited for 196.382564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:47:02.286129   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:47:02.286137   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:02.286146   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:02.286153   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:02.289793   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:02.290283   23443 pod_ready.go:92] pod "kube-proxy-2wz5p" in "kube-system" namespace has status "Ready":"True"
	I0717 00:47:02.290308   23443 pod_ready.go:81] duration metric: took 400.873927ms for pod "kube-proxy-2wz5p" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:02.290322   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hg2kp" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:02.485968   23443 request.go:629] Waited for 195.585298ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg2kp
	I0717 00:47:02.486038   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg2kp
	I0717 00:47:02.486044   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:02.486051   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:02.486054   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:02.489411   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:02.685828   23443 request.go:629] Waited for 195.861626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:47:02.685879   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:47:02.685884   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:02.685892   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:02.685895   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:02.689465   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:02.689972   23443 pod_ready.go:92] pod "kube-proxy-hg2kp" in "kube-system" namespace has status "Ready":"True"
	I0717 00:47:02.689993   23443 pod_ready.go:81] duration metric: took 399.664283ms for pod "kube-proxy-hg2kp" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:02.690002   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:02.885138   23443 request.go:629] Waited for 195.073995ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-029113
	I0717 00:47:02.885208   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-029113
	I0717 00:47:02.885215   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:02.885230   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:02.885239   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:02.888801   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:03.085815   23443 request.go:629] Waited for 196.390923ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:47:03.085861   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:47:03.085866   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:03.085875   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:03.085881   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:03.089147   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:03.089749   23443 pod_ready.go:92] pod "kube-scheduler-ha-029113" in "kube-system" namespace has status "Ready":"True"
	I0717 00:47:03.089775   23443 pod_ready.go:81] duration metric: took 399.765556ms for pod "kube-scheduler-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:03.089789   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:03.285832   23443 request.go:629] Waited for 195.977772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-029113-m02
	I0717 00:47:03.285902   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-029113-m02
	I0717 00:47:03.285909   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:03.285918   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:03.285935   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:03.289075   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:03.485171   23443 request.go:629] Waited for 195.292447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:47:03.485219   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:47:03.485224   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:03.485231   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:03.485235   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:03.488367   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:03.488968   23443 pod_ready.go:92] pod "kube-scheduler-ha-029113-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:47:03.488991   23443 pod_ready.go:81] duration metric: took 399.189538ms for pod "kube-scheduler-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:03.489003   23443 pod_ready.go:38] duration metric: took 3.200018447s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
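	(Editorial note) Each pod_ready check above fetches the pod, then its node, before declaring the pod Ready. A compact client-go sketch of the pod-side condition test, assuming a clientset built as in the earlier sketch; the function name is illustrative:

	package podready

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// isPodReady reports whether the named pod has the PodReady condition set
	// to True, mirroring the pod_ready.go:92 checks in the log above.
	func isPodReady(ctx context.Context, cs kubernetes.Interface, namespace, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}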
	I0717 00:47:03.489020   23443 api_server.go:52] waiting for apiserver process to appear ...
	I0717 00:47:03.489081   23443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:47:03.508331   23443 api_server.go:72] duration metric: took 23.443679601s to wait for apiserver process to appear ...
	I0717 00:47:03.508351   23443 api_server.go:88] waiting for apiserver healthz status ...
	I0717 00:47:03.508367   23443 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8443/healthz ...
	I0717 00:47:03.512924   23443 api_server.go:279] https://192.168.39.95:8443/healthz returned 200:
	ok
	I0717 00:47:03.512977   23443 round_trippers.go:463] GET https://192.168.39.95:8443/version
	I0717 00:47:03.512984   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:03.512998   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:03.513006   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:03.513923   23443 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 00:47:03.514022   23443 api_server.go:141] control plane version: v1.30.2
	I0717 00:47:03.514040   23443 api_server.go:131] duration metric: took 5.683875ms to wait for apiserver health ...
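	(Editorial note) The healthz probe above is a plain HTTPS GET against the API server; a 200 response with body "ok" is treated as healthy, after which /version reports the control-plane version (v1.30.2 here). A minimal net/http sketch of the same probe; the InsecureSkipVerify shortcut is for illustration only, since minikube verifies against the cluster CA:

	package apihealth

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz GETs https://<host>:8443/healthz and reports whether the
	// API server answered 200 "ok", as in the api_server.go:279 line above.
	func checkHealthz(host string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Illustrative shortcut: skip TLS verification instead of loading
			// the cluster CA bundle.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(fmt.Sprintf("https://%s:8443/healthz", host))
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		return nil
	}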
	I0717 00:47:03.514049   23443 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 00:47:03.685451   23443 request.go:629] Waited for 171.349564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods
	I0717 00:47:03.685523   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods
	I0717 00:47:03.685532   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:03.685540   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:03.685547   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:03.692926   23443 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 00:47:03.697988   23443 system_pods.go:59] 17 kube-system pods found
	I0717 00:47:03.698021   23443 system_pods.go:61] "coredns-7db6d8ff4d-62m67" [5029f9dc-6792-44d9-9296-ec5ab6d72274] Running
	I0717 00:47:03.698028   23443 system_pods.go:61] "coredns-7db6d8ff4d-xdlls" [4344b971-b979-42f8-8fa8-01f2d64bb51a] Running
	I0717 00:47:03.698031   23443 system_pods.go:61] "etcd-ha-029113" [10122569-9dc1-4680-8d11-aa7d4c719cec] Running
	I0717 00:47:03.698035   23443 system_pods.go:61] "etcd-ha-029113-m02" [a0f65752-ddcf-493d-bc0b-e4cb2ac8d635] Running
	I0717 00:47:03.698038   23443 system_pods.go:61] "kindnet-8xg7d" [a612c634-49ef-4357-9b36-f5cc6604bdd7] Running
	I0717 00:47:03.698041   23443 system_pods.go:61] "kindnet-k7vzq" [8198e4a4-080e-482a-a0b3-58e796bdd230] Running
	I0717 00:47:03.698044   23443 system_pods.go:61] "kube-apiserver-ha-029113" [167d337c-6406-4f80-8a60-aebdca26066b] Running
	I0717 00:47:03.698047   23443 system_pods.go:61] "kube-apiserver-ha-029113-m02" [d64aa0f0-e41f-4a5e-b4fe-48665061673e] Running
	I0717 00:47:03.698050   23443 system_pods.go:61] "kube-controller-manager-ha-029113" [8f1ee225-f6a3-4943-976a-9cc14607a654] Running
	I0717 00:47:03.698057   23443 system_pods.go:61] "kube-controller-manager-ha-029113-m02" [d180826c-b18e-49a7-8a1a-576c1a64fd51] Running
	I0717 00:47:03.698060   23443 system_pods.go:61] "kube-proxy-2wz5p" [285b947d-fa11-40fb-befa-1fa4451787d4] Running
	I0717 00:47:03.698063   23443 system_pods.go:61] "kube-proxy-hg2kp" [db9243f4-bcc0-406a-a8f2-ccdbc00f6341] Running
	I0717 00:47:03.698066   23443 system_pods.go:61] "kube-scheduler-ha-029113" [e3b5629d-5647-437e-a87c-0c91f2cd26d7] Running
	I0717 00:47:03.698068   23443 system_pods.go:61] "kube-scheduler-ha-029113-m02" [0f986464-8d17-4727-906b-4d8c58afbe5d] Running
	I0717 00:47:03.698071   23443 system_pods.go:61] "kube-vip-ha-029113" [985763eb-2a45-4820-a3db-e2af6d9291e0] Running
	I0717 00:47:03.698074   23443 system_pods.go:61] "kube-vip-ha-029113-m02" [0d64dace-cdb3-4abb-8d92-b205dc611777] Running
	I0717 00:47:03.698077   23443 system_pods.go:61] "storage-provisioner" [b9f04e5d-469e-4432-bd31-dbe772194f84] Running
	I0717 00:47:03.698082   23443 system_pods.go:74] duration metric: took 184.028654ms to wait for pod list to return data ...
	I0717 00:47:03.698092   23443 default_sa.go:34] waiting for default service account to be created ...
	I0717 00:47:03.885527   23443 request.go:629] Waited for 187.360853ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/default/serviceaccounts
	I0717 00:47:03.885587   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/default/serviceaccounts
	I0717 00:47:03.885592   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:03.885600   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:03.885604   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:03.888683   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:03.888912   23443 default_sa.go:45] found service account: "default"
	I0717 00:47:03.888931   23443 default_sa.go:55] duration metric: took 190.833114ms for default service account to be created ...
	I0717 00:47:03.888939   23443 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 00:47:04.085295   23443 request.go:629] Waited for 196.304645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods
	I0717 00:47:04.085342   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods
	I0717 00:47:04.085348   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:04.085355   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:04.085359   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:04.090365   23443 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:47:04.094886   23443 system_pods.go:86] 17 kube-system pods found
	I0717 00:47:04.094910   23443 system_pods.go:89] "coredns-7db6d8ff4d-62m67" [5029f9dc-6792-44d9-9296-ec5ab6d72274] Running
	I0717 00:47:04.094917   23443 system_pods.go:89] "coredns-7db6d8ff4d-xdlls" [4344b971-b979-42f8-8fa8-01f2d64bb51a] Running
	I0717 00:47:04.094921   23443 system_pods.go:89] "etcd-ha-029113" [10122569-9dc1-4680-8d11-aa7d4c719cec] Running
	I0717 00:47:04.094926   23443 system_pods.go:89] "etcd-ha-029113-m02" [a0f65752-ddcf-493d-bc0b-e4cb2ac8d635] Running
	I0717 00:47:04.094932   23443 system_pods.go:89] "kindnet-8xg7d" [a612c634-49ef-4357-9b36-f5cc6604bdd7] Running
	I0717 00:47:04.094936   23443 system_pods.go:89] "kindnet-k7vzq" [8198e4a4-080e-482a-a0b3-58e796bdd230] Running
	I0717 00:47:04.094939   23443 system_pods.go:89] "kube-apiserver-ha-029113" [167d337c-6406-4f80-8a60-aebdca26066b] Running
	I0717 00:47:04.094944   23443 system_pods.go:89] "kube-apiserver-ha-029113-m02" [d64aa0f0-e41f-4a5e-b4fe-48665061673e] Running
	I0717 00:47:04.094950   23443 system_pods.go:89] "kube-controller-manager-ha-029113" [8f1ee225-f6a3-4943-976a-9cc14607a654] Running
	I0717 00:47:04.094954   23443 system_pods.go:89] "kube-controller-manager-ha-029113-m02" [d180826c-b18e-49a7-8a1a-576c1a64fd51] Running
	I0717 00:47:04.094960   23443 system_pods.go:89] "kube-proxy-2wz5p" [285b947d-fa11-40fb-befa-1fa4451787d4] Running
	I0717 00:47:04.094965   23443 system_pods.go:89] "kube-proxy-hg2kp" [db9243f4-bcc0-406a-a8f2-ccdbc00f6341] Running
	I0717 00:47:04.094971   23443 system_pods.go:89] "kube-scheduler-ha-029113" [e3b5629d-5647-437e-a87c-0c91f2cd26d7] Running
	I0717 00:47:04.094975   23443 system_pods.go:89] "kube-scheduler-ha-029113-m02" [0f986464-8d17-4727-906b-4d8c58afbe5d] Running
	I0717 00:47:04.094982   23443 system_pods.go:89] "kube-vip-ha-029113" [985763eb-2a45-4820-a3db-e2af6d9291e0] Running
	I0717 00:47:04.094985   23443 system_pods.go:89] "kube-vip-ha-029113-m02" [0d64dace-cdb3-4abb-8d92-b205dc611777] Running
	I0717 00:47:04.094989   23443 system_pods.go:89] "storage-provisioner" [b9f04e5d-469e-4432-bd31-dbe772194f84] Running
	I0717 00:47:04.094994   23443 system_pods.go:126] duration metric: took 206.051848ms to wait for k8s-apps to be running ...
	I0717 00:47:04.095003   23443 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 00:47:04.095042   23443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:47:04.110570   23443 system_svc.go:56] duration metric: took 15.558256ms WaitForService to wait for kubelet
	I0717 00:47:04.110597   23443 kubeadm.go:582] duration metric: took 24.045945789s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:47:04.110617   23443 node_conditions.go:102] verifying NodePressure condition ...
	I0717 00:47:04.286015   23443 request.go:629] Waited for 175.332019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes
	I0717 00:47:04.286074   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes
	I0717 00:47:04.286091   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:04.286098   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:04.286105   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:04.289782   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:04.290663   23443 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 00:47:04.290685   23443 node_conditions.go:123] node cpu capacity is 2
	I0717 00:47:04.290705   23443 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 00:47:04.290709   23443 node_conditions.go:123] node cpu capacity is 2
	I0717 00:47:04.290713   23443 node_conditions.go:105] duration metric: took 180.091395ms to run NodePressure ...
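	(Editorial note) The NodePressure step above lists all nodes and records each node's ephemeral-storage and CPU capacity (17734596Ki and 2 for both members here). A small client-go sketch of reading those capacity fields; the output format is illustrative:

	package nodecap

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// printNodeCapacity lists nodes and prints the same capacity fields the
	// node_conditions.go lines above report for each cluster member.
	func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		}
		return nil
	}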
	I0717 00:47:04.290725   23443 start.go:241] waiting for startup goroutines ...
	I0717 00:47:04.290767   23443 start.go:255] writing updated cluster config ...
	I0717 00:47:04.292762   23443 out.go:177] 
	I0717 00:47:04.294297   23443 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:47:04.294405   23443 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/config.json ...
	I0717 00:47:04.296163   23443 out.go:177] * Starting "ha-029113-m03" control-plane node in "ha-029113" cluster
	I0717 00:47:04.297425   23443 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:47:04.297446   23443 cache.go:56] Caching tarball of preloaded images
	I0717 00:47:04.297538   23443 preload.go:172] Found /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 00:47:04.297550   23443 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:47:04.297634   23443 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/config.json ...
	I0717 00:47:04.297809   23443 start.go:360] acquireMachinesLock for ha-029113-m03: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 00:47:04.297851   23443 start.go:364] duration metric: took 25.027µs to acquireMachinesLock for "ha-029113-m03"
	I0717 00:47:04.297867   23443 start.go:93] Provisioning new machine with config: &{Name:ha-029113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-029113 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:47:04.297953   23443 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0717 00:47:04.299345   23443 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 00:47:04.299455   23443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:47:04.299497   23443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:47:04.314205   23443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46685
	I0717 00:47:04.314783   23443 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:47:04.315268   23443 main.go:141] libmachine: Using API Version  1
	I0717 00:47:04.315290   23443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:47:04.315618   23443 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:47:04.315823   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetMachineName
	I0717 00:47:04.315982   23443 main.go:141] libmachine: (ha-029113-m03) Calling .DriverName
	I0717 00:47:04.316142   23443 start.go:159] libmachine.API.Create for "ha-029113" (driver="kvm2")
	I0717 00:47:04.316175   23443 client.go:168] LocalClient.Create starting
	I0717 00:47:04.316220   23443 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem
	I0717 00:47:04.316260   23443 main.go:141] libmachine: Decoding PEM data...
	I0717 00:47:04.316282   23443 main.go:141] libmachine: Parsing certificate...
	I0717 00:47:04.316342   23443 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem
	I0717 00:47:04.316367   23443 main.go:141] libmachine: Decoding PEM data...
	I0717 00:47:04.316384   23443 main.go:141] libmachine: Parsing certificate...
	I0717 00:47:04.316409   23443 main.go:141] libmachine: Running pre-create checks...
	I0717 00:47:04.316420   23443 main.go:141] libmachine: (ha-029113-m03) Calling .PreCreateCheck
	I0717 00:47:04.316582   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetConfigRaw
	I0717 00:47:04.316987   23443 main.go:141] libmachine: Creating machine...
	I0717 00:47:04.317003   23443 main.go:141] libmachine: (ha-029113-m03) Calling .Create
	I0717 00:47:04.317147   23443 main.go:141] libmachine: (ha-029113-m03) Creating KVM machine...
	I0717 00:47:04.318346   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found existing default KVM network
	I0717 00:47:04.318500   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found existing private KVM network mk-ha-029113
	I0717 00:47:04.318661   23443 main.go:141] libmachine: (ha-029113-m03) Setting up store path in /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03 ...
	I0717 00:47:04.318684   23443 main.go:141] libmachine: (ha-029113-m03) Building disk image from file:///home/jenkins/minikube-integration/19264-3908/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 00:47:04.318744   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:04.318656   24534 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 00:47:04.318858   23443 main.go:141] libmachine: (ha-029113-m03) Downloading /home/jenkins/minikube-integration/19264-3908/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19264-3908/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 00:47:04.534160   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:04.534009   24534 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/id_rsa...
	I0717 00:47:04.597323   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:04.597226   24534 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/ha-029113-m03.rawdisk...
	I0717 00:47:04.597353   23443 main.go:141] libmachine: (ha-029113-m03) DBG | Writing magic tar header
	I0717 00:47:04.597367   23443 main.go:141] libmachine: (ha-029113-m03) DBG | Writing SSH key tar header
	I0717 00:47:04.597378   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:04.597333   24534 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03 ...
	I0717 00:47:04.597451   23443 main.go:141] libmachine: (ha-029113-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03
	I0717 00:47:04.597482   23443 main.go:141] libmachine: (ha-029113-m03) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03 (perms=drwx------)
	I0717 00:47:04.597494   23443 main.go:141] libmachine: (ha-029113-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube/machines
	I0717 00:47:04.597527   23443 main.go:141] libmachine: (ha-029113-m03) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube/machines (perms=drwxr-xr-x)
	I0717 00:47:04.597552   23443 main.go:141] libmachine: (ha-029113-m03) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube (perms=drwxr-xr-x)
	I0717 00:47:04.597566   23443 main.go:141] libmachine: (ha-029113-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 00:47:04.597589   23443 main.go:141] libmachine: (ha-029113-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908
	I0717 00:47:04.597602   23443 main.go:141] libmachine: (ha-029113-m03) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908 (perms=drwxrwxr-x)
	I0717 00:47:04.597613   23443 main.go:141] libmachine: (ha-029113-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 00:47:04.597630   23443 main.go:141] libmachine: (ha-029113-m03) DBG | Checking permissions on dir: /home/jenkins
	I0717 00:47:04.597643   23443 main.go:141] libmachine: (ha-029113-m03) DBG | Checking permissions on dir: /home
	I0717 00:47:04.597664   23443 main.go:141] libmachine: (ha-029113-m03) DBG | Skipping /home - not owner
	I0717 00:47:04.597677   23443 main.go:141] libmachine: (ha-029113-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 00:47:04.597688   23443 main.go:141] libmachine: (ha-029113-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 00:47:04.597706   23443 main.go:141] libmachine: (ha-029113-m03) Creating domain...
	I0717 00:47:04.598438   23443 main.go:141] libmachine: (ha-029113-m03) define libvirt domain using xml: 
	I0717 00:47:04.598457   23443 main.go:141] libmachine: (ha-029113-m03) <domain type='kvm'>
	I0717 00:47:04.598494   23443 main.go:141] libmachine: (ha-029113-m03)   <name>ha-029113-m03</name>
	I0717 00:47:04.598522   23443 main.go:141] libmachine: (ha-029113-m03)   <memory unit='MiB'>2200</memory>
	I0717 00:47:04.598531   23443 main.go:141] libmachine: (ha-029113-m03)   <vcpu>2</vcpu>
	I0717 00:47:04.598537   23443 main.go:141] libmachine: (ha-029113-m03)   <features>
	I0717 00:47:04.598545   23443 main.go:141] libmachine: (ha-029113-m03)     <acpi/>
	I0717 00:47:04.598570   23443 main.go:141] libmachine: (ha-029113-m03)     <apic/>
	I0717 00:47:04.598595   23443 main.go:141] libmachine: (ha-029113-m03)     <pae/>
	I0717 00:47:04.598617   23443 main.go:141] libmachine: (ha-029113-m03)     
	I0717 00:47:04.598626   23443 main.go:141] libmachine: (ha-029113-m03)   </features>
	I0717 00:47:04.598638   23443 main.go:141] libmachine: (ha-029113-m03)   <cpu mode='host-passthrough'>
	I0717 00:47:04.598648   23443 main.go:141] libmachine: (ha-029113-m03)   
	I0717 00:47:04.598657   23443 main.go:141] libmachine: (ha-029113-m03)   </cpu>
	I0717 00:47:04.598668   23443 main.go:141] libmachine: (ha-029113-m03)   <os>
	I0717 00:47:04.598677   23443 main.go:141] libmachine: (ha-029113-m03)     <type>hvm</type>
	I0717 00:47:04.598695   23443 main.go:141] libmachine: (ha-029113-m03)     <boot dev='cdrom'/>
	I0717 00:47:04.598712   23443 main.go:141] libmachine: (ha-029113-m03)     <boot dev='hd'/>
	I0717 00:47:04.598726   23443 main.go:141] libmachine: (ha-029113-m03)     <bootmenu enable='no'/>
	I0717 00:47:04.598735   23443 main.go:141] libmachine: (ha-029113-m03)   </os>
	I0717 00:47:04.598744   23443 main.go:141] libmachine: (ha-029113-m03)   <devices>
	I0717 00:47:04.598752   23443 main.go:141] libmachine: (ha-029113-m03)     <disk type='file' device='cdrom'>
	I0717 00:47:04.598763   23443 main.go:141] libmachine: (ha-029113-m03)       <source file='/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/boot2docker.iso'/>
	I0717 00:47:04.598772   23443 main.go:141] libmachine: (ha-029113-m03)       <target dev='hdc' bus='scsi'/>
	I0717 00:47:04.598780   23443 main.go:141] libmachine: (ha-029113-m03)       <readonly/>
	I0717 00:47:04.598792   23443 main.go:141] libmachine: (ha-029113-m03)     </disk>
	I0717 00:47:04.598805   23443 main.go:141] libmachine: (ha-029113-m03)     <disk type='file' device='disk'>
	I0717 00:47:04.598817   23443 main.go:141] libmachine: (ha-029113-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 00:47:04.598834   23443 main.go:141] libmachine: (ha-029113-m03)       <source file='/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/ha-029113-m03.rawdisk'/>
	I0717 00:47:04.598844   23443 main.go:141] libmachine: (ha-029113-m03)       <target dev='hda' bus='virtio'/>
	I0717 00:47:04.598852   23443 main.go:141] libmachine: (ha-029113-m03)     </disk>
	I0717 00:47:04.598864   23443 main.go:141] libmachine: (ha-029113-m03)     <interface type='network'>
	I0717 00:47:04.598875   23443 main.go:141] libmachine: (ha-029113-m03)       <source network='mk-ha-029113'/>
	I0717 00:47:04.598885   23443 main.go:141] libmachine: (ha-029113-m03)       <model type='virtio'/>
	I0717 00:47:04.598898   23443 main.go:141] libmachine: (ha-029113-m03)     </interface>
	I0717 00:47:04.598914   23443 main.go:141] libmachine: (ha-029113-m03)     <interface type='network'>
	I0717 00:47:04.598925   23443 main.go:141] libmachine: (ha-029113-m03)       <source network='default'/>
	I0717 00:47:04.598932   23443 main.go:141] libmachine: (ha-029113-m03)       <model type='virtio'/>
	I0717 00:47:04.598939   23443 main.go:141] libmachine: (ha-029113-m03)     </interface>
	I0717 00:47:04.598945   23443 main.go:141] libmachine: (ha-029113-m03)     <serial type='pty'>
	I0717 00:47:04.598952   23443 main.go:141] libmachine: (ha-029113-m03)       <target port='0'/>
	I0717 00:47:04.598957   23443 main.go:141] libmachine: (ha-029113-m03)     </serial>
	I0717 00:47:04.598966   23443 main.go:141] libmachine: (ha-029113-m03)     <console type='pty'>
	I0717 00:47:04.598972   23443 main.go:141] libmachine: (ha-029113-m03)       <target type='serial' port='0'/>
	I0717 00:47:04.598977   23443 main.go:141] libmachine: (ha-029113-m03)     </console>
	I0717 00:47:04.598984   23443 main.go:141] libmachine: (ha-029113-m03)     <rng model='virtio'>
	I0717 00:47:04.598992   23443 main.go:141] libmachine: (ha-029113-m03)       <backend model='random'>/dev/random</backend>
	I0717 00:47:04.598997   23443 main.go:141] libmachine: (ha-029113-m03)     </rng>
	I0717 00:47:04.599003   23443 main.go:141] libmachine: (ha-029113-m03)     
	I0717 00:47:04.599008   23443 main.go:141] libmachine: (ha-029113-m03)     
	I0717 00:47:04.599012   23443 main.go:141] libmachine: (ha-029113-m03)   </devices>
	I0717 00:47:04.599018   23443 main.go:141] libmachine: (ha-029113-m03) </domain>
	I0717 00:47:04.599024   23443 main.go:141] libmachine: (ha-029113-m03) 
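	(Editorial note) For readability, here is the libvirt domain definition logged line-by-line above, consolidated into a single block. The content is reconstructed verbatim from those log lines; only the per-line "libmachine: (ha-029113-m03)" prefixes are stripped and indentation is approximate:

	<domain type='kvm'>
	  <name>ha-029113-m03</name>
	  <memory unit='MiB'>2200</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/ha-029113-m03.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-ha-029113'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>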
	I0717 00:47:04.605647   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:d3:7c:43 in network default
	I0717 00:47:04.606213   23443 main.go:141] libmachine: (ha-029113-m03) Ensuring networks are active...
	I0717 00:47:04.606235   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:04.606899   23443 main.go:141] libmachine: (ha-029113-m03) Ensuring network default is active
	I0717 00:47:04.607158   23443 main.go:141] libmachine: (ha-029113-m03) Ensuring network mk-ha-029113 is active
	I0717 00:47:04.607510   23443 main.go:141] libmachine: (ha-029113-m03) Getting domain xml...
	I0717 00:47:04.608189   23443 main.go:141] libmachine: (ha-029113-m03) Creating domain...
	I0717 00:47:05.845798   23443 main.go:141] libmachine: (ha-029113-m03) Waiting to get IP...
	I0717 00:47:05.846661   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:05.847143   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find current IP address of domain ha-029113-m03 in network mk-ha-029113
	I0717 00:47:05.847176   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:05.847123   24534 retry.go:31] will retry after 298.775965ms: waiting for machine to come up
	I0717 00:47:06.147576   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:06.148074   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find current IP address of domain ha-029113-m03 in network mk-ha-029113
	I0717 00:47:06.148100   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:06.148030   24534 retry.go:31] will retry after 321.272545ms: waiting for machine to come up
	I0717 00:47:06.470416   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:06.470932   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find current IP address of domain ha-029113-m03 in network mk-ha-029113
	I0717 00:47:06.470967   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:06.470880   24534 retry.go:31] will retry after 313.273746ms: waiting for machine to come up
	I0717 00:47:06.785183   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:06.785593   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find current IP address of domain ha-029113-m03 in network mk-ha-029113
	I0717 00:47:06.785618   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:06.785553   24534 retry.go:31] will retry after 599.715441ms: waiting for machine to come up
	I0717 00:47:07.387438   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:07.387895   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find current IP address of domain ha-029113-m03 in network mk-ha-029113
	I0717 00:47:07.387922   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:07.387828   24534 retry.go:31] will retry after 617.925829ms: waiting for machine to come up
	I0717 00:47:08.007558   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:08.008055   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find current IP address of domain ha-029113-m03 in network mk-ha-029113
	I0717 00:47:08.008085   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:08.008016   24534 retry.go:31] will retry after 732.559545ms: waiting for machine to come up
	I0717 00:47:08.742239   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:08.742735   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find current IP address of domain ha-029113-m03 in network mk-ha-029113
	I0717 00:47:08.742763   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:08.742690   24534 retry.go:31] will retry after 953.977069ms: waiting for machine to come up
	I0717 00:47:09.697917   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:09.698323   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find current IP address of domain ha-029113-m03 in network mk-ha-029113
	I0717 00:47:09.698349   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:09.698272   24534 retry.go:31] will retry after 956.736439ms: waiting for machine to come up
	I0717 00:47:10.656643   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:10.657148   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find current IP address of domain ha-029113-m03 in network mk-ha-029113
	I0717 00:47:10.657182   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:10.657088   24534 retry.go:31] will retry after 1.749286774s: waiting for machine to come up
	I0717 00:47:12.407663   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:12.408103   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find current IP address of domain ha-029113-m03 in network mk-ha-029113
	I0717 00:47:12.408128   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:12.408055   24534 retry.go:31] will retry after 1.683433342s: waiting for machine to come up
	I0717 00:47:14.094008   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:14.094391   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find current IP address of domain ha-029113-m03 in network mk-ha-029113
	I0717 00:47:14.094412   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:14.094367   24534 retry.go:31] will retry after 2.783450641s: waiting for machine to come up
	I0717 00:47:16.879558   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:16.879975   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find current IP address of domain ha-029113-m03 in network mk-ha-029113
	I0717 00:47:16.879998   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:16.879938   24534 retry.go:31] will retry after 2.670963884s: waiting for machine to come up
	I0717 00:47:19.552112   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:19.552483   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find current IP address of domain ha-029113-m03 in network mk-ha-029113
	I0717 00:47:19.552508   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:19.552448   24534 retry.go:31] will retry after 3.996912103s: waiting for machine to come up
	I0717 00:47:23.551675   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:23.552163   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find current IP address of domain ha-029113-m03 in network mk-ha-029113
	I0717 00:47:23.552190   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:23.552121   24534 retry.go:31] will retry after 4.733416289s: waiting for machine to come up
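	(Editorial note) The retry.go lines above poll the network for the new domain's IP, sleeping a growing, jittered interval between attempts (roughly 300ms up to several seconds). A generic sketch of that wait-with-backoff pattern in Go; the lookup callback, growth factor, and cap are illustrative, not libmachine's implementation:

	package machinewait

	import (
		"errors"
		"math/rand"
		"time"
	)

	// waitForIP keeps calling lookup until it returns a non-empty IP or the
	// deadline passes, sleeping a growing, jittered interval between attempts,
	// similar to the "will retry after ..." lines above.
	func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
		delay := 300 * time.Millisecond
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			if ip, err := lookup(); err == nil && ip != "" {
				return ip, nil
			}
			// add up to ~50% jitter, then grow the base delay up to a cap
			sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
			time.Sleep(sleep)
			if delay < 5*time.Second {
				delay = delay * 3 / 2
			}
		}
		return "", errors.New("timed out waiting for machine to get an IP")
	}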
	I0717 00:47:28.290235   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.290702   23443 main.go:141] libmachine: (ha-029113-m03) Found IP for machine: 192.168.39.100
	I0717 00:47:28.290720   23443 main.go:141] libmachine: (ha-029113-m03) Reserving static IP address...
	I0717 00:47:28.290734   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has current primary IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.291086   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find host DHCP lease matching {name: "ha-029113-m03", mac: "52:54:00:30:b5:1d", ip: "192.168.39.100"} in network mk-ha-029113
	I0717 00:47:28.361256   23443 main.go:141] libmachine: (ha-029113-m03) DBG | Getting to WaitForSSH function...
	I0717 00:47:28.361291   23443 main.go:141] libmachine: (ha-029113-m03) Reserved static IP address: 192.168.39.100
	I0717 00:47:28.361309   23443 main.go:141] libmachine: (ha-029113-m03) Waiting for SSH to be available...
	I0717 00:47:28.363907   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.364272   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:minikube Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:28.364291   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.364496   23443 main.go:141] libmachine: (ha-029113-m03) DBG | Using SSH client type: external
	I0717 00:47:28.364543   23443 main.go:141] libmachine: (ha-029113-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/id_rsa (-rw-------)
	I0717 00:47:28.364574   23443 main.go:141] libmachine: (ha-029113-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 00:47:28.364591   23443 main.go:141] libmachine: (ha-029113-m03) DBG | About to run SSH command:
	I0717 00:47:28.364607   23443 main.go:141] libmachine: (ha-029113-m03) DBG | exit 0
	I0717 00:47:28.490532   23443 main.go:141] libmachine: (ha-029113-m03) DBG | SSH cmd err, output: <nil>: 
	I0717 00:47:28.490841   23443 main.go:141] libmachine: (ha-029113-m03) KVM machine creation complete!
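For readers unfamiliar with the retry lines above: libmachine polls the libvirt network's DHCP leases for the new domain's MAC address, sleeping for a growing, jittered interval between attempts until an address appears. A minimal sketch of that wait loop, assuming a hypothetical lookupLeaseIP helper (illustrative only, not minikube's actual code):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP stands in for querying the libvirt network's DHCP leases by
// MAC address; it fails until the guest has actually obtained a lease.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries with a growing, jittered delay, mirroring the
// "will retry after ...: waiting for machine to come up" lines in the log.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if delay < 5*time.Second {
			delay *= 2 // back off, capped at a few seconds
		}
	}
	return "", fmt.Errorf("no DHCP lease for %s within %v", mac, timeout)
}

func main() {
	if ip, err := waitForIP("52:54:00:30:b5:1d", 3*time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}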
	I0717 00:47:28.491108   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetConfigRaw
	I0717 00:47:28.491707   23443 main.go:141] libmachine: (ha-029113-m03) Calling .DriverName
	I0717 00:47:28.491898   23443 main.go:141] libmachine: (ha-029113-m03) Calling .DriverName
	I0717 00:47:28.492104   23443 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 00:47:28.492120   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetState
	I0717 00:47:28.493332   23443 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 00:47:28.493349   23443 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 00:47:28.493363   23443 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 00:47:28.493372   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	I0717 00:47:28.495810   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.496236   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:28.496288   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.496385   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHPort
	I0717 00:47:28.496571   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:28.496733   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:28.496868   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHUsername
	I0717 00:47:28.497033   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:47:28.497243   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 00:47:28.497258   23443 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 00:47:28.601770   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
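The `exit 0` probes above are how libmachine decides SSH is ready: dial the guest, run the trivial command, and treat a clean exit as success. A rough equivalent using the golang.org/x/crypto/ssh package, assuming the key-based auth shown in the log (sshReady is an illustrative name, not minikube's API):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// sshReady dials addr with the given private key and runs `exit 0`;
// a nil error means SSH is reachable and the command exited cleanly.
func sshReady(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}

func main() {
	err := sshReady("192.168.39.100:22", "docker",
		"/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/id_rsa")
	fmt.Println("ssh ready:", err == nil, err)
}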
	I0717 00:47:28.601794   23443 main.go:141] libmachine: Detecting the provisioner...
	I0717 00:47:28.601803   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	I0717 00:47:28.604492   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.604842   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:28.604870   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.605008   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHPort
	I0717 00:47:28.605205   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:28.605349   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:28.605465   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHUsername
	I0717 00:47:28.605623   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:47:28.605786   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 00:47:28.605798   23443 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 00:47:28.711191   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 00:47:28.711241   23443 main.go:141] libmachine: found compatible host: buildroot
	I0717 00:47:28.711248   23443 main.go:141] libmachine: Provisioning with buildroot...
	I0717 00:47:28.711255   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetMachineName
	I0717 00:47:28.711535   23443 buildroot.go:166] provisioning hostname "ha-029113-m03"
	I0717 00:47:28.711564   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetMachineName
	I0717 00:47:28.711760   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	I0717 00:47:28.714290   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.714691   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:28.714727   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.714899   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHPort
	I0717 00:47:28.715064   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:28.715231   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:28.715397   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHUsername
	I0717 00:47:28.715566   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:47:28.715763   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 00:47:28.715781   23443 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-029113-m03 && echo "ha-029113-m03" | sudo tee /etc/hostname
	I0717 00:47:28.834032   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-029113-m03
	
	I0717 00:47:28.834059   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	I0717 00:47:28.836653   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.837041   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:28.837073   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.837227   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHPort
	I0717 00:47:28.837410   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:28.837571   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:28.837717   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHUsername
	I0717 00:47:28.837862   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:47:28.838032   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 00:47:28.838048   23443 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-029113-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-029113-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-029113-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 00:47:28.947084   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:47:28.947117   23443 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 00:47:28.947131   23443 buildroot.go:174] setting up certificates
	I0717 00:47:28.947140   23443 provision.go:84] configureAuth start
	I0717 00:47:28.947149   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetMachineName
	I0717 00:47:28.947410   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetIP
	I0717 00:47:28.949894   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.950247   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:28.950271   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.950391   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	I0717 00:47:28.952445   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.952785   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:28.952811   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.952913   23443 provision.go:143] copyHostCerts
	I0717 00:47:28.952943   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 00:47:28.952982   23443 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 00:47:28.952994   23443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 00:47:28.953074   23443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 00:47:28.953163   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 00:47:28.953187   23443 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 00:47:28.953194   23443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 00:47:28.953233   23443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 00:47:28.953293   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 00:47:28.953315   23443 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 00:47:28.953324   23443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 00:47:28.953356   23443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 00:47:28.953426   23443 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.ha-029113-m03 san=[127.0.0.1 192.168.39.100 ha-029113-m03 localhost minikube]
	I0717 00:47:29.050507   23443 provision.go:177] copyRemoteCerts
	I0717 00:47:29.050585   23443 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:47:29.050613   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	I0717 00:47:29.053185   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.053533   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:29.053557   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.053726   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHPort
	I0717 00:47:29.053901   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:29.054057   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHUsername
	I0717 00:47:29.054204   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/id_rsa Username:docker}
	I0717 00:47:29.138459   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 00:47:29.138522   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 00:47:29.162967   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 00:47:29.163027   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 00:47:29.186653   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 00:47:29.186730   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 00:47:29.209623   23443 provision.go:87] duration metric: took 262.471359ms to configureAuth
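configureAuth above generates a CA-signed server certificate whose SANs cover 127.0.0.1, the node IP, the hostname, localhost and minikube. The sketch below shows how such a certificate can be produced with Go's crypto/x509; it creates a throwaway CA just to stay self-contained (minikube reuses the existing ca.pem/ca-key.pem), error handling is elided, and all names are illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; in minikube the CA already exists under .minikube/certs.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs listed in the log for ha-029113-m03.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "ha-029113-m03", Organization: []string{"jenkins.ha-029113-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-029113-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.100")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Emit the signed server certificate in PEM form (server.pem equivalent).
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}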
	I0717 00:47:29.209654   23443 buildroot.go:189] setting minikube options for container-runtime
	I0717 00:47:29.209857   23443 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:47:29.209928   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	I0717 00:47:29.212618   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.212936   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:29.212963   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.213136   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHPort
	I0717 00:47:29.213327   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:29.213487   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:29.213633   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHUsername
	I0717 00:47:29.213780   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:47:29.213971   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 00:47:29.213993   23443 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 00:47:29.481929   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 00:47:29.481956   23443 main.go:141] libmachine: Checking connection to Docker...
	I0717 00:47:29.481968   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetURL
	I0717 00:47:29.483185   23443 main.go:141] libmachine: (ha-029113-m03) DBG | Using libvirt version 6000000
	I0717 00:47:29.486435   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.486892   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:29.486923   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.487078   23443 main.go:141] libmachine: Docker is up and running!
	I0717 00:47:29.487088   23443 main.go:141] libmachine: Reticulating splines...
	I0717 00:47:29.487094   23443 client.go:171] duration metric: took 25.170910202s to LocalClient.Create
	I0717 00:47:29.487115   23443 start.go:167] duration metric: took 25.170975292s to libmachine.API.Create "ha-029113"
	I0717 00:47:29.487126   23443 start.go:293] postStartSetup for "ha-029113-m03" (driver="kvm2")
	I0717 00:47:29.487139   23443 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 00:47:29.487161   23443 main.go:141] libmachine: (ha-029113-m03) Calling .DriverName
	I0717 00:47:29.487395   23443 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 00:47:29.487431   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	I0717 00:47:29.489957   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.490360   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:29.490385   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.490534   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHPort
	I0717 00:47:29.490730   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:29.490865   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHUsername
	I0717 00:47:29.490995   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/id_rsa Username:docker}
	I0717 00:47:29.577160   23443 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 00:47:29.581443   23443 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 00:47:29.581469   23443 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 00:47:29.581544   23443 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 00:47:29.581652   23443 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 00:47:29.581666   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> /etc/ssl/certs/112592.pem
	I0717 00:47:29.581789   23443 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 00:47:29.591763   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 00:47:29.615917   23443 start.go:296] duration metric: took 128.779151ms for postStartSetup
	I0717 00:47:29.615972   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetConfigRaw
	I0717 00:47:29.616577   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetIP
	I0717 00:47:29.619288   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.619666   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:29.619691   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.619973   23443 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/config.json ...
	I0717 00:47:29.620190   23443 start.go:128] duration metric: took 25.32222776s to createHost
	I0717 00:47:29.620213   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	I0717 00:47:29.622028   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.622319   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:29.622342   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.622518   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHPort
	I0717 00:47:29.622708   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:29.622870   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:29.622999   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHUsername
	I0717 00:47:29.623167   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:47:29.623330   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 00:47:29.623341   23443 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 00:47:29.727385   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721177249.704991971
	
	I0717 00:47:29.727403   23443 fix.go:216] guest clock: 1721177249.704991971
	I0717 00:47:29.727411   23443 fix.go:229] Guest: 2024-07-17 00:47:29.704991971 +0000 UTC Remote: 2024-07-17 00:47:29.620202081 +0000 UTC m=+240.022477234 (delta=84.78989ms)
	I0717 00:47:29.727429   23443 fix.go:200] guest clock delta is within tolerance: 84.78989ms
	I0717 00:47:29.727436   23443 start.go:83] releasing machines lock for "ha-029113-m03", held for 25.429576063s
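The fix.go lines above compare the guest clock (read over SSH via `date +%s.%N`) with the host clock and accept the machine when the delta stays inside a tolerance. A self-contained sketch of that check, reusing the timestamps from this run; parseGuestClock and the one-second tolerance are assumptions for illustration, not minikube's exact values:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` (e.g. "1721177249.704991971")
// into a time.Time with nanosecond precision.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := parts[1]
		if len(frac) > 9 {
			frac = frac[:9] // nanosecond precision is enough
		}
		for len(frac) < 9 {
			frac += "0"
		}
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = time.Second // illustrative threshold
	guest, err := parseGuestClock("1721177249.704991971")
	if err != nil {
		panic(err)
	}
	// Host-side timestamp taken from the same log entry.
	host := time.Date(2024, 7, 17, 0, 47, 29, 620202081, time.UTC)
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance %v\n", delta, tolerance)
	} else {
		fmt.Printf("guest clock skewed by %v; a resync would be needed\n", delta)
	}
}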
	I0717 00:47:29.727468   23443 main.go:141] libmachine: (ha-029113-m03) Calling .DriverName
	I0717 00:47:29.727789   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetIP
	I0717 00:47:29.730318   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.730741   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:29.730768   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.732706   23443 out.go:177] * Found network options:
	I0717 00:47:29.734087   23443 out.go:177]   - NO_PROXY=192.168.39.95,192.168.39.166
	W0717 00:47:29.735301   23443 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 00:47:29.735332   23443 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 00:47:29.735348   23443 main.go:141] libmachine: (ha-029113-m03) Calling .DriverName
	I0717 00:47:29.735851   23443 main.go:141] libmachine: (ha-029113-m03) Calling .DriverName
	I0717 00:47:29.736040   23443 main.go:141] libmachine: (ha-029113-m03) Calling .DriverName
	I0717 00:47:29.736114   23443 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 00:47:29.736153   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	W0717 00:47:29.736251   23443 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 00:47:29.736274   23443 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 00:47:29.736336   23443 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 00:47:29.736352   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	I0717 00:47:29.738604   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.738817   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.739046   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:29.739070   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.739188   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHPort
	I0717 00:47:29.739311   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:29.739333   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.739376   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:29.739498   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHPort
	I0717 00:47:29.739580   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHUsername
	I0717 00:47:29.739647   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:29.739726   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/id_rsa Username:docker}
	I0717 00:47:29.739770   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHUsername
	I0717 00:47:29.739875   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/id_rsa Username:docker}
	I0717 00:47:29.970998   23443 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 00:47:29.977841   23443 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 00:47:29.977909   23443 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:47:29.994601   23443 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 00:47:29.994622   23443 start.go:495] detecting cgroup driver to use...
	I0717 00:47:29.994700   23443 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 00:47:30.011004   23443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 00:47:30.024819   23443 docker.go:217] disabling cri-docker service (if available) ...
	I0717 00:47:30.024876   23443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 00:47:30.038454   23443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 00:47:30.052342   23443 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 00:47:30.168997   23443 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 00:47:30.336485   23443 docker.go:233] disabling docker service ...
	I0717 00:47:30.336553   23443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 00:47:30.351582   23443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 00:47:30.364131   23443 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 00:47:30.484186   23443 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 00:47:30.608256   23443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 00:47:30.622449   23443 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 00:47:30.641842   23443 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 00:47:30.641903   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:47:30.652041   23443 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 00:47:30.652098   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:47:30.661887   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:47:30.671785   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:47:30.681613   23443 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 00:47:30.692189   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:47:30.702117   23443 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:47:30.718565   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:47:30.728992   23443 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 00:47:30.740257   23443 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 00:47:30.740319   23443 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 00:47:30.754046   23443 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 00:47:30.766384   23443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:47:30.887467   23443 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 00:47:31.028626   23443 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 00:47:31.028709   23443 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 00:47:31.034326   23443 start.go:563] Will wait 60s for crictl version
	I0717 00:47:31.034380   23443 ssh_runner.go:195] Run: which crictl
	I0717 00:47:31.038352   23443 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 00:47:31.081500   23443 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 00:47:31.081582   23443 ssh_runner.go:195] Run: crio --version
	I0717 00:47:31.112415   23443 ssh_runner.go:195] Run: crio --version
	I0717 00:47:31.143120   23443 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 00:47:31.144618   23443 out.go:177]   - env NO_PROXY=192.168.39.95
	I0717 00:47:31.146006   23443 out.go:177]   - env NO_PROXY=192.168.39.95,192.168.39.166
	I0717 00:47:31.147439   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetIP
	I0717 00:47:31.149878   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:31.150222   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:31.150242   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:31.150430   23443 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 00:47:31.155114   23443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:47:31.167558   23443 mustload.go:65] Loading cluster: ha-029113
	I0717 00:47:31.167744   23443 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:47:31.167996   23443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:47:31.168025   23443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:47:31.183282   23443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44999
	I0717 00:47:31.183707   23443 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:47:31.184126   23443 main.go:141] libmachine: Using API Version  1
	I0717 00:47:31.184140   23443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:47:31.184450   23443 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:47:31.184627   23443 main.go:141] libmachine: (ha-029113) Calling .GetState
	I0717 00:47:31.186188   23443 host.go:66] Checking if "ha-029113" exists ...
	I0717 00:47:31.186503   23443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:47:31.186534   23443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:47:31.200721   23443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40861
	I0717 00:47:31.201125   23443 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:47:31.201501   23443 main.go:141] libmachine: Using API Version  1
	I0717 00:47:31.201522   23443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:47:31.201800   23443 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:47:31.201960   23443 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:47:31.202110   23443 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113 for IP: 192.168.39.100
	I0717 00:47:31.202122   23443 certs.go:194] generating shared ca certs ...
	I0717 00:47:31.202137   23443 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:47:31.202283   23443 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 00:47:31.202327   23443 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 00:47:31.202339   23443 certs.go:256] generating profile certs ...
	I0717 00:47:31.202432   23443 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.key
	I0717 00:47:31.202464   23443 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.365995e4
	I0717 00:47:31.202483   23443 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.365995e4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.95 192.168.39.166 192.168.39.100 192.168.39.254]
	I0717 00:47:31.392167   23443 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.365995e4 ...
	I0717 00:47:31.392197   23443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.365995e4: {Name:mk26a48a79f686a9e1a613e3ea8d71075ef49720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:47:31.392355   23443 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.365995e4 ...
	I0717 00:47:31.392368   23443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.365995e4: {Name:mk416a12e41b00c2f47831d1494d44e481bc26ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:47:31.392446   23443 certs.go:381] copying /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.365995e4 -> /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt
	I0717 00:47:31.392577   23443 certs.go:385] copying /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.365995e4 -> /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key
	I0717 00:47:31.392696   23443 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.key
	I0717 00:47:31.392710   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 00:47:31.392722   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 00:47:31.392740   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 00:47:31.392753   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 00:47:31.392764   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 00:47:31.392776   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 00:47:31.392789   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 00:47:31.392800   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 00:47:31.392843   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 00:47:31.392868   23443 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 00:47:31.392877   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 00:47:31.392898   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 00:47:31.392918   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 00:47:31.392938   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 00:47:31.392972   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 00:47:31.392995   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem -> /usr/share/ca-certificates/11259.pem
	I0717 00:47:31.393009   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> /usr/share/ca-certificates/112592.pem
	I0717 00:47:31.393021   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:47:31.393047   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:47:31.395968   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:47:31.396353   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:47:31.396374   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:47:31.396544   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:47:31.396748   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:47:31.396892   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:47:31.397011   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:47:31.467015   23443 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0717 00:47:31.472607   23443 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0717 00:47:31.484769   23443 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0717 00:47:31.488879   23443 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0717 00:47:31.500546   23443 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0717 00:47:31.504755   23443 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0717 00:47:31.521100   23443 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0717 00:47:31.529522   23443 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0717 00:47:31.544067   23443 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0717 00:47:31.548468   23443 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0717 00:47:31.559844   23443 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0717 00:47:31.564482   23443 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0717 00:47:31.575658   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 00:47:31.603663   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 00:47:31.629107   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 00:47:31.652484   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 00:47:31.677959   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0717 00:47:31.700927   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 00:47:31.728471   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 00:47:31.752347   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 00:47:31.783749   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 00:47:31.809217   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 00:47:31.833961   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 00:47:31.856783   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0717 00:47:31.872709   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0717 00:47:31.888653   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0717 00:47:31.904254   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0717 00:47:31.920382   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0717 00:47:31.937130   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0717 00:47:31.956467   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0717 00:47:31.975090   23443 ssh_runner.go:195] Run: openssl version
	I0717 00:47:31.981626   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 00:47:31.993439   23443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 00:47:31.997968   23443 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 00:47:31.998015   23443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 00:47:32.003758   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 00:47:32.014696   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 00:47:32.026310   23443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 00:47:32.030909   23443 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 00:47:32.030964   23443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 00:47:32.036404   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 00:47:32.047633   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 00:47:32.059069   23443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:47:32.063777   23443 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:47:32.063824   23443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:47:32.069764   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 00:47:32.080802   23443 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 00:47:32.084841   23443 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 00:47:32.084892   23443 kubeadm.go:934] updating node {m03 192.168.39.100 8443 v1.30.2 crio true true} ...
	I0717 00:47:32.084991   23443 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-029113-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-029113 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 00:47:32.085021   23443 kube-vip.go:115] generating kube-vip config ...
	I0717 00:47:32.085058   23443 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 00:47:32.102888   23443 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 00:47:32.102959   23443 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
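Note: the generated static pod above announces the control-plane VIP 192.168.39.254 on eth0 via ARP, elects a leader among the control planes through the plndr-cp-lock lease, and load-balances API traffic on port 8443. A rough Go sketch of rendering such a manifest with text/template, using a hypothetical Config struct rather than minikube's real kube-vip.go generator (the sketch omits the admin.conf volume mount and other fields shown above):

// Sketch only, assuming a hypothetical Config struct and a trimmed-down template.
package main

import (
	"os"
	"text/template"
)

type Config struct {
	VIP       string // control-plane virtual IP, e.g. 192.168.39.254
	Interface string // NIC the VIP is announced on, e.g. eth0
	Port      string // API server port fronted by the VIP
}

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - {name: vip_arp, value: "true"}
    - {name: vip_interface, value: {{.Interface}}}
    - {name: address, value: {{.VIP}}}
    - {name: port, value: "{{.Port}}"}
    - {name: cp_enable, value: "true"}
    - {name: lb_enable, value: "true"}
    - {name: lb_port, value: "{{.Port}}"}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	// The rendered output would be written to /etc/kubernetes/manifests/kube-vip.yaml.
	_ = t.Execute(os.Stdout, Config{VIP: "192.168.39.254", Interface: "eth0", Port: "8443"})
}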
	I0717 00:47:32.103026   23443 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 00:47:32.113632   23443 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0717 00:47:32.113689   23443 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0717 00:47:32.123871   23443 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256
	I0717 00:47:32.123871   23443 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256
	I0717 00:47:32.123897   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 00:47:32.123886   23443 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0717 00:47:32.123931   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 00:47:32.123940   23443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:47:32.124004   23443 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 00:47:32.124006   23443 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 00:47:32.144573   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 00:47:32.144592   23443 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0717 00:47:32.144615   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0717 00:47:32.144670   23443 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 00:47:32.144668   23443 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0717 00:47:32.144728   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0717 00:47:32.176688   23443 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0717 00:47:32.176741   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
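Note: the binaries are fetched from dl.k8s.io and verified against the published .sha256 files referenced in the checksum= URLs above. A minimal Go sketch of that download-and-verify step (assumed flow, not minikube's binary.go implementation):

// Sketch only: download a release binary and verify it against the published
// SHA-256 checksum, as the ?checksum=file:...sha256 URLs above imply.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm"
	bin, err := fetch(base)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	sum, err := fetch(base + ".sha256") // the .sha256 file holds only the hex digest
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != strings.TrimSpace(string(sum)) {
		fmt.Fprintln(os.Stderr, "checksum mismatch")
		os.Exit(1)
	}
	// The verified binary can then be cached and copied to
	// /var/lib/minikube/binaries/v1.30.2/ on the node, as the scp lines above show.
	_ = os.WriteFile("kubeadm", bin, 0o755)
}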
	I0717 00:47:33.056190   23443 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0717 00:47:33.065912   23443 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0717 00:47:33.082651   23443 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 00:47:33.102312   23443 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0717 00:47:33.121683   23443 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 00:47:33.125753   23443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:47:33.138778   23443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:47:33.274974   23443 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:47:33.293487   23443 host.go:66] Checking if "ha-029113" exists ...
	I0717 00:47:33.293852   23443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:47:33.293891   23443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:47:33.311122   23443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38121
	I0717 00:47:33.311526   23443 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:47:33.311975   23443 main.go:141] libmachine: Using API Version  1
	I0717 00:47:33.312002   23443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:47:33.312300   23443 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:47:33.312467   23443 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:47:33.312581   23443 start.go:317] joinCluster: &{Name:ha-029113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-029113 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:47:33.312738   23443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0717 00:47:33.312757   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:47:33.315444   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:47:33.315846   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:47:33.315876   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:47:33.316004   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:47:33.316178   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:47:33.316334   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:47:33.316464   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:47:33.479247   23443 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:47:33.479311   23443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mff843.ivzjp3mgt4opug4n --discovery-token-ca-cert-hash sha256:c106fa53fe39228066a4d6d0a3d1523262a277fcc4b4de2aef480ed92843f134 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-029113-m03 --control-plane --apiserver-advertise-address=192.168.39.100 --apiserver-bind-port=8443"
	I0717 00:47:56.957281   23443 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mff843.ivzjp3mgt4opug4n --discovery-token-ca-cert-hash sha256:c106fa53fe39228066a4d6d0a3d1523262a277fcc4b4de2aef480ed92843f134 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-029113-m03 --control-plane --apiserver-advertise-address=192.168.39.100 --apiserver-bind-port=8443": (23.477950581s)
	I0717 00:47:56.957310   23443 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0717 00:47:57.410535   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-029113-m03 minikube.k8s.io/updated_at=2024_07_17T00_47_57_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185 minikube.k8s.io/name=ha-029113 minikube.k8s.io/primary=false
	I0717 00:47:57.567951   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-029113-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0717 00:47:57.677442   23443 start.go:319] duration metric: took 24.364856951s to joinCluster
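Note: after kubeadm join succeeds, the new control plane is labeled and its control-plane:NoSchedule taint removed via the kubectl calls above. A hedged client-go sketch of the equivalent label update (kubeconfig path and label values are examples only, not minikube's code):

// Sketch only: apply minikube.k8s.io-style labels to the joined node with client-go.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	ctx := context.Background()
	node, err := client.CoreV1().Nodes().Get(ctx, "ha-029113-m03", metav1.GetOptions{})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if node.Labels == nil {
		node.Labels = map[string]string{}
	}
	// Same effect as the "kubectl label --overwrite nodes ..." invocation above.
	node.Labels["minikube.k8s.io/name"] = "ha-029113"
	node.Labels["minikube.k8s.io/primary"] = "false"
	if _, err := client.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}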
	I0717 00:47:57.677512   23443 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:47:57.677937   23443 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:47:57.679198   23443 out.go:177] * Verifying Kubernetes components...
	I0717 00:47:57.680680   23443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:47:57.902672   23443 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:47:57.932057   23443 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 00:47:57.932409   23443 kapi.go:59] client config for ha-029113: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.crt", KeyFile:"/home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.key", CAFile:"/home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d01f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0717 00:47:57.932505   23443 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.95:8443
	I0717 00:47:57.932785   23443 node_ready.go:35] waiting up to 6m0s for node "ha-029113-m03" to be "Ready" ...
	I0717 00:47:57.932873   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:47:57.932884   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:57.932894   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:57.932904   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:57.936371   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:58.433543   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:47:58.433563   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:58.433572   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:58.433577   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:58.437308   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:58.933289   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:47:58.933308   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:58.933316   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:58.933320   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:58.936300   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:47:59.433871   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:47:59.433895   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:59.433906   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:59.433912   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:59.437298   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:59.933021   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:47:59.933049   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:59.933060   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:59.933066   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:59.936551   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:59.937335   23443 node_ready.go:53] node "ha-029113-m03" has status "Ready":"False"
	I0717 00:48:00.433793   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:00.433814   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:00.433822   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:00.433827   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:00.436719   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:00.932935   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:00.932955   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:00.932962   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:00.932968   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:00.936258   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:01.433143   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:01.433162   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:01.433170   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:01.433176   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:01.436031   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:01.933567   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:01.933591   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:01.933603   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:01.933609   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:01.936349   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:02.433663   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:02.433684   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:02.433691   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:02.433697   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:02.437496   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:02.438053   23443 node_ready.go:53] node "ha-029113-m03" has status "Ready":"False"
	I0717 00:48:02.933029   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:02.933049   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:02.933057   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:02.933062   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:02.936712   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:03.433955   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:03.433976   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:03.433991   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:03.433995   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:03.496498   23443 round_trippers.go:574] Response Status: 200 OK in 62 milliseconds
	I0717 00:48:03.933003   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:03.933019   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:03.933026   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:03.933030   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:03.941564   23443 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0717 00:48:04.433937   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:04.433965   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:04.433977   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:04.433989   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:04.436860   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:04.933678   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:04.933698   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:04.933705   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:04.933710   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:04.936447   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:04.936962   23443 node_ready.go:53] node "ha-029113-m03" has status "Ready":"False"
	I0717 00:48:05.433337   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:05.433357   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:05.433365   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:05.433369   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:05.436408   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:05.933306   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:05.933326   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:05.933334   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:05.933337   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:05.936312   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:06.433048   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:06.433073   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:06.433084   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:06.433088   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:06.436215   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:06.933678   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:06.933701   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:06.933709   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:06.933715   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:06.936895   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:06.937759   23443 node_ready.go:53] node "ha-029113-m03" has status "Ready":"False"
	I0717 00:48:07.433553   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:07.433588   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:07.433598   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:07.433603   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:07.436915   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:07.934007   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:07.934032   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:07.934043   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:07.934048   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:07.936894   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:08.433276   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:08.433306   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:08.433317   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:08.433322   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:08.436327   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:08.932987   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:08.933013   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:08.933025   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:08.933030   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:08.936170   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:09.433451   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:09.433471   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:09.433479   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:09.433482   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:09.436367   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:09.436924   23443 node_ready.go:53] node "ha-029113-m03" has status "Ready":"False"
	I0717 00:48:09.933014   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:09.933035   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:09.933042   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:09.933046   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:09.936948   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:10.433011   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:10.433044   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:10.433052   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:10.433057   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:10.435847   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:10.933063   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:10.933083   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:10.933090   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:10.933095   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:10.936799   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:11.433940   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:11.433965   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:11.433974   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:11.433984   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:11.437030   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:11.437574   23443 node_ready.go:53] node "ha-029113-m03" has status "Ready":"False"
	I0717 00:48:11.933479   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:11.933498   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:11.933507   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:11.933511   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:11.936963   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:12.433677   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:12.433698   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:12.433706   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:12.433708   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:12.436924   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:12.933778   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:12.933800   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:12.933806   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:12.933811   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:12.936870   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:13.433423   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:13.433448   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:13.433458   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:13.433463   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:13.436764   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:13.437713   23443 node_ready.go:53] node "ha-029113-m03" has status "Ready":"False"
	I0717 00:48:13.932967   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:13.932994   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:13.933002   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:13.933005   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:13.935973   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:14.433679   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:14.433706   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:14.433718   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:14.433724   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:14.436962   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:14.933360   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:14.933382   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:14.933393   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:14.933400   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:14.936409   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:15.433574   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:15.433595   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:15.433602   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:15.433607   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:15.436779   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:15.933878   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:15.933903   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:15.933913   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:15.933927   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:15.937273   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:15.938065   23443 node_ready.go:49] node "ha-029113-m03" has status "Ready":"True"
	I0717 00:48:15.938081   23443 node_ready.go:38] duration metric: took 18.00527454s for node "ha-029113-m03" to be "Ready" ...
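Note: the loop above polls GET /api/v1/nodes/ha-029113-m03 roughly every 500ms until the NodeReady condition reports True, which took about 18s here. A minimal client-go sketch of the same wait, assuming the kubeconfig comes from the environment (this is not minikube's node_ready.go code):

// Sketch only: poll a node until its NodeReady condition is True, mirroring the
// ~500ms GET loop logged above. Node name and timeout are taken from this run.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "ha-029113-m03", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient errors
			}
			return nodeReady(node), nil
		})
	if err != nil {
		fmt.Fprintln(os.Stderr, "node never became Ready:", err)
		os.Exit(1)
	}
}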
	I0717 00:48:15.938088   23443 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 00:48:15.938152   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods
	I0717 00:48:15.938163   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:15.938170   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:15.938174   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:15.946231   23443 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0717 00:48:15.953641   23443 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-62m67" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:15.953724   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-62m67
	I0717 00:48:15.953740   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:15.953749   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:15.953756   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:15.956529   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:15.957165   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:48:15.957180   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:15.957188   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:15.957192   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:15.959884   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:15.960571   23443 pod_ready.go:92] pod "coredns-7db6d8ff4d-62m67" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:15.960588   23443 pod_ready.go:81] duration metric: took 6.922784ms for pod "coredns-7db6d8ff4d-62m67" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:15.960597   23443 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xdlls" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:15.960646   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xdlls
	I0717 00:48:15.960652   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:15.960660   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:15.960667   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:15.963898   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:15.964669   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:48:15.964687   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:15.964696   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:15.964700   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:15.967035   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:15.967677   23443 pod_ready.go:92] pod "coredns-7db6d8ff4d-xdlls" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:15.967697   23443 pod_ready.go:81] duration metric: took 7.091028ms for pod "coredns-7db6d8ff4d-xdlls" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:15.967709   23443 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:15.967769   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/etcd-ha-029113
	I0717 00:48:15.967779   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:15.967786   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:15.967790   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:15.970615   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:15.971077   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:48:15.971090   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:15.971095   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:15.971099   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:15.973869   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:15.974732   23443 pod_ready.go:92] pod "etcd-ha-029113" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:15.974748   23443 pod_ready.go:81] duration metric: took 7.032362ms for pod "etcd-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:15.974757   23443 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:15.974806   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/etcd-ha-029113-m02
	I0717 00:48:15.974813   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:15.974820   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:15.974824   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:15.978355   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:15.979508   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:48:15.979523   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:15.979533   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:15.979539   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:15.983040   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:15.983610   23443 pod_ready.go:92] pod "etcd-ha-029113-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:15.983628   23443 pod_ready.go:81] duration metric: took 8.864021ms for pod "etcd-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:15.983641   23443 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-029113-m03" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:16.134856   23443 request.go:629] Waited for 151.156525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/etcd-ha-029113-m03
	I0717 00:48:16.134906   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/etcd-ha-029113-m03
	I0717 00:48:16.134910   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:16.134918   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:16.134922   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:16.138241   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:16.334752   23443 request.go:629] Waited for 195.779503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:16.334831   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:16.334841   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:16.334852   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:16.334861   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:16.338029   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:16.338660   23443 pod_ready.go:92] pod "etcd-ha-029113-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:16.338700   23443 pod_ready.go:81] duration metric: took 355.052268ms for pod "etcd-ha-029113-m03" in "kube-system" namespace to be "Ready" ...
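Note: the "Waited ... due to client-side throttling" lines above come from client-go's token-bucket rate limiter (QPS and Burst on rest.Config, which this run leaves at their defaults of 5 and 10), not from server-side priority and fairness. A small sketch showing where those knobs live; the raised values are examples only:

// Sketch only: the throttling waits above are produced by client-go's rate limiter.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19264-3908/kubeconfig")
	if err != nil {
		panic(err)
	}
	// With QPS/Burst left at 0, client-go falls back to 5 QPS with a burst of 10,
	// which is why bursts of pod/node GETs queue for ~150-200ms at a time above.
	cfg.QPS = 50
	cfg.Burst = 100
	client := kubernetes.NewForConfigOrDie(cfg)
	_ = client // subsequent requests share the higher rate limit
}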
	I0717 00:48:16.338727   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:16.534754   23443 request.go:629] Waited for 195.96079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-029113
	I0717 00:48:16.534812   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-029113
	I0717 00:48:16.534827   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:16.534837   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:16.534841   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:16.538043   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:16.734454   23443 request.go:629] Waited for 195.445196ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:48:16.734510   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:48:16.734515   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:16.734524   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:16.734527   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:16.737294   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:16.737771   23443 pod_ready.go:92] pod "kube-apiserver-ha-029113" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:16.737788   23443 pod_ready.go:81] duration metric: took 399.053607ms for pod "kube-apiserver-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:16.737799   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:16.934908   23443 request.go:629] Waited for 197.024735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-029113-m02
	I0717 00:48:16.934979   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-029113-m02
	I0717 00:48:16.934990   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:16.935001   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:16.935013   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:16.938584   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:17.134573   23443 request.go:629] Waited for 195.314524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:48:17.134637   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:48:17.134643   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:17.134650   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:17.134653   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:17.137787   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:17.138238   23443 pod_ready.go:92] pod "kube-apiserver-ha-029113-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:17.138256   23443 pod_ready.go:81] duration metric: took 400.449501ms for pod "kube-apiserver-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:17.138264   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-029113-m03" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:17.334784   23443 request.go:629] Waited for 196.459661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-029113-m03
	I0717 00:48:17.334846   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-029113-m03
	I0717 00:48:17.334853   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:17.334865   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:17.334873   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:17.338260   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:17.534348   23443 request.go:629] Waited for 195.283689ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:17.534394   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:17.534399   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:17.534406   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:17.534410   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:17.538851   23443 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:48:17.540642   23443 pod_ready.go:92] pod "kube-apiserver-ha-029113-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:17.540666   23443 pod_ready.go:81] duration metric: took 402.39493ms for pod "kube-apiserver-ha-029113-m03" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:17.540680   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:17.733930   23443 request.go:629] Waited for 193.162359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-029113
	I0717 00:48:17.733981   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-029113
	I0717 00:48:17.733986   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:17.733995   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:17.734000   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:17.737429   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:17.934571   23443 request.go:629] Waited for 196.349148ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:48:17.934634   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:48:17.934642   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:17.934653   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:17.934660   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:17.937910   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:17.938631   23443 pod_ready.go:92] pod "kube-controller-manager-ha-029113" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:17.938651   23443 pod_ready.go:81] duration metric: took 397.960924ms for pod "kube-controller-manager-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:17.938663   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:18.134744   23443 request.go:629] Waited for 196.013809ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-029113-m02
	I0717 00:48:18.134819   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-029113-m02
	I0717 00:48:18.134828   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:18.134836   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:18.134843   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:18.137845   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:18.334771   23443 request.go:629] Waited for 196.387557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:48:18.334820   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:48:18.334825   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:18.334833   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:18.334836   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:18.337818   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:18.338412   23443 pod_ready.go:92] pod "kube-controller-manager-ha-029113-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:18.338427   23443 pod_ready.go:81] duration metric: took 399.756138ms for pod "kube-controller-manager-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:18.338436   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-029113-m03" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:18.534522   23443 request.go:629] Waited for 196.008108ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-029113-m03
	I0717 00:48:18.534608   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-029113-m03
	I0717 00:48:18.534619   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:18.534630   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:18.534641   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:18.538673   23443 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:48:18.734948   23443 request.go:629] Waited for 195.373322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:18.735011   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:18.735016   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:18.735023   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:18.735028   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:18.738034   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:18.738747   23443 pod_ready.go:92] pod "kube-controller-manager-ha-029113-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:18.738767   23443 pod_ready.go:81] duration metric: took 400.324386ms for pod "kube-controller-manager-ha-029113-m03" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:18.738781   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2wz5p" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:18.934691   23443 request.go:629] Waited for 195.853895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2wz5p
	I0717 00:48:18.934774   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2wz5p
	I0717 00:48:18.934785   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:18.934795   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:18.934801   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:18.940740   23443 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 00:48:19.134703   23443 request.go:629] Waited for 193.293844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:48:19.134771   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:48:19.134777   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:19.134789   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:19.134797   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:19.137684   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:19.138254   23443 pod_ready.go:92] pod "kube-proxy-2wz5p" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:19.138271   23443 pod_ready.go:81] duration metric: took 399.483256ms for pod "kube-proxy-2wz5p" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:19.138285   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hg2kp" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:19.334778   23443 request.go:629] Waited for 196.413518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg2kp
	I0717 00:48:19.334827   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg2kp
	I0717 00:48:19.334834   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:19.334845   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:19.334852   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:19.337998   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:19.534911   23443 request.go:629] Waited for 196.20071ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:48:19.534980   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:48:19.534993   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:19.535001   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:19.535006   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:19.538570   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:19.539013   23443 pod_ready.go:92] pod "kube-proxy-hg2kp" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:19.539030   23443 pod_ready.go:81] duration metric: took 400.733974ms for pod "kube-proxy-hg2kp" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:19.539042   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pfdt9" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:19.734427   23443 request.go:629] Waited for 195.31365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pfdt9
	I0717 00:48:19.734520   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pfdt9
	I0717 00:48:19.734530   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:19.734541   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:19.734565   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:19.737680   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:19.934615   23443 request.go:629] Waited for 196.257151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:19.934694   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:19.934703   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:19.934710   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:19.934717   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:19.937593   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:19.938204   23443 pod_ready.go:92] pod "kube-proxy-pfdt9" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:19.938223   23443 pod_ready.go:81] duration metric: took 399.17404ms for pod "kube-proxy-pfdt9" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:19.938234   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:20.134299   23443 request.go:629] Waited for 196.005753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-029113
	I0717 00:48:20.134363   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-029113
	I0717 00:48:20.134370   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:20.134379   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:20.134390   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:20.137348   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:20.334243   23443 request.go:629] Waited for 196.346653ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:48:20.334302   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:48:20.334306   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:20.334313   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:20.334319   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:20.339195   23443 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:48:20.339879   23443 pod_ready.go:92] pod "kube-scheduler-ha-029113" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:20.339896   23443 pod_ready.go:81] duration metric: took 401.652936ms for pod "kube-scheduler-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:20.339909   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:20.533935   23443 request.go:629] Waited for 193.946219ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-029113-m02
	I0717 00:48:20.533986   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-029113-m02
	I0717 00:48:20.533993   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:20.534003   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:20.534008   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:20.537862   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:20.734575   23443 request.go:629] Waited for 196.172224ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:48:20.734623   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:48:20.734628   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:20.734635   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:20.734640   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:20.737654   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:20.738134   23443 pod_ready.go:92] pod "kube-scheduler-ha-029113-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:20.738149   23443 pod_ready.go:81] duration metric: took 398.233343ms for pod "kube-scheduler-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:20.738158   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-029113-m03" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:20.934255   23443 request.go:629] Waited for 196.021247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-029113-m03
	I0717 00:48:20.934308   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-029113-m03
	I0717 00:48:20.934313   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:20.934321   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:20.934325   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:20.937565   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:21.134455   23443 request.go:629] Waited for 196.219116ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:21.134502   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:21.134507   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:21.134514   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:21.134517   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:21.137844   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:21.138357   23443 pod_ready.go:92] pod "kube-scheduler-ha-029113-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:21.138372   23443 pod_ready.go:81] duration metric: took 400.207669ms for pod "kube-scheduler-ha-029113-m03" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:21.138383   23443 pod_ready.go:38] duration metric: took 5.200283607s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 00:48:21.138400   23443 api_server.go:52] waiting for apiserver process to appear ...
	I0717 00:48:21.138452   23443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:48:21.154541   23443 api_server.go:72] duration metric: took 23.476994283s to wait for apiserver process to appear ...
	I0717 00:48:21.154580   23443 api_server.go:88] waiting for apiserver healthz status ...
	I0717 00:48:21.154600   23443 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8443/healthz ...
	I0717 00:48:21.160502   23443 api_server.go:279] https://192.168.39.95:8443/healthz returned 200:
	ok
	I0717 00:48:21.160577   23443 round_trippers.go:463] GET https://192.168.39.95:8443/version
	I0717 00:48:21.160589   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:21.160599   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:21.160608   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:21.161473   23443 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 00:48:21.161539   23443 api_server.go:141] control plane version: v1.30.2
	I0717 00:48:21.161556   23443 api_server.go:131] duration metric: took 6.970001ms to wait for apiserver health ...
	I0717 00:48:21.161563   23443 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 00:48:21.334734   23443 request.go:629] Waited for 173.026013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods
	I0717 00:48:21.334795   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods
	I0717 00:48:21.334803   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:21.334813   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:21.334823   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:21.341100   23443 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:48:21.348589   23443 system_pods.go:59] 24 kube-system pods found
	I0717 00:48:21.348614   23443 system_pods.go:61] "coredns-7db6d8ff4d-62m67" [5029f9dc-6792-44d9-9296-ec5ab6d72274] Running
	I0717 00:48:21.348620   23443 system_pods.go:61] "coredns-7db6d8ff4d-xdlls" [4344b971-b979-42f8-8fa8-01f2d64bb51a] Running
	I0717 00:48:21.348625   23443 system_pods.go:61] "etcd-ha-029113" [10122569-9dc1-4680-8d11-aa7d4c719cec] Running
	I0717 00:48:21.348632   23443 system_pods.go:61] "etcd-ha-029113-m02" [a0f65752-ddcf-493d-bc0b-e4cb2ac8d635] Running
	I0717 00:48:21.348637   23443 system_pods.go:61] "etcd-ha-029113-m03" [9afc47a1-ab83-4976-bd8b-d40aa6360f2d] Running
	I0717 00:48:21.348643   23443 system_pods.go:61] "kindnet-8xg7d" [a612c634-49ef-4357-9b36-f5cc6604bdd7] Running
	I0717 00:48:21.348648   23443 system_pods.go:61] "kindnet-k2jgh" [8a8e5ffe-9541-4736-9584-b49727b4753e] Running
	I0717 00:48:21.348654   23443 system_pods.go:61] "kindnet-k7vzq" [8198e4a4-080e-482a-a0b3-58e796bdd230] Running
	I0717 00:48:21.348659   23443 system_pods.go:61] "kube-apiserver-ha-029113" [167d337c-6406-4f80-8a60-aebdca26066b] Running
	I0717 00:48:21.348668   23443 system_pods.go:61] "kube-apiserver-ha-029113-m02" [d64aa0f0-e41f-4a5e-b4fe-48665061673e] Running
	I0717 00:48:21.348673   23443 system_pods.go:61] "kube-apiserver-ha-029113-m03" [0b4ea48e-60dc-44ed-8d5d-1159f866bc24] Running
	I0717 00:48:21.348684   23443 system_pods.go:61] "kube-controller-manager-ha-029113" [8f1ee225-f6a3-4943-976a-9cc14607a654] Running
	I0717 00:48:21.348692   23443 system_pods.go:61] "kube-controller-manager-ha-029113-m02" [d180826c-b18e-49a7-8a1a-576c1a64fd51] Running
	I0717 00:48:21.348698   23443 system_pods.go:61] "kube-controller-manager-ha-029113-m03" [993c477b-441b-46a1-85b8-c8ba74df2f80] Running
	I0717 00:48:21.348706   23443 system_pods.go:61] "kube-proxy-2wz5p" [285b947d-fa11-40fb-befa-1fa4451787d4] Running
	I0717 00:48:21.348712   23443 system_pods.go:61] "kube-proxy-hg2kp" [db9243f4-bcc0-406a-a8f2-ccdbc00f6341] Running
	I0717 00:48:21.348719   23443 system_pods.go:61] "kube-proxy-pfdt9" [d5f82192-14de-46c6-b3f4-38d34b9e828a] Running
	I0717 00:48:21.348724   23443 system_pods.go:61] "kube-scheduler-ha-029113" [e3b5629d-5647-437e-a87c-0c91f2cd26d7] Running
	I0717 00:48:21.348729   23443 system_pods.go:61] "kube-scheduler-ha-029113-m02" [0f986464-8d17-4727-906b-4d8c58afbe5d] Running
	I0717 00:48:21.348734   23443 system_pods.go:61] "kube-scheduler-ha-029113-m03" [8a322ad0-c9fa-4586-9051-5b18efa5a9c0] Running
	I0717 00:48:21.348741   23443 system_pods.go:61] "kube-vip-ha-029113" [985763eb-2a45-4820-a3db-e2af6d9291e0] Running
	I0717 00:48:21.348746   23443 system_pods.go:61] "kube-vip-ha-029113-m02" [0d64dace-cdb3-4abb-8d92-b205dc611777] Running
	I0717 00:48:21.348750   23443 system_pods.go:61] "kube-vip-ha-029113-m03" [ca077479-311a-4e1a-b143-55678a21f744] Running
	I0717 00:48:21.348757   23443 system_pods.go:61] "storage-provisioner" [b9f04e5d-469e-4432-bd31-dbe772194f84] Running
	I0717 00:48:21.348765   23443 system_pods.go:74] duration metric: took 187.193375ms to wait for pod list to return data ...
	I0717 00:48:21.348778   23443 default_sa.go:34] waiting for default service account to be created ...
	I0717 00:48:21.534176   23443 request.go:629] Waited for 185.334842ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/default/serviceaccounts
	I0717 00:48:21.534269   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/default/serviceaccounts
	I0717 00:48:21.534278   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:21.534285   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:21.534289   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:21.538916   23443 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:48:21.539036   23443 default_sa.go:45] found service account: "default"
	I0717 00:48:21.539052   23443 default_sa.go:55] duration metric: took 190.266774ms for default service account to be created ...
	I0717 00:48:21.539063   23443 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 00:48:21.734616   23443 request.go:629] Waited for 195.483278ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods
	I0717 00:48:21.734687   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods
	I0717 00:48:21.734695   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:21.734702   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:21.734707   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:21.743367   23443 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0717 00:48:21.749735   23443 system_pods.go:86] 24 kube-system pods found
	I0717 00:48:21.749760   23443 system_pods.go:89] "coredns-7db6d8ff4d-62m67" [5029f9dc-6792-44d9-9296-ec5ab6d72274] Running
	I0717 00:48:21.749767   23443 system_pods.go:89] "coredns-7db6d8ff4d-xdlls" [4344b971-b979-42f8-8fa8-01f2d64bb51a] Running
	I0717 00:48:21.749771   23443 system_pods.go:89] "etcd-ha-029113" [10122569-9dc1-4680-8d11-aa7d4c719cec] Running
	I0717 00:48:21.749777   23443 system_pods.go:89] "etcd-ha-029113-m02" [a0f65752-ddcf-493d-bc0b-e4cb2ac8d635] Running
	I0717 00:48:21.749782   23443 system_pods.go:89] "etcd-ha-029113-m03" [9afc47a1-ab83-4976-bd8b-d40aa6360f2d] Running
	I0717 00:48:21.749788   23443 system_pods.go:89] "kindnet-8xg7d" [a612c634-49ef-4357-9b36-f5cc6604bdd7] Running
	I0717 00:48:21.749794   23443 system_pods.go:89] "kindnet-k2jgh" [8a8e5ffe-9541-4736-9584-b49727b4753e] Running
	I0717 00:48:21.749800   23443 system_pods.go:89] "kindnet-k7vzq" [8198e4a4-080e-482a-a0b3-58e796bdd230] Running
	I0717 00:48:21.749809   23443 system_pods.go:89] "kube-apiserver-ha-029113" [167d337c-6406-4f80-8a60-aebdca26066b] Running
	I0717 00:48:21.749815   23443 system_pods.go:89] "kube-apiserver-ha-029113-m02" [d64aa0f0-e41f-4a5e-b4fe-48665061673e] Running
	I0717 00:48:21.749825   23443 system_pods.go:89] "kube-apiserver-ha-029113-m03" [0b4ea48e-60dc-44ed-8d5d-1159f866bc24] Running
	I0717 00:48:21.749830   23443 system_pods.go:89] "kube-controller-manager-ha-029113" [8f1ee225-f6a3-4943-976a-9cc14607a654] Running
	I0717 00:48:21.749835   23443 system_pods.go:89] "kube-controller-manager-ha-029113-m02" [d180826c-b18e-49a7-8a1a-576c1a64fd51] Running
	I0717 00:48:21.749841   23443 system_pods.go:89] "kube-controller-manager-ha-029113-m03" [993c477b-441b-46a1-85b8-c8ba74df2f80] Running
	I0717 00:48:21.749845   23443 system_pods.go:89] "kube-proxy-2wz5p" [285b947d-fa11-40fb-befa-1fa4451787d4] Running
	I0717 00:48:21.749852   23443 system_pods.go:89] "kube-proxy-hg2kp" [db9243f4-bcc0-406a-a8f2-ccdbc00f6341] Running
	I0717 00:48:21.749856   23443 system_pods.go:89] "kube-proxy-pfdt9" [d5f82192-14de-46c6-b3f4-38d34b9e828a] Running
	I0717 00:48:21.749861   23443 system_pods.go:89] "kube-scheduler-ha-029113" [e3b5629d-5647-437e-a87c-0c91f2cd26d7] Running
	I0717 00:48:21.749866   23443 system_pods.go:89] "kube-scheduler-ha-029113-m02" [0f986464-8d17-4727-906b-4d8c58afbe5d] Running
	I0717 00:48:21.749871   23443 system_pods.go:89] "kube-scheduler-ha-029113-m03" [8a322ad0-c9fa-4586-9051-5b18efa5a9c0] Running
	I0717 00:48:21.749876   23443 system_pods.go:89] "kube-vip-ha-029113" [985763eb-2a45-4820-a3db-e2af6d9291e0] Running
	I0717 00:48:21.749881   23443 system_pods.go:89] "kube-vip-ha-029113-m02" [0d64dace-cdb3-4abb-8d92-b205dc611777] Running
	I0717 00:48:21.749886   23443 system_pods.go:89] "kube-vip-ha-029113-m03" [ca077479-311a-4e1a-b143-55678a21f744] Running
	I0717 00:48:21.749894   23443 system_pods.go:89] "storage-provisioner" [b9f04e5d-469e-4432-bd31-dbe772194f84] Running
	I0717 00:48:21.749905   23443 system_pods.go:126] duration metric: took 210.833721ms to wait for k8s-apps to be running ...
	I0717 00:48:21.749918   23443 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 00:48:21.749962   23443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:48:21.764295   23443 system_svc.go:56] duration metric: took 14.372456ms WaitForService to wait for kubelet
	I0717 00:48:21.764316   23443 kubeadm.go:582] duration metric: took 24.086772769s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:48:21.764331   23443 node_conditions.go:102] verifying NodePressure condition ...
	I0717 00:48:21.934745   23443 request.go:629] Waited for 170.341169ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes
	I0717 00:48:21.934808   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes
	I0717 00:48:21.934815   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:21.934826   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:21.934834   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:21.938182   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:21.939223   23443 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 00:48:21.939242   23443 node_conditions.go:123] node cpu capacity is 2
	I0717 00:48:21.939252   23443 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 00:48:21.939256   23443 node_conditions.go:123] node cpu capacity is 2
	I0717 00:48:21.939262   23443 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 00:48:21.939265   23443 node_conditions.go:123] node cpu capacity is 2
	I0717 00:48:21.939269   23443 node_conditions.go:105] duration metric: took 174.93377ms to run NodePressure ...
	I0717 00:48:21.939279   23443 start.go:241] waiting for startup goroutines ...
	I0717 00:48:21.939298   23443 start.go:255] writing updated cluster config ...
	I0717 00:48:21.939565   23443 ssh_runner.go:195] Run: rm -f paused
	I0717 00:48:21.989260   23443 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 00:48:21.991460   23443 out.go:177] * Done! kubectl is now configured to use "ha-029113" cluster and "default" namespace by default
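	Note: the trace above ends with minikube polling the named system pods for readiness, checking the apiserver /healthz endpoint, listing the kube-system pods, confirming the default service account, and verifying the kubelet unit before declaring the cluster ready. For reference only, the minimal Go sketch below reproduces that final /healthz probe; it is not minikube's own code, and the endpoint address (taken from the log) and the decision to skip TLS verification are assumptions made purely to keep the example self-contained.

// Minimal sketch (not minikube's implementation) of the health probe the log
// above performs: poll https://<apiserver>/healthz until it returns 200 "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Assumed apiserver endpoint, matching the address seen in the log.
	const healthz = "https://192.168.39.95:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping certificate verification keeps the sketch self-contained;
		// a real client would load the cluster CA and present a bearer token.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(healthz)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver did not become healthy before the deadline")
}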
	
	
	==> CRI-O <==
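	Note: the CRI-O journal excerpt below records the runtime answering periodic CRI RPCs (Version, ImageFsInfo, ListContainers) over its gRPC socket. As a point of reference only, the sketch that follows issues the same Version and ListContainers calls with the k8s.io/cri-api client; the socket path /var/run/crio/crio.sock is an assumed default, and this is not the kubelet's or minikube's actual implementation.

// Minimal sketch of the CRI calls visible in the CRI-O debug log below.
// The socket path is an assumption for a default CRI-O install.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed CRI-O socket path; adjust to the host's actual configuration.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI-O socket: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// The same RPC the log records as /runtime.v1.RuntimeService/Version.
	ver, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatalf("Version: %v", err)
	}
	fmt.Printf("runtime: %s %s\n", ver.RuntimeName, ver.RuntimeVersion)

	// The same RPC the log records as /runtime.v1.RuntimeService/ListContainers;
	// no filter is set, so the full container list is returned.
	list, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range list.Containers {
		fmt.Printf("%s\t%s\n", c.Metadata.Name, c.State)
	}
}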
	Jul 17 00:52:05 ha-029113 crio[675]: time="2024-07-17 00:52:05.520516265Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721177525520492222,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76e9cdd7-f757-4cd1-b848-55b50480faf1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:52:05 ha-029113 crio[675]: time="2024-07-17 00:52:05.521617660Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa3254d7-94a7-41e0-affe-fdd85eb02edd name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:52:05 ha-029113 crio[675]: time="2024-07-17 00:52:05.521697297Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa3254d7-94a7-41e0-affe-fdd85eb02edd name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:52:05 ha-029113 crio[675]: time="2024-07-17 00:52:05.522082329Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf4870ffc6ba7cf7b50ae09cfb1b393746f7a5152c53089614e9b07b30aee219,PodSandboxId:a45c7f17109af295fd8afd8d3d7ac1b2d54517db5ca206ff393a5b9cc8c7cadb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721177307303856239,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pf5xn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c25795f2-3205-495b-83b1-e3afd79b87b5,},Annotations:map[string]string{io.kubernetes.container.hash: 78a43c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ba7b13f793c3a06d8fbfe335c9983449c36750149da746239b5d30f43f9e80d,PodSandboxId:50f310bc4d109b91ab1bdfc5a369ea5b936bcdfab1c3c3e494b33bc91202cdf9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721177077814345806,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f04e5d-469e-4432-bd31-dbe772194f84,},Annotations:map[string]string{io.kubernetes.container.hash: af7d5ead,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3b600dde6603420f9edd85561f798c02ecea709368a2a6b922a3e812198caa,PodSandboxId:f8a5889bb1d2bc6fc103eb2de48515a8de335a3970107dc7e37e0c22d7a122a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177077742709925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdlls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4344b971-b979-42f8-8fa8-01f2d64bb51a,},Annotations:map[string]string{io.kubernetes.container.hash: 92aed468,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:708012203a1a0c288ad05e4f57829807ab3a232d3085326c1ba8459d2a3964fc,PodSandboxId:9323719ef65477afea1f0946bd6a2c1e18bb115e22dd9402ce719955b37f0450,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177077777210066,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62m67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5029f9dc-67
92-44d9-9296-ec5ab6d72274,},Annotations:map[string]string{io.kubernetes.container.hash: b4292bc8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ce89e605287179bb1c9551273ee5355577c744e43f9e9eee59d6939e47680e,PodSandboxId:a30304f1d93beec21b943e8d0bdda82fd14ec8ef078fe74dc48282c421a8da13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CO
NTAINER_RUNNING,CreatedAt:1721177065689728330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8xg7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a612c634-49ef-4357-9b36-f5cc6604bdd7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf5f516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b3cbbc53732e70a6bb66be9909790395a4901c4730eea9ec8b9349fed80909,PodSandboxId:9fc93d7901e92fce9ee6a04a1927e3489dc770d57d9fd38dac899f3ab057cf7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:172117706
0624674491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg2kp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9243f4-bcc0-406a-a8f2-ccdbc00f6341,},Annotations:map[string]string{io.kubernetes.container.hash: bf5cb09d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b360fa2cf3fbbfbc9242efc6915192666c7dd6e4307f909d862b874fbaab69,PodSandboxId:bc3706b14039859f793eac4e8624e7818234de71ca60cf085454b03586bf9d2a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17211770460
87746328,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7182fcebcafa632e8046b9a13a66b9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535a2b743f28f9760f7341aeb36dd3506cf26bdec15966f0498cbd52b73c8d11,PodSandboxId:9dca109899a3fdfc61bdea7b81459635bde9c71670c2636ee3f9b16cec6a2bcb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721177041034207282,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b30623d3b177a2ad33eea05d973c32ae,},Annotations:map[string]string{io.kubernetes.container.hash: 237bdd65,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af1a2d97ac6f8ec07b35a9b5d767af997cb8270ce3b4e681cba5557ed7119c85,PodSandboxId:5eb5a4397caa30b268143518f9f2a1880ae38ebd81aa2e5c40e7883a1c9c49b3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721177040986161344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfaf2cf7acfa42b80c634ab40c7656b2,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:425a9fc13cce841865da7956f1f32a375623313826ea8da126557d78f754b28c,PodSandboxId:42a4c594e59973a4c2533efc093434e5573531710ce6a4ecdc3cc9ed647d8158,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721177041005671448,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7b6e57bdff8504832bd80612dfbf7e,},Annotations:map[string]string{io.kubernetes.container.hash: c985e752,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad50613626477643d3e0c2f0a01a20d0cc987aa6e58083bbf3993d41f97acd0,PodSandboxId:ab2a446417d15b6e99d85160ca9be5ee4aa3f76cd5af1ddc919812f3b8e304ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721177040980708332,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72ab88d686ab6cc9d0420aba5d01a15,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aa3254d7-94a7-41e0-affe-fdd85eb02edd name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:52:05 ha-029113 crio[675]: time="2024-07-17 00:52:05.568284917Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6e4490e4-32df-4da9-b339-0c8a804f402d name=/runtime.v1.RuntimeService/Version
	Jul 17 00:52:05 ha-029113 crio[675]: time="2024-07-17 00:52:05.568376739Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6e4490e4-32df-4da9-b339-0c8a804f402d name=/runtime.v1.RuntimeService/Version
	Jul 17 00:52:05 ha-029113 crio[675]: time="2024-07-17 00:52:05.569579497Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=936a857a-bbb2-4d18-86b5-9558517ba31c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:52:05 ha-029113 crio[675]: time="2024-07-17 00:52:05.570106104Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721177525570070831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=936a857a-bbb2-4d18-86b5-9558517ba31c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:52:05 ha-029113 crio[675]: time="2024-07-17 00:52:05.570931715Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8280756d-690f-42af-8e54-6d2f4432b584 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:52:05 ha-029113 crio[675]: time="2024-07-17 00:52:05.570997991Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8280756d-690f-42af-8e54-6d2f4432b584 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:52:05 ha-029113 crio[675]: time="2024-07-17 00:52:05.571364626Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf4870ffc6ba7cf7b50ae09cfb1b393746f7a5152c53089614e9b07b30aee219,PodSandboxId:a45c7f17109af295fd8afd8d3d7ac1b2d54517db5ca206ff393a5b9cc8c7cadb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721177307303856239,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pf5xn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c25795f2-3205-495b-83b1-e3afd79b87b5,},Annotations:map[string]string{io.kubernetes.container.hash: 78a43c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ba7b13f793c3a06d8fbfe335c9983449c36750149da746239b5d30f43f9e80d,PodSandboxId:50f310bc4d109b91ab1bdfc5a369ea5b936bcdfab1c3c3e494b33bc91202cdf9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721177077814345806,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f04e5d-469e-4432-bd31-dbe772194f84,},Annotations:map[string]string{io.kubernetes.container.hash: af7d5ead,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3b600dde6603420f9edd85561f798c02ecea709368a2a6b922a3e812198caa,PodSandboxId:f8a5889bb1d2bc6fc103eb2de48515a8de335a3970107dc7e37e0c22d7a122a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177077742709925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdlls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4344b971-b979-42f8-8fa8-01f2d64bb51a,},Annotations:map[string]string{io.kubernetes.container.hash: 92aed468,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:708012203a1a0c288ad05e4f57829807ab3a232d3085326c1ba8459d2a3964fc,PodSandboxId:9323719ef65477afea1f0946bd6a2c1e18bb115e22dd9402ce719955b37f0450,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177077777210066,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62m67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5029f9dc-67
92-44d9-9296-ec5ab6d72274,},Annotations:map[string]string{io.kubernetes.container.hash: b4292bc8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ce89e605287179bb1c9551273ee5355577c744e43f9e9eee59d6939e47680e,PodSandboxId:a30304f1d93beec21b943e8d0bdda82fd14ec8ef078fe74dc48282c421a8da13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CO
NTAINER_RUNNING,CreatedAt:1721177065689728330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8xg7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a612c634-49ef-4357-9b36-f5cc6604bdd7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf5f516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b3cbbc53732e70a6bb66be9909790395a4901c4730eea9ec8b9349fed80909,PodSandboxId:9fc93d7901e92fce9ee6a04a1927e3489dc770d57d9fd38dac899f3ab057cf7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:172117706
0624674491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg2kp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9243f4-bcc0-406a-a8f2-ccdbc00f6341,},Annotations:map[string]string{io.kubernetes.container.hash: bf5cb09d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b360fa2cf3fbbfbc9242efc6915192666c7dd6e4307f909d862b874fbaab69,PodSandboxId:bc3706b14039859f793eac4e8624e7818234de71ca60cf085454b03586bf9d2a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17211770460
87746328,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7182fcebcafa632e8046b9a13a66b9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535a2b743f28f9760f7341aeb36dd3506cf26bdec15966f0498cbd52b73c8d11,PodSandboxId:9dca109899a3fdfc61bdea7b81459635bde9c71670c2636ee3f9b16cec6a2bcb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721177041034207282,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b30623d3b177a2ad33eea05d973c32ae,},Annotations:map[string]string{io.kubernetes.container.hash: 237bdd65,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af1a2d97ac6f8ec07b35a9b5d767af997cb8270ce3b4e681cba5557ed7119c85,PodSandboxId:5eb5a4397caa30b268143518f9f2a1880ae38ebd81aa2e5c40e7883a1c9c49b3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721177040986161344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfaf2cf7acfa42b80c634ab40c7656b2,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:425a9fc13cce841865da7956f1f32a375623313826ea8da126557d78f754b28c,PodSandboxId:42a4c594e59973a4c2533efc093434e5573531710ce6a4ecdc3cc9ed647d8158,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721177041005671448,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7b6e57bdff8504832bd80612dfbf7e,},Annotations:map[string]string{io.kubernetes.container.hash: c985e752,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad50613626477643d3e0c2f0a01a20d0cc987aa6e58083bbf3993d41f97acd0,PodSandboxId:ab2a446417d15b6e99d85160ca9be5ee4aa3f76cd5af1ddc919812f3b8e304ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721177040980708332,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72ab88d686ab6cc9d0420aba5d01a15,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8280756d-690f-42af-8e54-6d2f4432b584 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:52:05 ha-029113 crio[675]: time="2024-07-17 00:52:05.607007396Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3ddd0d7c-a9b7-4df7-9aa1-36d19f4e2874 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:52:05 ha-029113 crio[675]: time="2024-07-17 00:52:05.607196897Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3ddd0d7c-a9b7-4df7-9aa1-36d19f4e2874 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:52:05 ha-029113 crio[675]: time="2024-07-17 00:52:05.608535673Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=520c5b9d-a0b9-4d9d-8935-3fd16e90c43d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:52:05 ha-029113 crio[675]: time="2024-07-17 00:52:05.609281741Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721177525609238574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=520c5b9d-a0b9-4d9d-8935-3fd16e90c43d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:52:05 ha-029113 crio[675]: time="2024-07-17 00:52:05.609774125Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6ab35b10-50fa-458e-90df-c0743d3324e9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:52:05 ha-029113 crio[675]: time="2024-07-17 00:52:05.609966770Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6ab35b10-50fa-458e-90df-c0743d3324e9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:52:05 ha-029113 crio[675]: time="2024-07-17 00:52:05.610280181Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf4870ffc6ba7cf7b50ae09cfb1b393746f7a5152c53089614e9b07b30aee219,PodSandboxId:a45c7f17109af295fd8afd8d3d7ac1b2d54517db5ca206ff393a5b9cc8c7cadb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721177307303856239,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pf5xn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c25795f2-3205-495b-83b1-e3afd79b87b5,},Annotations:map[string]string{io.kubernetes.container.hash: 78a43c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ba7b13f793c3a06d8fbfe335c9983449c36750149da746239b5d30f43f9e80d,PodSandboxId:50f310bc4d109b91ab1bdfc5a369ea5b936bcdfab1c3c3e494b33bc91202cdf9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721177077814345806,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f04e5d-469e-4432-bd31-dbe772194f84,},Annotations:map[string]string{io.kubernetes.container.hash: af7d5ead,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3b600dde6603420f9edd85561f798c02ecea709368a2a6b922a3e812198caa,PodSandboxId:f8a5889bb1d2bc6fc103eb2de48515a8de335a3970107dc7e37e0c22d7a122a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177077742709925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdlls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4344b971-b979-42f8-8fa8-01f2d64bb51a,},Annotations:map[string]string{io.kubernetes.container.hash: 92aed468,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:708012203a1a0c288ad05e4f57829807ab3a232d3085326c1ba8459d2a3964fc,PodSandboxId:9323719ef65477afea1f0946bd6a2c1e18bb115e22dd9402ce719955b37f0450,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177077777210066,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62m67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5029f9dc-67
92-44d9-9296-ec5ab6d72274,},Annotations:map[string]string{io.kubernetes.container.hash: b4292bc8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ce89e605287179bb1c9551273ee5355577c744e43f9e9eee59d6939e47680e,PodSandboxId:a30304f1d93beec21b943e8d0bdda82fd14ec8ef078fe74dc48282c421a8da13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CO
NTAINER_RUNNING,CreatedAt:1721177065689728330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8xg7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a612c634-49ef-4357-9b36-f5cc6604bdd7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf5f516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b3cbbc53732e70a6bb66be9909790395a4901c4730eea9ec8b9349fed80909,PodSandboxId:9fc93d7901e92fce9ee6a04a1927e3489dc770d57d9fd38dac899f3ab057cf7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:172117706
0624674491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg2kp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9243f4-bcc0-406a-a8f2-ccdbc00f6341,},Annotations:map[string]string{io.kubernetes.container.hash: bf5cb09d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b360fa2cf3fbbfbc9242efc6915192666c7dd6e4307f909d862b874fbaab69,PodSandboxId:bc3706b14039859f793eac4e8624e7818234de71ca60cf085454b03586bf9d2a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17211770460
87746328,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7182fcebcafa632e8046b9a13a66b9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535a2b743f28f9760f7341aeb36dd3506cf26bdec15966f0498cbd52b73c8d11,PodSandboxId:9dca109899a3fdfc61bdea7b81459635bde9c71670c2636ee3f9b16cec6a2bcb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721177041034207282,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b30623d3b177a2ad33eea05d973c32ae,},Annotations:map[string]string{io.kubernetes.container.hash: 237bdd65,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af1a2d97ac6f8ec07b35a9b5d767af997cb8270ce3b4e681cba5557ed7119c85,PodSandboxId:5eb5a4397caa30b268143518f9f2a1880ae38ebd81aa2e5c40e7883a1c9c49b3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721177040986161344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfaf2cf7acfa42b80c634ab40c7656b2,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:425a9fc13cce841865da7956f1f32a375623313826ea8da126557d78f754b28c,PodSandboxId:42a4c594e59973a4c2533efc093434e5573531710ce6a4ecdc3cc9ed647d8158,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721177041005671448,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7b6e57bdff8504832bd80612dfbf7e,},Annotations:map[string]string{io.kubernetes.container.hash: c985e752,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad50613626477643d3e0c2f0a01a20d0cc987aa6e58083bbf3993d41f97acd0,PodSandboxId:ab2a446417d15b6e99d85160ca9be5ee4aa3f76cd5af1ddc919812f3b8e304ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721177040980708332,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72ab88d686ab6cc9d0420aba5d01a15,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6ab35b10-50fa-458e-90df-c0743d3324e9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:52:05 ha-029113 crio[675]: time="2024-07-17 00:52:05.648579897Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e0743c49-a703-4982-ac5d-26f61d29680e name=/runtime.v1.RuntimeService/Version
	Jul 17 00:52:05 ha-029113 crio[675]: time="2024-07-17 00:52:05.648668373Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e0743c49-a703-4982-ac5d-26f61d29680e name=/runtime.v1.RuntimeService/Version
	Jul 17 00:52:05 ha-029113 crio[675]: time="2024-07-17 00:52:05.650946394Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a102f381-4203-4522-b52e-4f63a3bd4a06 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:52:05 ha-029113 crio[675]: time="2024-07-17 00:52:05.654622023Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721177525654596804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a102f381-4203-4522-b52e-4f63a3bd4a06 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:52:05 ha-029113 crio[675]: time="2024-07-17 00:52:05.656220711Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d4a608e9-1b93-4d75-bdfc-f276ba55e770 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:52:05 ha-029113 crio[675]: time="2024-07-17 00:52:05.656472360Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d4a608e9-1b93-4d75-bdfc-f276ba55e770 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:52:05 ha-029113 crio[675]: time="2024-07-17 00:52:05.657165115Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf4870ffc6ba7cf7b50ae09cfb1b393746f7a5152c53089614e9b07b30aee219,PodSandboxId:a45c7f17109af295fd8afd8d3d7ac1b2d54517db5ca206ff393a5b9cc8c7cadb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721177307303856239,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pf5xn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c25795f2-3205-495b-83b1-e3afd79b87b5,},Annotations:map[string]string{io.kubernetes.container.hash: 78a43c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ba7b13f793c3a06d8fbfe335c9983449c36750149da746239b5d30f43f9e80d,PodSandboxId:50f310bc4d109b91ab1bdfc5a369ea5b936bcdfab1c3c3e494b33bc91202cdf9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721177077814345806,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f04e5d-469e-4432-bd31-dbe772194f84,},Annotations:map[string]string{io.kubernetes.container.hash: af7d5ead,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3b600dde6603420f9edd85561f798c02ecea709368a2a6b922a3e812198caa,PodSandboxId:f8a5889bb1d2bc6fc103eb2de48515a8de335a3970107dc7e37e0c22d7a122a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177077742709925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdlls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4344b971-b979-42f8-8fa8-01f2d64bb51a,},Annotations:map[string]string{io.kubernetes.container.hash: 92aed468,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:708012203a1a0c288ad05e4f57829807ab3a232d3085326c1ba8459d2a3964fc,PodSandboxId:9323719ef65477afea1f0946bd6a2c1e18bb115e22dd9402ce719955b37f0450,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177077777210066,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62m67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5029f9dc-67
92-44d9-9296-ec5ab6d72274,},Annotations:map[string]string{io.kubernetes.container.hash: b4292bc8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ce89e605287179bb1c9551273ee5355577c744e43f9e9eee59d6939e47680e,PodSandboxId:a30304f1d93beec21b943e8d0bdda82fd14ec8ef078fe74dc48282c421a8da13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CO
NTAINER_RUNNING,CreatedAt:1721177065689728330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8xg7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a612c634-49ef-4357-9b36-f5cc6604bdd7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf5f516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b3cbbc53732e70a6bb66be9909790395a4901c4730eea9ec8b9349fed80909,PodSandboxId:9fc93d7901e92fce9ee6a04a1927e3489dc770d57d9fd38dac899f3ab057cf7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:172117706
0624674491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg2kp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9243f4-bcc0-406a-a8f2-ccdbc00f6341,},Annotations:map[string]string{io.kubernetes.container.hash: bf5cb09d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b360fa2cf3fbbfbc9242efc6915192666c7dd6e4307f909d862b874fbaab69,PodSandboxId:bc3706b14039859f793eac4e8624e7818234de71ca60cf085454b03586bf9d2a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17211770460
87746328,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7182fcebcafa632e8046b9a13a66b9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535a2b743f28f9760f7341aeb36dd3506cf26bdec15966f0498cbd52b73c8d11,PodSandboxId:9dca109899a3fdfc61bdea7b81459635bde9c71670c2636ee3f9b16cec6a2bcb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721177041034207282,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b30623d3b177a2ad33eea05d973c32ae,},Annotations:map[string]string{io.kubernetes.container.hash: 237bdd65,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af1a2d97ac6f8ec07b35a9b5d767af997cb8270ce3b4e681cba5557ed7119c85,PodSandboxId:5eb5a4397caa30b268143518f9f2a1880ae38ebd81aa2e5c40e7883a1c9c49b3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721177040986161344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfaf2cf7acfa42b80c634ab40c7656b2,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:425a9fc13cce841865da7956f1f32a375623313826ea8da126557d78f754b28c,PodSandboxId:42a4c594e59973a4c2533efc093434e5573531710ce6a4ecdc3cc9ed647d8158,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721177041005671448,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7b6e57bdff8504832bd80612dfbf7e,},Annotations:map[string]string{io.kubernetes.container.hash: c985e752,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad50613626477643d3e0c2f0a01a20d0cc987aa6e58083bbf3993d41f97acd0,PodSandboxId:ab2a446417d15b6e99d85160ca9be5ee4aa3f76cd5af1ddc919812f3b8e304ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721177040980708332,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72ab88d686ab6cc9d0420aba5d01a15,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d4a608e9-1b93-4d75-bdfc-f276ba55e770 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cf4870ffc6ba7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   a45c7f17109af       busybox-fc5497c4f-pf5xn
	4ba7b13f793c3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   50f310bc4d109       storage-provisioner
	708012203a1a0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   9323719ef6547       coredns-7db6d8ff4d-62m67
	0f3b600dde660       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   f8a5889bb1d2b       coredns-7db6d8ff4d-xdlls
	14ce89e605287       docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381    7 minutes ago       Running             kindnet-cni               0                   a30304f1d93be       kindnet-8xg7d
	21b3cbbc53732       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      7 minutes ago       Running             kube-proxy                0                   9fc93d7901e92       kube-proxy-hg2kp
	c8b360fa2cf3f       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   bc3706b140398       kube-vip-ha-029113
	535a2b743f28f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago       Running             etcd                      0                   9dca109899a3f       etcd-ha-029113
	425a9fc13cce8       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      8 minutes ago       Running             kube-apiserver            0                   42a4c594e5997       kube-apiserver-ha-029113
	af1a2d97ac6f8       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      8 minutes ago       Running             kube-scheduler            0                   5eb5a4397caa3       kube-scheduler-ha-029113
	8ad5061362647       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      8 minutes ago       Running             kube-controller-manager   0                   ab2a446417d15       kube-controller-manager-ha-029113
	
	
	==> coredns [0f3b600dde6603420f9edd85561f798c02ecea709368a2a6b922a3e812198caa] <==
	[INFO] 10.244.0.4:51874 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000567681s
	[INFO] 10.244.2.2:49111 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126235s
	[INFO] 10.244.2.2:36462 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000086647s
	[INFO] 10.244.2.2:55125 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000067538s
	[INFO] 10.244.1.2:39895 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159278s
	[INFO] 10.244.1.2:60685 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000209662s
	[INFO] 10.244.1.2:59157 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00911229s
	[INFO] 10.244.0.4:33726 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164542s
	[INFO] 10.244.0.4:35638 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107633s
	[INFO] 10.244.0.4:36083 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148234s
	[INFO] 10.244.0.4:49455 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000157722s
	[INFO] 10.244.2.2:43892 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122973s
	[INFO] 10.244.2.2:45729 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013946s
	[INFO] 10.244.0.4:55198 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100375s
	[INFO] 10.244.0.4:59468 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106412s
	[INFO] 10.244.0.4:37401 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124966s
	[INFO] 10.244.0.4:60799 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012109s
	[INFO] 10.244.2.2:34189 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127044s
	[INFO] 10.244.2.2:42164 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116232s
	[INFO] 10.244.2.2:45045 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090238s
	[INFO] 10.244.1.2:51035 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200282s
	[INFO] 10.244.1.2:55956 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000190607s
	[INFO] 10.244.1.2:54538 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000177763s
	[INFO] 10.244.0.4:33888 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013825s
	[INFO] 10.244.2.2:47245 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000251032s
	
	
	==> coredns [708012203a1a0c288ad05e4f57829807ab3a232d3085326c1ba8459d2a3964fc] <==
	[INFO] 10.244.1.2:35563 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000247156s
	[INFO] 10.244.1.2:58955 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000217672s
	[INFO] 10.244.1.2:58564 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129741s
	[INFO] 10.244.0.4:42072 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00197568s
	[INFO] 10.244.0.4:42572 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001184183s
	[INFO] 10.244.0.4:59867 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000056093s
	[INFO] 10.244.0.4:34082 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00003321s
	[INFO] 10.244.2.2:43902 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001968463s
	[INFO] 10.244.2.2:54035 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000207692s
	[INFO] 10.244.2.2:33997 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001225386s
	[INFO] 10.244.2.2:45029 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109563s
	[INFO] 10.244.2.2:39017 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092433s
	[INFO] 10.244.2.2:54230 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169232s
	[INFO] 10.244.1.2:47885 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195059s
	[INFO] 10.244.1.2:52609 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101063s
	[INFO] 10.244.1.2:45870 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090685s
	[INFO] 10.244.1.2:54516 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000081368s
	[INFO] 10.244.2.2:33988 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080469s
	[INFO] 10.244.1.2:34772 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000287318s
	[INFO] 10.244.0.4:35803 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000085391s
	[INFO] 10.244.0.4:50190 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000162301s
	[INFO] 10.244.0.4:40910 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000130903s
	[INFO] 10.244.2.2:33875 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129913s
	[INFO] 10.244.2.2:51223 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000090521s
	[INFO] 10.244.2.2:58679 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000073592s
	
	
	==> describe nodes <==
	Name:               ha-029113
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-029113
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=ha-029113
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T00_44_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:44:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-029113
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:51:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:48:43 +0000   Wed, 17 Jul 2024 00:44:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:48:43 +0000   Wed, 17 Jul 2024 00:44:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:48:43 +0000   Wed, 17 Jul 2024 00:44:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:48:43 +0000   Wed, 17 Jul 2024 00:44:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.95
	  Hostname:    ha-029113
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a51546f0529f4ddaa3a150daaabbe791
	  System UUID:                a51546f0-529f-4dda-a3a1-50daaabbe791
	  Boot ID:                    644e2f47-3b52-421d-bf4d-394d43757773
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pf5xn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 coredns-7db6d8ff4d-62m67             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m45s
	  kube-system                 coredns-7db6d8ff4d-xdlls             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m45s
	  kube-system                 etcd-ha-029113                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m58s
	  kube-system                 kindnet-8xg7d                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m45s
	  kube-system                 kube-apiserver-ha-029113             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m58s
	  kube-system                 kube-controller-manager-ha-029113    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m58s
	  kube-system                 kube-proxy-hg2kp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 kube-scheduler-ha-029113             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m58s
	  kube-system                 kube-vip-ha-029113                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m44s  kube-proxy       
	  Normal  Starting                 7m58s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m58s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m58s  kubelet          Node ha-029113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m58s  kubelet          Node ha-029113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m58s  kubelet          Node ha-029113 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m46s  node-controller  Node ha-029113 event: Registered Node ha-029113 in Controller
	  Normal  NodeReady                7m28s  kubelet          Node ha-029113 status is now: NodeReady
	  Normal  RegisteredNode           5m12s  node-controller  Node ha-029113 event: Registered Node ha-029113 in Controller
	  Normal  RegisteredNode           3m54s  node-controller  Node ha-029113 event: Registered Node ha-029113 in Controller
	
	
	Name:               ha-029113-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-029113-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=ha-029113
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T00_46_39_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:46:37 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-029113-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:49:42 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Jul 2024 00:48:40 +0000   Wed, 17 Jul 2024 00:50:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Jul 2024 00:48:40 +0000   Wed, 17 Jul 2024 00:50:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Jul 2024 00:48:40 +0000   Wed, 17 Jul 2024 00:50:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Jul 2024 00:48:40 +0000   Wed, 17 Jul 2024 00:50:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.166
	  Hostname:    ha-029113-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 caba57241163431db23fb698d4481f00
	  System UUID:                caba5724-1163-431d-b23f-b698d4481f00
	  Boot ID:                    1849ca60-159d-4fda-b3e8-c6287316fa16
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-l4ctd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 etcd-ha-029113-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m27s
	  kube-system                 kindnet-k7vzq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m29s
	  kube-system                 kube-apiserver-ha-029113-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-controller-manager-ha-029113-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-proxy-2wz5p                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-scheduler-ha-029113-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-vip-ha-029113-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m25s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m29s (x8 over 5m29s)  kubelet          Node ha-029113-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m29s (x8 over 5m29s)  kubelet          Node ha-029113-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m29s (x7 over 5m29s)  kubelet          Node ha-029113-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m27s                  node-controller  Node ha-029113-m02 event: Registered Node ha-029113-m02 in Controller
	  Normal  RegisteredNode           5m13s                  node-controller  Node ha-029113-m02 event: Registered Node ha-029113-m02 in Controller
	  Normal  RegisteredNode           3m55s                  node-controller  Node ha-029113-m02 event: Registered Node ha-029113-m02 in Controller
	  Normal  NodeNotReady             104s                   node-controller  Node ha-029113-m02 status is now: NodeNotReady
	
	
	Name:               ha-029113-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-029113-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=ha-029113
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T00_47_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:47:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-029113-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:51:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:48:55 +0000   Wed, 17 Jul 2024 00:47:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:48:55 +0000   Wed, 17 Jul 2024 00:47:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:48:55 +0000   Wed, 17 Jul 2024 00:47:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:48:55 +0000   Wed, 17 Jul 2024 00:48:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    ha-029113-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2e1b2e5e3744938b38fb857e0123a96
	  System UUID:                d2e1b2e5-e374-4938-b38f-b857e0123a96
	  Boot ID:                    1470bc32-c0a6-4d87-8e4e-b7ae7580ad8b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-w8w7k                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 etcd-ha-029113-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m10s
	  kube-system                 kindnet-k2jgh                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m12s
	  kube-system                 kube-apiserver-ha-029113-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-controller-manager-ha-029113-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-proxy-pfdt9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-scheduler-ha-029113-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-vip-ha-029113-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m8s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  4m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m12s (x8 over 4m13s)  kubelet          Node ha-029113-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m12s (x8 over 4m13s)  kubelet          Node ha-029113-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m12s (x7 over 4m13s)  kubelet          Node ha-029113-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-029113-m03 event: Registered Node ha-029113-m03 in Controller
	  Normal  RegisteredNode           4m8s                   node-controller  Node ha-029113-m03 event: Registered Node ha-029113-m03 in Controller
	  Normal  RegisteredNode           3m55s                  node-controller  Node ha-029113-m03 event: Registered Node ha-029113-m03 in Controller
	
	
	Name:               ha-029113-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-029113-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=ha-029113
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T00_49_04_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:49:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-029113-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:51:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:49:34 +0000   Wed, 17 Jul 2024 00:49:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:49:34 +0000   Wed, 17 Jul 2024 00:49:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:49:34 +0000   Wed, 17 Jul 2024 00:49:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:49:34 +0000   Wed, 17 Jul 2024 00:49:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.48
	  Hostname:    ha-029113-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6434efc175e64e719bbbb464b6a52834
	  System UUID:                6434efc1-75e6-4e71-9bbb-b464b6a52834
	  Boot ID:                    4d92357f-baa3-4da1-81cb-b140aac67591
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8d2dk       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m2s
	  kube-system                 kube-proxy-m559l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m2s (x2 over 3m2s)  kubelet          Node ha-029113-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s (x2 over 3m2s)  kubelet          Node ha-029113-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s (x2 over 3m2s)  kubelet          Node ha-029113-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-029113-m04 event: Registered Node ha-029113-m04 in Controller
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-029113-m04 event: Registered Node ha-029113-m04 in Controller
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-029113-m04 event: Registered Node ha-029113-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-029113-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul17 00:43] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049715] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039154] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.503743] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.101957] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.559408] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.079107] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.061657] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066531] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.165658] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.134006] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.290239] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.155347] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +4.001195] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +0.055206] kauditd_printk_skb: 158 callbacks suppressed
	[Jul17 00:44] kauditd_printk_skb: 74 callbacks suppressed
	[  +1.000074] systemd-fstab-generator[1348]: Ignoring "noauto" option for root device
	[  +6.568819] kauditd_printk_skb: 23 callbacks suppressed
	[ +12.108545] kauditd_printk_skb: 29 callbacks suppressed
	[Jul17 00:46] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [535a2b743f28f9760f7341aeb36dd3506cf26bdec15966f0498cbd52b73c8d11] <==
	{"level":"warn","ts":"2024-07-17T00:52:05.921465Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:52:05.93133Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:52:05.940649Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:52:05.946043Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:52:05.957688Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:52:05.970257Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:52:05.991153Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:52:06.003096Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:52:06.013112Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:52:06.020488Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:52:06.029061Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:52:06.038945Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:52:06.050034Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:52:06.054213Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:52:06.057892Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:52:06.066417Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:52:06.07158Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:52:06.076489Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:52:06.080688Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:52:06.084055Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:52:06.089881Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:52:06.097983Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:52:06.105944Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:52:06.121513Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:52:06.159359Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:52:06 up 8 min,  0 users,  load average: 0.03, 0.22, 0.14
	Linux ha-029113 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [14ce89e605287179bb1c9551273ee5355577c744e43f9e9eee59d6939e47680e] <==
	I0717 00:51:26.817287       1 main.go:326] Node ha-029113-m04 has CIDR [10.244.3.0/24] 
	I0717 00:51:36.824963       1 main.go:299] Handling node with IPs: map[192.168.39.95:{}]
	I0717 00:51:36.825059       1 main.go:303] handling current node
	I0717 00:51:36.825085       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0717 00:51:36.825103       1 main.go:326] Node ha-029113-m02 has CIDR [10.244.1.0/24] 
	I0717 00:51:36.825276       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 00:51:36.825298       1 main.go:326] Node ha-029113-m03 has CIDR [10.244.2.0/24] 
	I0717 00:51:36.825377       1 main.go:299] Handling node with IPs: map[192.168.39.48:{}]
	I0717 00:51:36.825396       1 main.go:326] Node ha-029113-m04 has CIDR [10.244.3.0/24] 
	I0717 00:51:46.823422       1 main.go:299] Handling node with IPs: map[192.168.39.95:{}]
	I0717 00:51:46.823504       1 main.go:303] handling current node
	I0717 00:51:46.823531       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0717 00:51:46.823548       1 main.go:326] Node ha-029113-m02 has CIDR [10.244.1.0/24] 
	I0717 00:51:46.823706       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 00:51:46.823746       1 main.go:326] Node ha-029113-m03 has CIDR [10.244.2.0/24] 
	I0717 00:51:46.823901       1 main.go:299] Handling node with IPs: map[192.168.39.48:{}]
	I0717 00:51:46.823937       1 main.go:326] Node ha-029113-m04 has CIDR [10.244.3.0/24] 
	I0717 00:51:56.818668       1 main.go:299] Handling node with IPs: map[192.168.39.95:{}]
	I0717 00:51:56.818777       1 main.go:303] handling current node
	I0717 00:51:56.818887       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0717 00:51:56.818913       1 main.go:326] Node ha-029113-m02 has CIDR [10.244.1.0/24] 
	I0717 00:51:56.819103       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 00:51:56.819127       1 main.go:326] Node ha-029113-m03 has CIDR [10.244.2.0/24] 
	I0717 00:51:56.819215       1 main.go:299] Handling node with IPs: map[192.168.39.48:{}]
	I0717 00:51:56.819236       1 main.go:326] Node ha-029113-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [425a9fc13cce841865da7956f1f32a375623313826ea8da126557d78f754b28c] <==
	I0717 00:44:05.783851       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0717 00:44:05.823239       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.95]
	I0717 00:44:05.825457       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 00:44:05.880453       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 00:44:05.889683       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 00:44:07.296728       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 00:44:07.315459       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0717 00:44:07.478497       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 00:44:19.981153       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0717 00:44:20.058042       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0717 00:48:28.655552       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58912: use of closed network connection
	E0717 00:48:28.836739       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58918: use of closed network connection
	E0717 00:48:29.024613       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58936: use of closed network connection
	E0717 00:48:29.231161       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58954: use of closed network connection
	E0717 00:48:29.417082       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58962: use of closed network connection
	E0717 00:48:29.597423       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58968: use of closed network connection
	E0717 00:48:29.770061       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53198: use of closed network connection
	E0717 00:48:29.966015       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53228: use of closed network connection
	E0717 00:48:30.140410       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53254: use of closed network connection
	E0717 00:48:30.431530       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53278: use of closed network connection
	E0717 00:48:30.598063       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53296: use of closed network connection
	E0717 00:48:30.787058       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53312: use of closed network connection
	E0717 00:48:30.954768       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53324: use of closed network connection
	E0717 00:48:31.133598       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53342: use of closed network connection
	E0717 00:48:31.324604       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53360: use of closed network connection
	
	
	==> kube-controller-manager [8ad50613626477643d3e0c2f0a01a20d0cc987aa6e58083bbf3993d41f97acd0] <==
	I0717 00:47:54.021564       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-029113-m03\" does not exist"
	I0717 00:47:54.040384       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-029113-m03" podCIDRs=["10.244.2.0/24"]
	I0717 00:47:55.009721       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-029113-m03"
	I0717 00:48:22.927156       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="110.365216ms"
	I0717 00:48:23.083235       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="156.007355ms"
	I0717 00:48:23.319394       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="235.658965ms"
	I0717 00:48:23.381480       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.96215ms"
	I0717 00:48:23.381728       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.361µs"
	I0717 00:48:23.985438       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="111.076µs"
	I0717 00:48:27.196292       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.183µs"
	I0717 00:48:27.498334       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.968656ms"
	I0717 00:48:27.498416       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.311µs"
	I0717 00:48:27.690412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.025701ms"
	I0717 00:48:27.690515       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.229µs"
	I0717 00:48:28.159492       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.357821ms"
	I0717 00:48:28.159609       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.001µs"
	E0717 00:49:04.181227       1 certificate_controller.go:146] Sync csr-ts9s6 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-ts9s6": the object has been modified; please apply your changes to the latest version and try again
	E0717 00:49:04.200175       1 certificate_controller.go:146] Sync csr-ts9s6 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-ts9s6": the object has been modified; please apply your changes to the latest version and try again
	I0717 00:49:04.289902       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-029113-m04\" does not exist"
	I0717 00:49:04.332777       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-029113-m04" podCIDRs=["10.244.3.0/24"]
	I0717 00:49:05.020575       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-029113-m04"
	I0717 00:49:25.719496       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-029113-m04"
	I0717 00:50:22.096635       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-029113-m04"
	I0717 00:50:22.216154       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.144368ms"
	I0717 00:50:22.216282       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.008µs"
	
	
	==> kube-proxy [21b3cbbc53732e70a6bb66be9909790395a4901c4730eea9ec8b9349fed80909] <==
	I0717 00:44:20.962872       1 server_linux.go:69] "Using iptables proxy"
	I0717 00:44:21.008068       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.95"]
	I0717 00:44:21.079783       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 00:44:21.079872       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 00:44:21.079899       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:44:21.090869       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:44:21.091523       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:44:21.091559       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:44:21.102362       1 config.go:192] "Starting service config controller"
	I0717 00:44:21.102889       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:44:21.104936       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:44:21.104947       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:44:21.118916       1 config.go:319] "Starting node config controller"
	I0717 00:44:21.118947       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 00:44:21.204843       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:44:21.205001       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 00:44:21.218983       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [af1a2d97ac6f8ec07b35a9b5d767af997cb8270ce3b4e681cba5557ed7119c85] <==
	W0717 00:44:05.319128       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 00:44:05.319155       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 00:44:05.327975       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 00:44:05.328915       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 00:44:05.337181       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 00:44:05.337203       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0717 00:44:06.724112       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0717 00:48:22.923258       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-w8w7k\": pod busybox-fc5497c4f-w8w7k is already assigned to node \"ha-029113-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-w8w7k" node="ha-029113-m03"
	E0717 00:48:22.923515       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 7490d1f3-1a14-41f1-a79b-451dd21902f7(default/busybox-fc5497c4f-w8w7k) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-w8w7k"
	E0717 00:48:22.923620       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-w8w7k\": pod busybox-fc5497c4f-w8w7k is already assigned to node \"ha-029113-m03\"" pod="default/busybox-fc5497c4f-w8w7k"
	I0717 00:48:22.923687       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-w8w7k" node="ha-029113-m03"
	E0717 00:48:22.931036       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-pf5xn\": pod busybox-fc5497c4f-pf5xn is already assigned to node \"ha-029113\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-pf5xn" node="ha-029113"
	E0717 00:48:22.931118       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod c25795f2-3205-495b-83b1-e3afd79b87b5(default/busybox-fc5497c4f-pf5xn) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-pf5xn"
	E0717 00:48:22.931139       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-pf5xn\": pod busybox-fc5497c4f-pf5xn is already assigned to node \"ha-029113\"" pod="default/busybox-fc5497c4f-pf5xn"
	I0717 00:48:22.931160       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-pf5xn" node="ha-029113"
	E0717 00:49:04.360621       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mxgns\": pod kindnet-mxgns is already assigned to node \"ha-029113-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-mxgns" node="ha-029113-m04"
	E0717 00:49:04.361827       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mxgns\": pod kindnet-mxgns is already assigned to node \"ha-029113-m04\"" pod="kube-system/kindnet-mxgns"
	E0717 00:49:04.377510       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-m559l\": pod kube-proxy-m559l is already assigned to node \"ha-029113-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-m559l" node="ha-029113-m04"
	E0717 00:49:04.378263       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 4bfab6d9-01f3-4918-9ea6-0dcd75f65a06(kube-system/kube-proxy-m559l) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-m559l"
	E0717 00:49:04.378516       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-m559l\": pod kube-proxy-m559l is already assigned to node \"ha-029113-m04\"" pod="kube-system/kube-proxy-m559l"
	I0717 00:49:04.378675       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-m559l" node="ha-029113-m04"
	E0717 00:49:04.417728       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rlrzj\": pod kindnet-rlrzj is already assigned to node \"ha-029113-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-rlrzj" node="ha-029113-m04"
	E0717 00:49:04.417898       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 015fb38f-0f76-4843-81fc-1eaa7fcd0c79(kube-system/kindnet-rlrzj) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-rlrzj"
	E0717 00:49:04.417990       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rlrzj\": pod kindnet-rlrzj is already assigned to node \"ha-029113-m04\"" pod="kube-system/kindnet-rlrzj"
	I0717 00:49:04.418024       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rlrzj" node="ha-029113-m04"
	
	
	==> kubelet <==
	Jul 17 00:47:07 ha-029113 kubelet[1354]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:47:07 ha-029113 kubelet[1354]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:47:07 ha-029113 kubelet[1354]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:48:07 ha-029113 kubelet[1354]: E0717 00:48:07.513034    1354 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:48:07 ha-029113 kubelet[1354]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:48:07 ha-029113 kubelet[1354]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:48:07 ha-029113 kubelet[1354]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:48:07 ha-029113 kubelet[1354]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:48:22 ha-029113 kubelet[1354]: I0717 00:48:22.888423    1354 topology_manager.go:215] "Topology Admit Handler" podUID="c25795f2-3205-495b-83b1-e3afd79b87b5" podNamespace="default" podName="busybox-fc5497c4f-pf5xn"
	Jul 17 00:48:23 ha-029113 kubelet[1354]: I0717 00:48:23.024683    1354 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqlnr\" (UniqueName: \"kubernetes.io/projected/c25795f2-3205-495b-83b1-e3afd79b87b5-kube-api-access-pqlnr\") pod \"busybox-fc5497c4f-pf5xn\" (UID: \"c25795f2-3205-495b-83b1-e3afd79b87b5\") " pod="default/busybox-fc5497c4f-pf5xn"
	Jul 17 00:49:07 ha-029113 kubelet[1354]: E0717 00:49:07.511734    1354 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:49:07 ha-029113 kubelet[1354]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:49:07 ha-029113 kubelet[1354]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:49:07 ha-029113 kubelet[1354]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:49:07 ha-029113 kubelet[1354]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:50:07 ha-029113 kubelet[1354]: E0717 00:50:07.512125    1354 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:50:07 ha-029113 kubelet[1354]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:50:07 ha-029113 kubelet[1354]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:50:07 ha-029113 kubelet[1354]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:50:07 ha-029113 kubelet[1354]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:51:07 ha-029113 kubelet[1354]: E0717 00:51:07.511979    1354 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:51:07 ha-029113 kubelet[1354]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:51:07 ha-029113 kubelet[1354]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:51:07 ha-029113 kubelet[1354]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:51:07 ha-029113 kubelet[1354]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-029113 -n ha-029113
helpers_test.go:261: (dbg) Run:  kubectl --context ha-029113 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.66s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (52.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-029113 status -v=7 --alsologtostderr: exit status 3 (3.209609455s)

                                                
                                                
-- stdout --
	ha-029113
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-029113-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-029113-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-029113-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 00:52:10.583319   28599 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:52:10.583429   28599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:52:10.583439   28599 out.go:304] Setting ErrFile to fd 2...
	I0717 00:52:10.583445   28599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:52:10.583618   28599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 00:52:10.583784   28599 out.go:298] Setting JSON to false
	I0717 00:52:10.583817   28599 mustload.go:65] Loading cluster: ha-029113
	I0717 00:52:10.583853   28599 notify.go:220] Checking for updates...
	I0717 00:52:10.584190   28599 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:52:10.584205   28599 status.go:255] checking status of ha-029113 ...
	I0717 00:52:10.584572   28599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:10.584686   28599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:10.603896   28599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37999
	I0717 00:52:10.604249   28599 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:10.604844   28599 main.go:141] libmachine: Using API Version  1
	I0717 00:52:10.604884   28599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:10.605267   28599 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:10.605443   28599 main.go:141] libmachine: (ha-029113) Calling .GetState
	I0717 00:52:10.607007   28599 status.go:330] ha-029113 host status = "Running" (err=<nil>)
	I0717 00:52:10.607023   28599 host.go:66] Checking if "ha-029113" exists ...
	I0717 00:52:10.607362   28599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:10.607395   28599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:10.621420   28599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36853
	I0717 00:52:10.621830   28599 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:10.622273   28599 main.go:141] libmachine: Using API Version  1
	I0717 00:52:10.622296   28599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:10.622678   28599 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:10.622856   28599 main.go:141] libmachine: (ha-029113) Calling .GetIP
	I0717 00:52:10.625295   28599 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:52:10.625652   28599 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:52:10.625679   28599 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:52:10.625820   28599 host.go:66] Checking if "ha-029113" exists ...
	I0717 00:52:10.626114   28599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:10.626165   28599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:10.640339   28599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40431
	I0717 00:52:10.640683   28599 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:10.641067   28599 main.go:141] libmachine: Using API Version  1
	I0717 00:52:10.641084   28599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:10.641386   28599 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:10.641549   28599 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:52:10.641753   28599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:52:10.641778   28599 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:52:10.644182   28599 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:52:10.644543   28599 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:52:10.644575   28599 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:52:10.644634   28599 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:52:10.644782   28599 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:52:10.644917   28599 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:52:10.645053   28599 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:52:10.722264   28599 ssh_runner.go:195] Run: systemctl --version
	I0717 00:52:10.728558   28599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:52:10.751000   28599 kubeconfig.go:125] found "ha-029113" server: "https://192.168.39.254:8443"
	I0717 00:52:10.751026   28599 api_server.go:166] Checking apiserver status ...
	I0717 00:52:10.751058   28599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:52:10.764677   28599 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup
	W0717 00:52:10.777859   28599 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:52:10.777917   28599 ssh_runner.go:195] Run: ls
	I0717 00:52:10.782808   28599 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:52:10.787144   28599 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:52:10.787165   28599 status.go:422] ha-029113 apiserver status = Running (err=<nil>)
	I0717 00:52:10.787174   28599 status.go:257] ha-029113 status: &{Name:ha-029113 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:52:10.787191   28599 status.go:255] checking status of ha-029113-m02 ...
	I0717 00:52:10.787501   28599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:10.787537   28599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:10.802718   28599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43151
	I0717 00:52:10.803122   28599 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:10.803572   28599 main.go:141] libmachine: Using API Version  1
	I0717 00:52:10.803593   28599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:10.803888   28599 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:10.804066   28599 main.go:141] libmachine: (ha-029113-m02) Calling .GetState
	I0717 00:52:10.805571   28599 status.go:330] ha-029113-m02 host status = "Running" (err=<nil>)
	I0717 00:52:10.805584   28599 host.go:66] Checking if "ha-029113-m02" exists ...
	I0717 00:52:10.805918   28599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:10.805954   28599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:10.820209   28599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34215
	I0717 00:52:10.820556   28599 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:10.821003   28599 main.go:141] libmachine: Using API Version  1
	I0717 00:52:10.821026   28599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:10.821312   28599 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:10.821486   28599 main.go:141] libmachine: (ha-029113-m02) Calling .GetIP
	I0717 00:52:10.824196   28599 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:52:10.824580   28599 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:52:10.824608   28599 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:52:10.824707   28599 host.go:66] Checking if "ha-029113-m02" exists ...
	I0717 00:52:10.824984   28599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:10.825017   28599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:10.839580   28599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34021
	I0717 00:52:10.839997   28599 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:10.840415   28599 main.go:141] libmachine: Using API Version  1
	I0717 00:52:10.840436   28599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:10.840741   28599 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:10.840923   28599 main.go:141] libmachine: (ha-029113-m02) Calling .DriverName
	I0717 00:52:10.841087   28599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:52:10.841105   28599 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	I0717 00:52:10.843696   28599 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:52:10.844093   28599 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:52:10.844118   28599 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:52:10.844194   28599 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHPort
	I0717 00:52:10.844365   28599 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:52:10.844503   28599 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHUsername
	I0717 00:52:10.844619   28599 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02/id_rsa Username:docker}
	W0717 00:52:13.406861   28599 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.166:22: connect: no route to host
	W0717 00:52:13.406950   28599 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.166:22: connect: no route to host
	E0717 00:52:13.406969   28599 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.166:22: connect: no route to host
	I0717 00:52:13.406978   28599 status.go:257] ha-029113-m02 status: &{Name:ha-029113-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0717 00:52:13.407009   28599 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.166:22: connect: no route to host
	I0717 00:52:13.407019   28599 status.go:255] checking status of ha-029113-m03 ...
	I0717 00:52:13.407361   28599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:13.407419   28599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:13.421661   28599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36471
	I0717 00:52:13.422139   28599 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:13.422666   28599 main.go:141] libmachine: Using API Version  1
	I0717 00:52:13.422691   28599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:13.422983   28599 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:13.423153   28599 main.go:141] libmachine: (ha-029113-m03) Calling .GetState
	I0717 00:52:13.424646   28599 status.go:330] ha-029113-m03 host status = "Running" (err=<nil>)
	I0717 00:52:13.424662   28599 host.go:66] Checking if "ha-029113-m03" exists ...
	I0717 00:52:13.424953   28599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:13.424991   28599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:13.439554   28599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42075
	I0717 00:52:13.439948   28599 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:13.440373   28599 main.go:141] libmachine: Using API Version  1
	I0717 00:52:13.440397   28599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:13.440702   28599 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:13.440885   28599 main.go:141] libmachine: (ha-029113-m03) Calling .GetIP
	I0717 00:52:13.443427   28599 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:52:13.443754   28599 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:52:13.443778   28599 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:52:13.443879   28599 host.go:66] Checking if "ha-029113-m03" exists ...
	I0717 00:52:13.444166   28599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:13.444198   28599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:13.458529   28599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32863
	I0717 00:52:13.458907   28599 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:13.459325   28599 main.go:141] libmachine: Using API Version  1
	I0717 00:52:13.459346   28599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:13.459616   28599 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:13.459799   28599 main.go:141] libmachine: (ha-029113-m03) Calling .DriverName
	I0717 00:52:13.459987   28599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:52:13.460010   28599 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	I0717 00:52:13.462353   28599 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:52:13.462842   28599 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:52:13.462867   28599 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:52:13.462954   28599 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHPort
	I0717 00:52:13.463106   28599 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:52:13.463230   28599 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHUsername
	I0717 00:52:13.463347   28599 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/id_rsa Username:docker}
	I0717 00:52:13.546759   28599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:52:13.561295   28599 kubeconfig.go:125] found "ha-029113" server: "https://192.168.39.254:8443"
	I0717 00:52:13.561319   28599 api_server.go:166] Checking apiserver status ...
	I0717 00:52:13.561358   28599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:52:13.575229   28599 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1575/cgroup
	W0717 00:52:13.585765   28599 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1575/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:52:13.585814   28599 ssh_runner.go:195] Run: ls
	I0717 00:52:13.590456   28599 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:52:13.596297   28599 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:52:13.596320   28599 status.go:422] ha-029113-m03 apiserver status = Running (err=<nil>)
	I0717 00:52:13.596331   28599 status.go:257] ha-029113-m03 status: &{Name:ha-029113-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:52:13.596349   28599 status.go:255] checking status of ha-029113-m04 ...
	I0717 00:52:13.596633   28599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:13.596681   28599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:13.611948   28599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34609
	I0717 00:52:13.612311   28599 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:13.612736   28599 main.go:141] libmachine: Using API Version  1
	I0717 00:52:13.612758   28599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:13.613072   28599 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:13.613290   28599 main.go:141] libmachine: (ha-029113-m04) Calling .GetState
	I0717 00:52:13.614751   28599 status.go:330] ha-029113-m04 host status = "Running" (err=<nil>)
	I0717 00:52:13.614766   28599 host.go:66] Checking if "ha-029113-m04" exists ...
	I0717 00:52:13.615051   28599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:13.615088   28599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:13.629879   28599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46189
	I0717 00:52:13.630295   28599 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:13.630736   28599 main.go:141] libmachine: Using API Version  1
	I0717 00:52:13.630768   28599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:13.631057   28599 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:13.631232   28599 main.go:141] libmachine: (ha-029113-m04) Calling .GetIP
	I0717 00:52:13.633838   28599 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:52:13.634307   28599 main.go:141] libmachine: (ha-029113-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:a4:ba", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:48:45 +0000 UTC Type:0 Mac:52:54:00:be:a4:ba Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-029113-m04 Clientid:01:52:54:00:be:a4:ba}
	I0717 00:52:13.634339   28599 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:52:13.634453   28599 host.go:66] Checking if "ha-029113-m04" exists ...
	I0717 00:52:13.634785   28599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:13.634818   28599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:13.649564   28599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46561
	I0717 00:52:13.649885   28599 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:13.650289   28599 main.go:141] libmachine: Using API Version  1
	I0717 00:52:13.650308   28599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:13.650609   28599 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:13.650789   28599 main.go:141] libmachine: (ha-029113-m04) Calling .DriverName
	I0717 00:52:13.650969   28599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:52:13.651006   28599 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHHostname
	I0717 00:52:13.653599   28599 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:52:13.653964   28599 main.go:141] libmachine: (ha-029113-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:a4:ba", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:48:45 +0000 UTC Type:0 Mac:52:54:00:be:a4:ba Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-029113-m04 Clientid:01:52:54:00:be:a4:ba}
	I0717 00:52:13.653987   28599 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:52:13.654147   28599 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHPort
	I0717 00:52:13.654347   28599 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHKeyPath
	I0717 00:52:13.654564   28599 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHUsername
	I0717 00:52:13.654718   28599 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m04/id_rsa Username:docker}
	I0717 00:52:13.737790   28599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:52:13.752187   28599 status.go:257] ha-029113-m04 status: &{Name:ha-029113-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
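
The probe traced above reports per-node health by SSHing into each VM and running `sh -c "df -h /var | awk 'NR==2{print $5}'"` plus `sudo systemctl is-active --quiet service kubelet`. The following is a minimal, illustrative Go sketch of that kind of remote disk-usage check, not minikube's actual ssh_runner code: it uses golang.org/x/crypto/ssh directly, parses the fifth df column in Go instead of piping through awk, and the host, user, and key path in main are hypothetical placeholders standing in for the values the log derives from the DHCP lease and machine directory.

package main

import (
	"fmt"
	"log"
	"os"
	"strings"

	"golang.org/x/crypto/ssh"
)

// diskUsage dials a node over SSH, runs `df -h /var`, and returns the
// use% column of the second line, mirroring the status probe in the log.
func diskUsage(addr, user, keyPath string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM, not for production
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()

	out, err := session.Output("df -h /var")
	if err != nil {
		return "", err
	}
	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
	if len(lines) < 2 {
		return "", fmt.Errorf("unexpected df output: %q", out)
	}
	fields := strings.Fields(lines[1])
	if len(fields) < 5 {
		return "", fmt.Errorf("unexpected df output: %q", out)
	}
	return fields[4], nil // the Use% column
}

func main() {
	// Hypothetical values for illustration only.
	usage, err := diskUsage("192.168.39.48:22", "docker", "/path/to/id_rsa")
	if err != nil {
		log.Fatalf("status probe failed: %v", err)
	}
	fmt.Println("/var usage:", usage)
}
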
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-029113 status -v=7 --alsologtostderr: exit status 3 (4.881039337s)

                                                
                                                
-- stdout --
	ha-029113
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-029113-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-029113-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-029113-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 00:52:15.062743   28700 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:52:15.062831   28700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:52:15.062838   28700 out.go:304] Setting ErrFile to fd 2...
	I0717 00:52:15.062843   28700 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:52:15.063031   28700 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 00:52:15.063181   28700 out.go:298] Setting JSON to false
	I0717 00:52:15.063209   28700 mustload.go:65] Loading cluster: ha-029113
	I0717 00:52:15.063248   28700 notify.go:220] Checking for updates...
	I0717 00:52:15.063519   28700 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:52:15.063531   28700 status.go:255] checking status of ha-029113 ...
	I0717 00:52:15.063954   28700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:15.064013   28700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:15.082571   28700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39243
	I0717 00:52:15.082961   28700 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:15.083513   28700 main.go:141] libmachine: Using API Version  1
	I0717 00:52:15.083550   28700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:15.083940   28700 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:15.084155   28700 main.go:141] libmachine: (ha-029113) Calling .GetState
	I0717 00:52:15.085601   28700 status.go:330] ha-029113 host status = "Running" (err=<nil>)
	I0717 00:52:15.085615   28700 host.go:66] Checking if "ha-029113" exists ...
	I0717 00:52:15.085958   28700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:15.086005   28700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:15.101125   28700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45419
	I0717 00:52:15.101481   28700 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:15.101919   28700 main.go:141] libmachine: Using API Version  1
	I0717 00:52:15.101943   28700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:15.102224   28700 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:15.102383   28700 main.go:141] libmachine: (ha-029113) Calling .GetIP
	I0717 00:52:15.105125   28700 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:52:15.105527   28700 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:52:15.105549   28700 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:52:15.105719   28700 host.go:66] Checking if "ha-029113" exists ...
	I0717 00:52:15.106006   28700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:15.106057   28700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:15.120576   28700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45243
	I0717 00:52:15.121004   28700 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:15.121496   28700 main.go:141] libmachine: Using API Version  1
	I0717 00:52:15.121515   28700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:15.121892   28700 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:15.122069   28700 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:52:15.122302   28700 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:52:15.122343   28700 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:52:15.125478   28700 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:52:15.125990   28700 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:52:15.126014   28700 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:52:15.126167   28700 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:52:15.126355   28700 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:52:15.126521   28700 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:52:15.126697   28700 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:52:15.206310   28700 ssh_runner.go:195] Run: systemctl --version
	I0717 00:52:15.212728   28700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:52:15.228394   28700 kubeconfig.go:125] found "ha-029113" server: "https://192.168.39.254:8443"
	I0717 00:52:15.228427   28700 api_server.go:166] Checking apiserver status ...
	I0717 00:52:15.228470   28700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:52:15.246070   28700 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup
	W0717 00:52:15.258390   28700 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:52:15.258450   28700 ssh_runner.go:195] Run: ls
	I0717 00:52:15.263412   28700 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:52:15.269723   28700 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:52:15.269750   28700 status.go:422] ha-029113 apiserver status = Running (err=<nil>)
	I0717 00:52:15.269761   28700 status.go:257] ha-029113 status: &{Name:ha-029113 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:52:15.269785   28700 status.go:255] checking status of ha-029113-m02 ...
	I0717 00:52:15.270132   28700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:15.270176   28700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:15.285544   28700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40393
	I0717 00:52:15.285997   28700 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:15.286489   28700 main.go:141] libmachine: Using API Version  1
	I0717 00:52:15.286510   28700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:15.286797   28700 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:15.286955   28700 main.go:141] libmachine: (ha-029113-m02) Calling .GetState
	I0717 00:52:15.288450   28700 status.go:330] ha-029113-m02 host status = "Running" (err=<nil>)
	I0717 00:52:15.288478   28700 host.go:66] Checking if "ha-029113-m02" exists ...
	I0717 00:52:15.288743   28700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:15.288774   28700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:15.302997   28700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39969
	I0717 00:52:15.303360   28700 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:15.303824   28700 main.go:141] libmachine: Using API Version  1
	I0717 00:52:15.303847   28700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:15.304183   28700 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:15.304371   28700 main.go:141] libmachine: (ha-029113-m02) Calling .GetIP
	I0717 00:52:15.307020   28700 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:52:15.307420   28700 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:52:15.307447   28700 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:52:15.307598   28700 host.go:66] Checking if "ha-029113-m02" exists ...
	I0717 00:52:15.307883   28700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:15.307919   28700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:15.323142   28700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34855
	I0717 00:52:15.323599   28700 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:15.324034   28700 main.go:141] libmachine: Using API Version  1
	I0717 00:52:15.324057   28700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:15.324335   28700 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:15.324462   28700 main.go:141] libmachine: (ha-029113-m02) Calling .DriverName
	I0717 00:52:15.324638   28700 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:52:15.324655   28700 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	I0717 00:52:15.327280   28700 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:52:15.327820   28700 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:52:15.327846   28700 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:52:15.327986   28700 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHPort
	I0717 00:52:15.328143   28700 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:52:15.328321   28700 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHUsername
	I0717 00:52:15.328463   28700 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02/id_rsa Username:docker}
	W0717 00:52:16.474918   28700 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.166:22: connect: no route to host
	I0717 00:52:16.474992   28700 retry.go:31] will retry after 221.900396ms: dial tcp 192.168.39.166:22: connect: no route to host
	W0717 00:52:19.546883   28700 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.166:22: connect: no route to host
	W0717 00:52:19.546959   28700 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.166:22: connect: no route to host
	E0717 00:52:19.546972   28700 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.166:22: connect: no route to host
	I0717 00:52:19.547000   28700 status.go:257] ha-029113-m02 status: &{Name:ha-029113-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0717 00:52:19.547021   28700 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.166:22: connect: no route to host
	I0717 00:52:19.547028   28700 status.go:255] checking status of ha-029113-m03 ...
	I0717 00:52:19.547315   28700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:19.547353   28700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:19.562785   28700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44613
	I0717 00:52:19.563232   28700 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:19.563710   28700 main.go:141] libmachine: Using API Version  1
	I0717 00:52:19.563732   28700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:19.564022   28700 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:19.564212   28700 main.go:141] libmachine: (ha-029113-m03) Calling .GetState
	I0717 00:52:19.565904   28700 status.go:330] ha-029113-m03 host status = "Running" (err=<nil>)
	I0717 00:52:19.565921   28700 host.go:66] Checking if "ha-029113-m03" exists ...
	I0717 00:52:19.566242   28700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:19.566291   28700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:19.582250   28700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42583
	I0717 00:52:19.582646   28700 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:19.583107   28700 main.go:141] libmachine: Using API Version  1
	I0717 00:52:19.583129   28700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:19.583429   28700 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:19.583600   28700 main.go:141] libmachine: (ha-029113-m03) Calling .GetIP
	I0717 00:52:19.586334   28700 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:52:19.586738   28700 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:52:19.586770   28700 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:52:19.587056   28700 host.go:66] Checking if "ha-029113-m03" exists ...
	I0717 00:52:19.587444   28700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:19.587488   28700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:19.602711   28700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40279
	I0717 00:52:19.603140   28700 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:19.603582   28700 main.go:141] libmachine: Using API Version  1
	I0717 00:52:19.603601   28700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:19.603941   28700 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:19.604145   28700 main.go:141] libmachine: (ha-029113-m03) Calling .DriverName
	I0717 00:52:19.604316   28700 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:52:19.604335   28700 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	I0717 00:52:19.607502   28700 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:52:19.607971   28700 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:52:19.607990   28700 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:52:19.608168   28700 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHPort
	I0717 00:52:19.608398   28700 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:52:19.608539   28700 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHUsername
	I0717 00:52:19.608696   28700 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/id_rsa Username:docker}
	I0717 00:52:19.690498   28700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:52:19.704927   28700 kubeconfig.go:125] found "ha-029113" server: "https://192.168.39.254:8443"
	I0717 00:52:19.704960   28700 api_server.go:166] Checking apiserver status ...
	I0717 00:52:19.705006   28700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:52:19.719237   28700 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1575/cgroup
	W0717 00:52:19.728978   28700 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1575/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:52:19.729051   28700 ssh_runner.go:195] Run: ls
	I0717 00:52:19.733484   28700 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:52:19.738456   28700 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:52:19.738475   28700 status.go:422] ha-029113-m03 apiserver status = Running (err=<nil>)
	I0717 00:52:19.738483   28700 status.go:257] ha-029113-m03 status: &{Name:ha-029113-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:52:19.738497   28700 status.go:255] checking status of ha-029113-m04 ...
	I0717 00:52:19.738804   28700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:19.738839   28700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:19.754185   28700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38227
	I0717 00:52:19.754533   28700 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:19.755011   28700 main.go:141] libmachine: Using API Version  1
	I0717 00:52:19.755035   28700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:19.755346   28700 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:19.755540   28700 main.go:141] libmachine: (ha-029113-m04) Calling .GetState
	I0717 00:52:19.756983   28700 status.go:330] ha-029113-m04 host status = "Running" (err=<nil>)
	I0717 00:52:19.757001   28700 host.go:66] Checking if "ha-029113-m04" exists ...
	I0717 00:52:19.757326   28700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:19.757364   28700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:19.771918   28700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35427
	I0717 00:52:19.772263   28700 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:19.772660   28700 main.go:141] libmachine: Using API Version  1
	I0717 00:52:19.772678   28700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:19.773015   28700 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:19.773207   28700 main.go:141] libmachine: (ha-029113-m04) Calling .GetIP
	I0717 00:52:19.776127   28700 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:52:19.776597   28700 main.go:141] libmachine: (ha-029113-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:a4:ba", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:48:45 +0000 UTC Type:0 Mac:52:54:00:be:a4:ba Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-029113-m04 Clientid:01:52:54:00:be:a4:ba}
	I0717 00:52:19.776629   28700 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:52:19.776777   28700 host.go:66] Checking if "ha-029113-m04" exists ...
	I0717 00:52:19.777100   28700 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:19.777137   28700 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:19.791815   28700 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33459
	I0717 00:52:19.792195   28700 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:19.792615   28700 main.go:141] libmachine: Using API Version  1
	I0717 00:52:19.792642   28700 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:19.792956   28700 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:19.793151   28700 main.go:141] libmachine: (ha-029113-m04) Calling .DriverName
	I0717 00:52:19.793319   28700 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:52:19.793338   28700 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHHostname
	I0717 00:52:19.795874   28700 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:52:19.796302   28700 main.go:141] libmachine: (ha-029113-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:a4:ba", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:48:45 +0000 UTC Type:0 Mac:52:54:00:be:a4:ba Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-029113-m04 Clientid:01:52:54:00:be:a4:ba}
	I0717 00:52:19.796329   28700 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:52:19.796465   28700 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHPort
	I0717 00:52:19.796623   28700 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHKeyPath
	I0717 00:52:19.796768   28700 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHUsername
	I0717 00:52:19.796900   28700 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m04/id_rsa Username:docker}
	I0717 00:52:19.886152   28700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:52:19.900435   28700 status.go:257] ha-029113-m04 status: &{Name:ha-029113-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
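
For control-plane nodes the log additionally resolves the kubeconfig server (https://192.168.39.254:8443), pgreps for kube-apiserver, and then GETs /healthz, treating a 200 response with body "ok" as a running apiserver. Below is a short, self-contained Go sketch of that healthz probe under the stated assumptions: it is not the status command's real client, and it skips TLS verification purely to stay self-contained, whereas a real client would trust the cluster CA from the kubeconfig.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

// apiserverHealthy issues GET <server>/healthz and reports whether the
// response is 200 with body "ok", the same condition the log checks.
func apiserverHealthy(server string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch-only shortcut; a real probe would load the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(server + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	// The VIP from the log's kubeconfig entry; any reachable apiserver URL works.
	healthy, err := apiserverHealthy("https://192.168.39.254:8443")
	if err != nil {
		log.Fatalf("healthz probe failed: %v", err)
	}
	fmt.Println("apiserver healthy:", healthy)
}
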
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-029113 status -v=7 --alsologtostderr: exit status 3 (4.989093541s)

                                                
                                                
-- stdout --
	ha-029113
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-029113-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-029113-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-029113-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 00:52:21.097538   28800 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:52:21.097791   28800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:52:21.097801   28800 out.go:304] Setting ErrFile to fd 2...
	I0717 00:52:21.097805   28800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:52:21.097987   28800 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 00:52:21.098142   28800 out.go:298] Setting JSON to false
	I0717 00:52:21.098170   28800 mustload.go:65] Loading cluster: ha-029113
	I0717 00:52:21.098304   28800 notify.go:220] Checking for updates...
	I0717 00:52:21.098654   28800 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:52:21.098675   28800 status.go:255] checking status of ha-029113 ...
	I0717 00:52:21.099142   28800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:21.099193   28800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:21.117423   28800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42737
	I0717 00:52:21.117852   28800 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:21.118485   28800 main.go:141] libmachine: Using API Version  1
	I0717 00:52:21.118527   28800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:21.118899   28800 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:21.119082   28800 main.go:141] libmachine: (ha-029113) Calling .GetState
	I0717 00:52:21.120631   28800 status.go:330] ha-029113 host status = "Running" (err=<nil>)
	I0717 00:52:21.120643   28800 host.go:66] Checking if "ha-029113" exists ...
	I0717 00:52:21.120912   28800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:21.120951   28800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:21.136305   28800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40013
	I0717 00:52:21.136773   28800 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:21.137226   28800 main.go:141] libmachine: Using API Version  1
	I0717 00:52:21.137243   28800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:21.137519   28800 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:21.137670   28800 main.go:141] libmachine: (ha-029113) Calling .GetIP
	I0717 00:52:21.140323   28800 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:52:21.140668   28800 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:52:21.140696   28800 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:52:21.140858   28800 host.go:66] Checking if "ha-029113" exists ...
	I0717 00:52:21.141136   28800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:21.141168   28800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:21.157151   28800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44229
	I0717 00:52:21.157552   28800 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:21.157985   28800 main.go:141] libmachine: Using API Version  1
	I0717 00:52:21.158008   28800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:21.158372   28800 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:21.158578   28800 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:52:21.158791   28800 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:52:21.158819   28800 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:52:21.161252   28800 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:52:21.161653   28800 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:52:21.161686   28800 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:52:21.161820   28800 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:52:21.161979   28800 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:52:21.162115   28800 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:52:21.162249   28800 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:52:21.239676   28800 ssh_runner.go:195] Run: systemctl --version
	I0717 00:52:21.245693   28800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:52:21.260848   28800 kubeconfig.go:125] found "ha-029113" server: "https://192.168.39.254:8443"
	I0717 00:52:21.260872   28800 api_server.go:166] Checking apiserver status ...
	I0717 00:52:21.260901   28800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:52:21.276828   28800 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup
	W0717 00:52:21.287038   28800 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:52:21.287092   28800 ssh_runner.go:195] Run: ls
	I0717 00:52:21.291778   28800 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:52:21.297168   28800 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:52:21.297192   28800 status.go:422] ha-029113 apiserver status = Running (err=<nil>)
	I0717 00:52:21.297204   28800 status.go:257] ha-029113 status: &{Name:ha-029113 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:52:21.297225   28800 status.go:255] checking status of ha-029113-m02 ...
	I0717 00:52:21.297567   28800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:21.297600   28800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:21.312330   28800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34495
	I0717 00:52:21.312684   28800 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:21.313150   28800 main.go:141] libmachine: Using API Version  1
	I0717 00:52:21.313170   28800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:21.313482   28800 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:21.313678   28800 main.go:141] libmachine: (ha-029113-m02) Calling .GetState
	I0717 00:52:21.315191   28800 status.go:330] ha-029113-m02 host status = "Running" (err=<nil>)
	I0717 00:52:21.315206   28800 host.go:66] Checking if "ha-029113-m02" exists ...
	I0717 00:52:21.315608   28800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:21.315675   28800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:21.331592   28800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32817
	I0717 00:52:21.332003   28800 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:21.332524   28800 main.go:141] libmachine: Using API Version  1
	I0717 00:52:21.332550   28800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:21.332860   28800 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:21.333080   28800 main.go:141] libmachine: (ha-029113-m02) Calling .GetIP
	I0717 00:52:21.335714   28800 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:52:21.336200   28800 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:52:21.336220   28800 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:52:21.336338   28800 host.go:66] Checking if "ha-029113-m02" exists ...
	I0717 00:52:21.336668   28800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:21.336707   28800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:21.350895   28800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39213
	I0717 00:52:21.351293   28800 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:21.351785   28800 main.go:141] libmachine: Using API Version  1
	I0717 00:52:21.351812   28800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:21.352095   28800 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:21.352267   28800 main.go:141] libmachine: (ha-029113-m02) Calling .DriverName
	I0717 00:52:21.352431   28800 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:52:21.352450   28800 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	I0717 00:52:21.354947   28800 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:52:21.355311   28800 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:52:21.355344   28800 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:52:21.355435   28800 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHPort
	I0717 00:52:21.355601   28800 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:52:21.355793   28800 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHUsername
	I0717 00:52:21.355927   28800 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02/id_rsa Username:docker}
	W0717 00:52:22.618890   28800 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.166:22: connect: no route to host
	I0717 00:52:22.618931   28800 retry.go:31] will retry after 279.909798ms: dial tcp 192.168.39.166:22: connect: no route to host
	W0717 00:52:25.694807   28800 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.166:22: connect: no route to host
	W0717 00:52:25.694899   28800 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.166:22: connect: no route to host
	E0717 00:52:25.694915   28800 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.166:22: connect: no route to host
	I0717 00:52:25.694922   28800 status.go:257] ha-029113-m02 status: &{Name:ha-029113-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0717 00:52:25.694939   28800 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.166:22: connect: no route to host
	I0717 00:52:25.694954   28800 status.go:255] checking status of ha-029113-m03 ...
	I0717 00:52:25.695300   28800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:25.695351   28800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:25.709837   28800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40571
	I0717 00:52:25.710261   28800 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:25.710746   28800 main.go:141] libmachine: Using API Version  1
	I0717 00:52:25.710779   28800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:25.711082   28800 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:25.711252   28800 main.go:141] libmachine: (ha-029113-m03) Calling .GetState
	I0717 00:52:25.712636   28800 status.go:330] ha-029113-m03 host status = "Running" (err=<nil>)
	I0717 00:52:25.712650   28800 host.go:66] Checking if "ha-029113-m03" exists ...
	I0717 00:52:25.713000   28800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:25.713058   28800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:25.727944   28800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45637
	I0717 00:52:25.728301   28800 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:25.728731   28800 main.go:141] libmachine: Using API Version  1
	I0717 00:52:25.728752   28800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:25.729071   28800 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:25.729256   28800 main.go:141] libmachine: (ha-029113-m03) Calling .GetIP
	I0717 00:52:25.732073   28800 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:52:25.732428   28800 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:52:25.732453   28800 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:52:25.732578   28800 host.go:66] Checking if "ha-029113-m03" exists ...
	I0717 00:52:25.732862   28800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:25.732897   28800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:25.747620   28800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40041
	I0717 00:52:25.748032   28800 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:25.748466   28800 main.go:141] libmachine: Using API Version  1
	I0717 00:52:25.748488   28800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:25.748778   28800 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:25.748935   28800 main.go:141] libmachine: (ha-029113-m03) Calling .DriverName
	I0717 00:52:25.749101   28800 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:52:25.749120   28800 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	I0717 00:52:25.751537   28800 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:52:25.751931   28800 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:52:25.751967   28800 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:52:25.752091   28800 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHPort
	I0717 00:52:25.752265   28800 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:52:25.752405   28800 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHUsername
	I0717 00:52:25.752552   28800 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/id_rsa Username:docker}
	I0717 00:52:25.831617   28800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:52:25.849104   28800 kubeconfig.go:125] found "ha-029113" server: "https://192.168.39.254:8443"
	I0717 00:52:25.849128   28800 api_server.go:166] Checking apiserver status ...
	I0717 00:52:25.849155   28800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:52:25.869420   28800 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1575/cgroup
	W0717 00:52:25.878883   28800 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1575/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:52:25.878928   28800 ssh_runner.go:195] Run: ls
	I0717 00:52:25.884255   28800 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:52:25.889078   28800 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:52:25.889104   28800 status.go:422] ha-029113-m03 apiserver status = Running (err=<nil>)
	I0717 00:52:25.889120   28800 status.go:257] ha-029113-m03 status: &{Name:ha-029113-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:52:25.889136   28800 status.go:255] checking status of ha-029113-m04 ...
	I0717 00:52:25.889435   28800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:25.889469   28800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:25.904253   28800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33205
	I0717 00:52:25.904650   28800 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:25.905052   28800 main.go:141] libmachine: Using API Version  1
	I0717 00:52:25.905075   28800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:25.905399   28800 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:25.905544   28800 main.go:141] libmachine: (ha-029113-m04) Calling .GetState
	I0717 00:52:25.907051   28800 status.go:330] ha-029113-m04 host status = "Running" (err=<nil>)
	I0717 00:52:25.907074   28800 host.go:66] Checking if "ha-029113-m04" exists ...
	I0717 00:52:25.907391   28800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:25.907445   28800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:25.922283   28800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35391
	I0717 00:52:25.922769   28800 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:25.923214   28800 main.go:141] libmachine: Using API Version  1
	I0717 00:52:25.923233   28800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:25.923529   28800 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:25.923710   28800 main.go:141] libmachine: (ha-029113-m04) Calling .GetIP
	I0717 00:52:25.926304   28800 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:52:25.926737   28800 main.go:141] libmachine: (ha-029113-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:a4:ba", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:48:45 +0000 UTC Type:0 Mac:52:54:00:be:a4:ba Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-029113-m04 Clientid:01:52:54:00:be:a4:ba}
	I0717 00:52:25.926771   28800 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:52:25.926880   28800 host.go:66] Checking if "ha-029113-m04" exists ...
	I0717 00:52:25.927174   28800 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:25.927220   28800 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:25.942780   28800 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41361
	I0717 00:52:25.943131   28800 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:25.943517   28800 main.go:141] libmachine: Using API Version  1
	I0717 00:52:25.943537   28800 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:25.943798   28800 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:25.943977   28800 main.go:141] libmachine: (ha-029113-m04) Calling .DriverName
	I0717 00:52:25.944159   28800 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:52:25.944178   28800 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHHostname
	I0717 00:52:25.946633   28800 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:52:25.947073   28800 main.go:141] libmachine: (ha-029113-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:a4:ba", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:48:45 +0000 UTC Type:0 Mac:52:54:00:be:a4:ba Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-029113-m04 Clientid:01:52:54:00:be:a4:ba}
	I0717 00:52:25.947113   28800 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:52:25.947225   28800 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHPort
	I0717 00:52:25.947379   28800 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHKeyPath
	I0717 00:52:25.947491   28800 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHUsername
	I0717 00:52:25.947578   28800 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m04/id_rsa Username:docker}
	I0717 00:52:26.029746   28800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:52:26.044884   28800 status.go:257] ha-029113-m04 status: &{Name:ha-029113-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
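
The recurring failure in these runs is the SSH dial to ha-029113-m02 (192.168.39.166:22) returning "connect: no route to host"; the probe retries briefly, then gives up and reports the node as host: Error with kubelet/apiserver Nonexistent. A minimal Go sketch of that bounded dial-and-retry pattern is below; it is an assumption-level illustration of the behavior visible in the log (sshutil.go retry lines), not the retry helper minikube actually uses.

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry attempts a TCP dial to the node's SSH port, sleeps briefly
// between failures, and gives up after a fixed number of attempts so the
// caller can mark the host as Error, as seen for ha-029113-m02 above.
func dialWithRetry(addr string, attempts int, backoff time.Duration) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err // e.g. "connect: no route to host" while the VM is stopped
		time.Sleep(backoff)
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
}

func main() {
	// The stopped secondary control plane from the log; expected to fail while it is down.
	conn, err := dialWithRetry("192.168.39.166:22", 3, 250*time.Millisecond)
	if err != nil {
		// In the status output this path corresponds to: host: Error, kubelet: Nonexistent.
		fmt.Println("node unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("node reachable")
}
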
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-029113 status -v=7 --alsologtostderr: exit status 3 (3.71693698s)

                                                
                                                
-- stdout --
	ha-029113
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-029113-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-029113-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-029113-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 00:52:29.342789   28915 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:52:29.343044   28915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:52:29.343054   28915 out.go:304] Setting ErrFile to fd 2...
	I0717 00:52:29.343058   28915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:52:29.343262   28915 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 00:52:29.343456   28915 out.go:298] Setting JSON to false
	I0717 00:52:29.343486   28915 mustload.go:65] Loading cluster: ha-029113
	I0717 00:52:29.343518   28915 notify.go:220] Checking for updates...
	I0717 00:52:29.343900   28915 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:52:29.343915   28915 status.go:255] checking status of ha-029113 ...
	I0717 00:52:29.344312   28915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:29.344356   28915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:29.364554   28915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38947
	I0717 00:52:29.365026   28915 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:29.365530   28915 main.go:141] libmachine: Using API Version  1
	I0717 00:52:29.365550   28915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:29.365910   28915 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:29.366114   28915 main.go:141] libmachine: (ha-029113) Calling .GetState
	I0717 00:52:29.367587   28915 status.go:330] ha-029113 host status = "Running" (err=<nil>)
	I0717 00:52:29.367602   28915 host.go:66] Checking if "ha-029113" exists ...
	I0717 00:52:29.367939   28915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:29.367979   28915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:29.382432   28915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46405
	I0717 00:52:29.382884   28915 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:29.383309   28915 main.go:141] libmachine: Using API Version  1
	I0717 00:52:29.383331   28915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:29.383689   28915 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:29.383861   28915 main.go:141] libmachine: (ha-029113) Calling .GetIP
	I0717 00:52:29.386586   28915 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:52:29.386892   28915 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:52:29.386915   28915 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:52:29.387066   28915 host.go:66] Checking if "ha-029113" exists ...
	I0717 00:52:29.387363   28915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:29.387397   28915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:29.402889   28915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46577
	I0717 00:52:29.403357   28915 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:29.403855   28915 main.go:141] libmachine: Using API Version  1
	I0717 00:52:29.403874   28915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:29.404201   28915 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:29.404423   28915 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:52:29.404613   28915 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:52:29.404633   28915 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:52:29.407218   28915 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:52:29.407628   28915 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:52:29.407658   28915 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:52:29.407784   28915 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:52:29.407995   28915 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:52:29.408131   28915 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:52:29.408256   28915 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:52:29.486878   28915 ssh_runner.go:195] Run: systemctl --version
	I0717 00:52:29.493556   28915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:52:29.509569   28915 kubeconfig.go:125] found "ha-029113" server: "https://192.168.39.254:8443"
	I0717 00:52:29.509600   28915 api_server.go:166] Checking apiserver status ...
	I0717 00:52:29.509646   28915 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:52:29.524267   28915 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup
	W0717 00:52:29.535336   28915 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:52:29.535384   28915 ssh_runner.go:195] Run: ls
	I0717 00:52:29.540609   28915 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:52:29.547103   28915 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:52:29.547127   28915 status.go:422] ha-029113 apiserver status = Running (err=<nil>)
	I0717 00:52:29.547136   28915 status.go:257] ha-029113 status: &{Name:ha-029113 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:52:29.547151   28915 status.go:255] checking status of ha-029113-m02 ...
	I0717 00:52:29.547414   28915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:29.547446   28915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:29.562021   28915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37343
	I0717 00:52:29.562422   28915 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:29.562893   28915 main.go:141] libmachine: Using API Version  1
	I0717 00:52:29.562912   28915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:29.563189   28915 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:29.563355   28915 main.go:141] libmachine: (ha-029113-m02) Calling .GetState
	I0717 00:52:29.564818   28915 status.go:330] ha-029113-m02 host status = "Running" (err=<nil>)
	I0717 00:52:29.564839   28915 host.go:66] Checking if "ha-029113-m02" exists ...
	I0717 00:52:29.565126   28915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:29.565165   28915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:29.579845   28915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35285
	I0717 00:52:29.580262   28915 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:29.580707   28915 main.go:141] libmachine: Using API Version  1
	I0717 00:52:29.580729   28915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:29.581030   28915 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:29.581232   28915 main.go:141] libmachine: (ha-029113-m02) Calling .GetIP
	I0717 00:52:29.584130   28915 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:52:29.584593   28915 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:52:29.584610   28915 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:52:29.584805   28915 host.go:66] Checking if "ha-029113-m02" exists ...
	I0717 00:52:29.585146   28915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:29.585187   28915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:29.599858   28915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40613
	I0717 00:52:29.600219   28915 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:29.600670   28915 main.go:141] libmachine: Using API Version  1
	I0717 00:52:29.600688   28915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:29.600991   28915 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:29.601157   28915 main.go:141] libmachine: (ha-029113-m02) Calling .DriverName
	I0717 00:52:29.601308   28915 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:52:29.601324   28915 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	I0717 00:52:29.604160   28915 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:52:29.604515   28915 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:52:29.604542   28915 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:52:29.604692   28915 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHPort
	I0717 00:52:29.604858   28915 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:52:29.605008   28915 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHUsername
	I0717 00:52:29.605145   28915 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02/id_rsa Username:docker}
	W0717 00:52:32.666824   28915 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.166:22: connect: no route to host
	W0717 00:52:32.666956   28915 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.166:22: connect: no route to host
	E0717 00:52:32.666980   28915 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.166:22: connect: no route to host
	I0717 00:52:32.666992   28915 status.go:257] ha-029113-m02 status: &{Name:ha-029113-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0717 00:52:32.667015   28915 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.166:22: connect: no route to host
	I0717 00:52:32.667030   28915 status.go:255] checking status of ha-029113-m03 ...
	I0717 00:52:32.667348   28915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:32.667401   28915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:32.683094   28915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40763
	I0717 00:52:32.683586   28915 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:32.684106   28915 main.go:141] libmachine: Using API Version  1
	I0717 00:52:32.684128   28915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:32.684442   28915 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:32.684657   28915 main.go:141] libmachine: (ha-029113-m03) Calling .GetState
	I0717 00:52:32.686375   28915 status.go:330] ha-029113-m03 host status = "Running" (err=<nil>)
	I0717 00:52:32.686391   28915 host.go:66] Checking if "ha-029113-m03" exists ...
	I0717 00:52:32.686763   28915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:32.686807   28915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:32.701629   28915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34517
	I0717 00:52:32.701997   28915 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:32.702523   28915 main.go:141] libmachine: Using API Version  1
	I0717 00:52:32.702569   28915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:32.702887   28915 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:32.703080   28915 main.go:141] libmachine: (ha-029113-m03) Calling .GetIP
	I0717 00:52:32.705622   28915 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:52:32.705995   28915 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:52:32.706016   28915 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:52:32.706126   28915 host.go:66] Checking if "ha-029113-m03" exists ...
	I0717 00:52:32.706406   28915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:32.706441   28915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:32.721734   28915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46811
	I0717 00:52:32.722122   28915 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:32.722535   28915 main.go:141] libmachine: Using API Version  1
	I0717 00:52:32.722574   28915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:32.722967   28915 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:32.723152   28915 main.go:141] libmachine: (ha-029113-m03) Calling .DriverName
	I0717 00:52:32.723316   28915 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:52:32.723335   28915 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	I0717 00:52:32.726094   28915 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:52:32.726539   28915 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:52:32.726593   28915 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:52:32.726721   28915 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHPort
	I0717 00:52:32.726889   28915 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:52:32.727049   28915 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHUsername
	I0717 00:52:32.727180   28915 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/id_rsa Username:docker}
	I0717 00:52:32.806777   28915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:52:32.822306   28915 kubeconfig.go:125] found "ha-029113" server: "https://192.168.39.254:8443"
	I0717 00:52:32.822337   28915 api_server.go:166] Checking apiserver status ...
	I0717 00:52:32.822372   28915 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:52:32.839222   28915 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1575/cgroup
	W0717 00:52:32.848650   28915 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1575/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:52:32.848708   28915 ssh_runner.go:195] Run: ls
	I0717 00:52:32.853476   28915 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:52:32.860009   28915 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:52:32.860035   28915 status.go:422] ha-029113-m03 apiserver status = Running (err=<nil>)
	I0717 00:52:32.860046   28915 status.go:257] ha-029113-m03 status: &{Name:ha-029113-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:52:32.860074   28915 status.go:255] checking status of ha-029113-m04 ...
	I0717 00:52:32.860485   28915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:32.860531   28915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:32.875285   28915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33709
	I0717 00:52:32.875642   28915 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:32.876097   28915 main.go:141] libmachine: Using API Version  1
	I0717 00:52:32.876118   28915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:32.876408   28915 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:32.876598   28915 main.go:141] libmachine: (ha-029113-m04) Calling .GetState
	I0717 00:52:32.878042   28915 status.go:330] ha-029113-m04 host status = "Running" (err=<nil>)
	I0717 00:52:32.878057   28915 host.go:66] Checking if "ha-029113-m04" exists ...
	I0717 00:52:32.878348   28915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:32.878379   28915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:32.892903   28915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37987
	I0717 00:52:32.893350   28915 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:32.893888   28915 main.go:141] libmachine: Using API Version  1
	I0717 00:52:32.893916   28915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:32.894221   28915 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:32.894382   28915 main.go:141] libmachine: (ha-029113-m04) Calling .GetIP
	I0717 00:52:32.897159   28915 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:52:32.897598   28915 main.go:141] libmachine: (ha-029113-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:a4:ba", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:48:45 +0000 UTC Type:0 Mac:52:54:00:be:a4:ba Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-029113-m04 Clientid:01:52:54:00:be:a4:ba}
	I0717 00:52:32.897615   28915 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:52:32.897797   28915 host.go:66] Checking if "ha-029113-m04" exists ...
	I0717 00:52:32.898082   28915 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:32.898121   28915 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:32.913473   28915 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37537
	I0717 00:52:32.913850   28915 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:32.914378   28915 main.go:141] libmachine: Using API Version  1
	I0717 00:52:32.914401   28915 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:32.914724   28915 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:32.914906   28915 main.go:141] libmachine: (ha-029113-m04) Calling .DriverName
	I0717 00:52:32.915094   28915 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:52:32.915111   28915 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHHostname
	I0717 00:52:32.917986   28915 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:52:32.918402   28915 main.go:141] libmachine: (ha-029113-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:a4:ba", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:48:45 +0000 UTC Type:0 Mac:52:54:00:be:a4:ba Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-029113-m04 Clientid:01:52:54:00:be:a4:ba}
	I0717 00:52:32.918431   28915 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:52:32.918574   28915 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHPort
	I0717 00:52:32.918747   28915 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHKeyPath
	I0717 00:52:32.918901   28915 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHUsername
	I0717 00:52:32.919034   28915 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m04/id_rsa Username:docker}
	I0717 00:52:33.002474   28915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:52:33.019340   28915 status.go:257] ha-029113-m04 status: &{Name:ha-029113-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
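The stderr trace above shows the fixed per-node probe that the status command runs against each control-plane member: open an SSH session, check storage with df -h /var, confirm the kubelet unit and a kube-apiserver process, then query the shared https://192.168.39.254:8443/healthz endpoint. A minimal Go sketch of that sequence follows; the helper runOverSSH, the use of the system ssh binary, and the InsecureSkipVerify TLS client are illustrative assumptions, not minikube's actual ssh_runner/api_server code.

// Hedged sketch of the per-node probe sequence visible in the log above
// (storage check, kubelet/apiserver checks, then /healthz). Helper names and
// the system ssh binary are assumptions for illustration only.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"time"
)

// runOverSSH is a hypothetical stand-in for minikube's ssh_runner: it runs a
// single command on the node and returns combined output.
func runOverSSH(user, host, keyPath, cmd string) (string, error) {
	out, err := exec.Command("ssh",
		"-i", keyPath,
		"-o", "StrictHostKeyChecking=no",
		"-o", "ConnectTimeout=5",
		fmt.Sprintf("%s@%s", user, host), cmd).CombinedOutput()
	return string(out), err
}

func main() {
	node := "192.168.39.95" // control-plane IP taken from the log above
	key := "/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa"

	// 1. Storage capacity of /var, as in the log's `df -h /var | awk ...` step.
	out, err := runOverSSH("docker", node, key, `df -h /var | awk 'NR==2{print $5}'`)
	if err != nil {
		fmt.Println("host error:", err) // a dial failure here is reported as Host:Error
		return
	}
	fmt.Println("/var usage:", out)

	// 2. Is the kubelet unit active, and is a kube-apiserver process running?
	_, kubeletErr := runOverSSH("docker", node, key, "sudo systemctl is-active --quiet service kubelet")
	_, apiserverErr := runOverSSH("docker", node, key, "sudo pgrep -xnf kube-apiserver.*minikube.*")
	fmt.Println("kubelet active:", kubeletErr == nil, "apiserver process:", apiserverErr == nil)

	// 3. Hit the shared endpoint's /healthz, mirroring the api_server.go check.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz error:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz status:", resp.StatusCode) // 200 maps to "apiserver status = Running"
}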
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-029113 status -v=7 --alsologtostderr: exit status 3 (4.018136078s)

                                                
                                                
-- stdout --
	ha-029113
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-029113-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-029113-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-029113-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 00:52:35.472678   29014 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:52:35.472944   29014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:52:35.472955   29014 out.go:304] Setting ErrFile to fd 2...
	I0717 00:52:35.472961   29014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:52:35.473179   29014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 00:52:35.473354   29014 out.go:298] Setting JSON to false
	I0717 00:52:35.473389   29014 mustload.go:65] Loading cluster: ha-029113
	I0717 00:52:35.473503   29014 notify.go:220] Checking for updates...
	I0717 00:52:35.473832   29014 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:52:35.473849   29014 status.go:255] checking status of ha-029113 ...
	I0717 00:52:35.474334   29014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:35.474399   29014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:35.494457   29014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43003
	I0717 00:52:35.494945   29014 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:35.495490   29014 main.go:141] libmachine: Using API Version  1
	I0717 00:52:35.495515   29014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:35.495922   29014 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:35.496112   29014 main.go:141] libmachine: (ha-029113) Calling .GetState
	I0717 00:52:35.497527   29014 status.go:330] ha-029113 host status = "Running" (err=<nil>)
	I0717 00:52:35.497542   29014 host.go:66] Checking if "ha-029113" exists ...
	I0717 00:52:35.497841   29014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:35.497883   29014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:35.512640   29014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42387
	I0717 00:52:35.513020   29014 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:35.513465   29014 main.go:141] libmachine: Using API Version  1
	I0717 00:52:35.513482   29014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:35.513769   29014 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:35.513933   29014 main.go:141] libmachine: (ha-029113) Calling .GetIP
	I0717 00:52:35.516503   29014 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:52:35.516865   29014 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:52:35.516900   29014 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:52:35.517005   29014 host.go:66] Checking if "ha-029113" exists ...
	I0717 00:52:35.517305   29014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:35.517355   29014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:35.531969   29014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38733
	I0717 00:52:35.532350   29014 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:35.532788   29014 main.go:141] libmachine: Using API Version  1
	I0717 00:52:35.532811   29014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:35.533079   29014 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:35.533257   29014 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:52:35.533443   29014 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:52:35.533469   29014 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:52:35.536080   29014 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:52:35.536492   29014 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:52:35.536523   29014 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:52:35.536708   29014 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:52:35.536877   29014 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:52:35.537021   29014 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:52:35.537194   29014 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:52:35.614198   29014 ssh_runner.go:195] Run: systemctl --version
	I0717 00:52:35.620629   29014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:52:35.635834   29014 kubeconfig.go:125] found "ha-029113" server: "https://192.168.39.254:8443"
	I0717 00:52:35.635858   29014 api_server.go:166] Checking apiserver status ...
	I0717 00:52:35.635887   29014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:52:35.653560   29014 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup
	W0717 00:52:35.663232   29014 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:52:35.663278   29014 ssh_runner.go:195] Run: ls
	I0717 00:52:35.667686   29014 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:52:35.672072   29014 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:52:35.672092   29014 status.go:422] ha-029113 apiserver status = Running (err=<nil>)
	I0717 00:52:35.672101   29014 status.go:257] ha-029113 status: &{Name:ha-029113 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:52:35.672115   29014 status.go:255] checking status of ha-029113-m02 ...
	I0717 00:52:35.672476   29014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:35.672515   29014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:35.687604   29014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35819
	I0717 00:52:35.688059   29014 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:35.688482   29014 main.go:141] libmachine: Using API Version  1
	I0717 00:52:35.688503   29014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:35.688881   29014 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:35.689070   29014 main.go:141] libmachine: (ha-029113-m02) Calling .GetState
	I0717 00:52:35.690502   29014 status.go:330] ha-029113-m02 host status = "Running" (err=<nil>)
	I0717 00:52:35.690516   29014 host.go:66] Checking if "ha-029113-m02" exists ...
	I0717 00:52:35.690828   29014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:35.690865   29014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:35.705060   29014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44873
	I0717 00:52:35.705458   29014 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:35.705917   29014 main.go:141] libmachine: Using API Version  1
	I0717 00:52:35.705943   29014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:35.706207   29014 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:35.706369   29014 main.go:141] libmachine: (ha-029113-m02) Calling .GetIP
	I0717 00:52:35.709063   29014 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:52:35.709460   29014 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:52:35.709479   29014 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:52:35.709605   29014 host.go:66] Checking if "ha-029113-m02" exists ...
	I0717 00:52:35.709895   29014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:35.709936   29014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:35.725447   29014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34529
	I0717 00:52:35.725840   29014 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:35.726291   29014 main.go:141] libmachine: Using API Version  1
	I0717 00:52:35.726310   29014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:35.726639   29014 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:35.726835   29014 main.go:141] libmachine: (ha-029113-m02) Calling .DriverName
	I0717 00:52:35.727031   29014 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:52:35.727052   29014 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	I0717 00:52:35.729825   29014 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:52:35.730285   29014 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:52:35.730310   29014 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:52:35.730467   29014 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHPort
	I0717 00:52:35.730651   29014 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:52:35.730804   29014 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHUsername
	I0717 00:52:35.730933   29014 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02/id_rsa Username:docker}
	W0717 00:52:35.742694   29014 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.166:22: connect: no route to host
	I0717 00:52:35.742735   29014 retry.go:31] will retry after 289.191048ms: dial tcp 192.168.39.166:22: connect: no route to host
	W0717 00:52:39.102846   29014 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.166:22: connect: no route to host
	W0717 00:52:39.102928   29014 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.166:22: connect: no route to host
	E0717 00:52:39.102942   29014 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.166:22: connect: no route to host
	I0717 00:52:39.102949   29014 status.go:257] ha-029113-m02 status: &{Name:ha-029113-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0717 00:52:39.102964   29014 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.166:22: connect: no route to host
	I0717 00:52:39.102971   29014 status.go:255] checking status of ha-029113-m03 ...
	I0717 00:52:39.103273   29014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:39.103311   29014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:39.118283   29014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39429
	I0717 00:52:39.118726   29014 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:39.119217   29014 main.go:141] libmachine: Using API Version  1
	I0717 00:52:39.119242   29014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:39.119503   29014 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:39.119689   29014 main.go:141] libmachine: (ha-029113-m03) Calling .GetState
	I0717 00:52:39.121096   29014 status.go:330] ha-029113-m03 host status = "Running" (err=<nil>)
	I0717 00:52:39.121115   29014 host.go:66] Checking if "ha-029113-m03" exists ...
	I0717 00:52:39.121389   29014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:39.121425   29014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:39.135851   29014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35865
	I0717 00:52:39.136221   29014 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:39.136697   29014 main.go:141] libmachine: Using API Version  1
	I0717 00:52:39.136726   29014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:39.137011   29014 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:39.137222   29014 main.go:141] libmachine: (ha-029113-m03) Calling .GetIP
	I0717 00:52:39.139801   29014 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:52:39.140259   29014 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:52:39.140285   29014 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:52:39.140407   29014 host.go:66] Checking if "ha-029113-m03" exists ...
	I0717 00:52:39.140706   29014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:39.140741   29014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:39.155125   29014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45995
	I0717 00:52:39.155489   29014 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:39.155912   29014 main.go:141] libmachine: Using API Version  1
	I0717 00:52:39.155935   29014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:39.156217   29014 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:39.156366   29014 main.go:141] libmachine: (ha-029113-m03) Calling .DriverName
	I0717 00:52:39.156527   29014 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:52:39.156544   29014 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	I0717 00:52:39.158921   29014 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:52:39.159274   29014 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:52:39.159297   29014 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:52:39.159429   29014 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHPort
	I0717 00:52:39.159592   29014 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:52:39.159733   29014 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHUsername
	I0717 00:52:39.159906   29014 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/id_rsa Username:docker}
	I0717 00:52:39.239336   29014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:52:39.255044   29014 kubeconfig.go:125] found "ha-029113" server: "https://192.168.39.254:8443"
	I0717 00:52:39.255073   29014 api_server.go:166] Checking apiserver status ...
	I0717 00:52:39.255126   29014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:52:39.269469   29014 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1575/cgroup
	W0717 00:52:39.280280   29014 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1575/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:52:39.280326   29014 ssh_runner.go:195] Run: ls
	I0717 00:52:39.284876   29014 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:52:39.288972   29014 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:52:39.288997   29014 status.go:422] ha-029113-m03 apiserver status = Running (err=<nil>)
	I0717 00:52:39.289005   29014 status.go:257] ha-029113-m03 status: &{Name:ha-029113-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:52:39.289018   29014 status.go:255] checking status of ha-029113-m04 ...
	I0717 00:52:39.289333   29014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:39.289365   29014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:39.304153   29014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38347
	I0717 00:52:39.304634   29014 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:39.305202   29014 main.go:141] libmachine: Using API Version  1
	I0717 00:52:39.305225   29014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:39.305485   29014 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:39.305654   29014 main.go:141] libmachine: (ha-029113-m04) Calling .GetState
	I0717 00:52:39.307103   29014 status.go:330] ha-029113-m04 host status = "Running" (err=<nil>)
	I0717 00:52:39.307120   29014 host.go:66] Checking if "ha-029113-m04" exists ...
	I0717 00:52:39.307498   29014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:39.307541   29014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:39.322423   29014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34653
	I0717 00:52:39.322868   29014 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:39.323359   29014 main.go:141] libmachine: Using API Version  1
	I0717 00:52:39.323382   29014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:39.323652   29014 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:39.323833   29014 main.go:141] libmachine: (ha-029113-m04) Calling .GetIP
	I0717 00:52:39.326141   29014 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:52:39.326623   29014 main.go:141] libmachine: (ha-029113-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:a4:ba", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:48:45 +0000 UTC Type:0 Mac:52:54:00:be:a4:ba Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-029113-m04 Clientid:01:52:54:00:be:a4:ba}
	I0717 00:52:39.326646   29014 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:52:39.326809   29014 host.go:66] Checking if "ha-029113-m04" exists ...
	I0717 00:52:39.327124   29014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:39.327186   29014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:39.342818   29014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39521
	I0717 00:52:39.343168   29014 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:39.343541   29014 main.go:141] libmachine: Using API Version  1
	I0717 00:52:39.343554   29014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:39.343810   29014 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:39.344000   29014 main.go:141] libmachine: (ha-029113-m04) Calling .DriverName
	I0717 00:52:39.344160   29014 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:52:39.344181   29014 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHHostname
	I0717 00:52:39.346913   29014 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:52:39.347271   29014 main.go:141] libmachine: (ha-029113-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:a4:ba", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:48:45 +0000 UTC Type:0 Mac:52:54:00:be:a4:ba Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-029113-m04 Clientid:01:52:54:00:be:a4:ba}
	I0717 00:52:39.347294   29014 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:52:39.347395   29014 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHPort
	I0717 00:52:39.347506   29014 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHKeyPath
	I0717 00:52:39.347629   29014 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHUsername
	I0717 00:52:39.347850   29014 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m04/id_rsa Username:docker}
	I0717 00:52:39.434679   29014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:52:39.450997   29014 status.go:257] ha-029113-m04 status: &{Name:ha-029113-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
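Both status runs fail the same way on ha-029113-m02: the TCP dial to 192.168.39.166:22 returns connect: no route to host, sshutil retries briefly (the retry.go line above shows a ~289ms backoff), and once the retries are exhausted the node is reported as Host:Error with Kubelet and APIServer Nonexistent, which is what drives the overall exit status 3. A minimal Go sketch of that dial-and-retry behaviour follows; the attempt count and backoff are illustrative assumptions, not minikube's actual tuning.

// Hedged sketch of the retry behaviour suggested by the sshutil/retry lines in
// the log above: a TCP dial to the node's SSH port is attempted a few times
// with a short backoff, and a persistent "no route to host" is then surfaced
// as a host error.
package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry is a hypothetical helper: it tries to reach addr a fixed
// number of times and returns the last error if every attempt fails.
func dialWithRetry(addr string, attempts int, backoff time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		lastErr = err
		time.Sleep(backoff)
	}
	return lastErr
}

func main() {
	// 192.168.39.166:22 is ha-029113-m02's SSH endpoint from the log above.
	if err := dialWithRetry("192.168.39.166:22", 3, 300*time.Millisecond); err != nil {
		// With the node unreachable this prints something like
		// "dial tcp 192.168.39.166:22: connect: no route to host",
		// which the status command reports as Host:Error / Kubelet:Nonexistent.
		fmt.Println("host error:", err)
		return
	}
	fmt.Println("ssh port reachable")
}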
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-029113 status -v=7 --alsologtostderr: exit status 3 (3.702740213s)

                                                
                                                
-- stdout --
	ha-029113
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-029113-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-029113-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-029113-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 00:52:46.081646   29131 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:52:46.081743   29131 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:52:46.081753   29131 out.go:304] Setting ErrFile to fd 2...
	I0717 00:52:46.081759   29131 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:52:46.081973   29131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 00:52:46.082175   29131 out.go:298] Setting JSON to false
	I0717 00:52:46.082212   29131 mustload.go:65] Loading cluster: ha-029113
	I0717 00:52:46.082340   29131 notify.go:220] Checking for updates...
	I0717 00:52:46.082785   29131 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:52:46.082804   29131 status.go:255] checking status of ha-029113 ...
	I0717 00:52:46.083304   29131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:46.083361   29131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:46.101422   29131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34093
	I0717 00:52:46.101875   29131 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:46.102427   29131 main.go:141] libmachine: Using API Version  1
	I0717 00:52:46.102472   29131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:46.102898   29131 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:46.103123   29131 main.go:141] libmachine: (ha-029113) Calling .GetState
	I0717 00:52:46.104844   29131 status.go:330] ha-029113 host status = "Running" (err=<nil>)
	I0717 00:52:46.104861   29131 host.go:66] Checking if "ha-029113" exists ...
	I0717 00:52:46.105166   29131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:46.105207   29131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:46.120954   29131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39559
	I0717 00:52:46.121353   29131 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:46.121799   29131 main.go:141] libmachine: Using API Version  1
	I0717 00:52:46.121821   29131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:46.122177   29131 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:46.122370   29131 main.go:141] libmachine: (ha-029113) Calling .GetIP
	I0717 00:52:46.125018   29131 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:52:46.125389   29131 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:52:46.125409   29131 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:52:46.125524   29131 host.go:66] Checking if "ha-029113" exists ...
	I0717 00:52:46.125963   29131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:46.126022   29131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:46.140042   29131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44957
	I0717 00:52:46.140370   29131 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:46.140836   29131 main.go:141] libmachine: Using API Version  1
	I0717 00:52:46.140855   29131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:46.141139   29131 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:46.141303   29131 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:52:46.141487   29131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:52:46.141510   29131 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:52:46.144030   29131 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:52:46.144434   29131 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:52:46.144464   29131 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:52:46.144577   29131 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:52:46.144723   29131 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:52:46.144858   29131 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:52:46.144968   29131 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:52:46.222315   29131 ssh_runner.go:195] Run: systemctl --version
	I0717 00:52:46.229626   29131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:52:46.245244   29131 kubeconfig.go:125] found "ha-029113" server: "https://192.168.39.254:8443"
	I0717 00:52:46.245273   29131 api_server.go:166] Checking apiserver status ...
	I0717 00:52:46.245312   29131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:52:46.264207   29131 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup
	W0717 00:52:46.275145   29131 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:52:46.275191   29131 ssh_runner.go:195] Run: ls
	I0717 00:52:46.280269   29131 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:52:46.284168   29131 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:52:46.284191   29131 status.go:422] ha-029113 apiserver status = Running (err=<nil>)
	I0717 00:52:46.284199   29131 status.go:257] ha-029113 status: &{Name:ha-029113 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:52:46.284218   29131 status.go:255] checking status of ha-029113-m02 ...
	I0717 00:52:46.284489   29131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:46.284518   29131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:46.299972   29131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33965
	I0717 00:52:46.300395   29131 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:46.300857   29131 main.go:141] libmachine: Using API Version  1
	I0717 00:52:46.300875   29131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:46.301157   29131 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:46.301339   29131 main.go:141] libmachine: (ha-029113-m02) Calling .GetState
	I0717 00:52:46.302931   29131 status.go:330] ha-029113-m02 host status = "Running" (err=<nil>)
	I0717 00:52:46.302948   29131 host.go:66] Checking if "ha-029113-m02" exists ...
	I0717 00:52:46.303266   29131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:46.303306   29131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:46.317240   29131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41269
	I0717 00:52:46.317576   29131 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:46.317997   29131 main.go:141] libmachine: Using API Version  1
	I0717 00:52:46.318025   29131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:46.318317   29131 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:46.318515   29131 main.go:141] libmachine: (ha-029113-m02) Calling .GetIP
	I0717 00:52:46.321174   29131 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:52:46.321502   29131 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:52:46.321521   29131 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:52:46.321651   29131 host.go:66] Checking if "ha-029113-m02" exists ...
	I0717 00:52:46.321959   29131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:46.322023   29131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:46.336885   29131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38929
	I0717 00:52:46.337274   29131 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:46.337784   29131 main.go:141] libmachine: Using API Version  1
	I0717 00:52:46.337803   29131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:46.338091   29131 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:46.338266   29131 main.go:141] libmachine: (ha-029113-m02) Calling .DriverName
	I0717 00:52:46.338449   29131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:52:46.338472   29131 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	I0717 00:52:46.340958   29131 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:52:46.341306   29131 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:52:46.341361   29131 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:52:46.341495   29131 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHPort
	I0717 00:52:46.341662   29131 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:52:46.341808   29131 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHUsername
	I0717 00:52:46.341944   29131 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02/id_rsa Username:docker}
	W0717 00:52:49.402810   29131 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.166:22: connect: no route to host
	W0717 00:52:49.402904   29131 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.166:22: connect: no route to host
	E0717 00:52:49.402920   29131 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.166:22: connect: no route to host
	I0717 00:52:49.402926   29131 status.go:257] ha-029113-m02 status: &{Name:ha-029113-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0717 00:52:49.402946   29131 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.166:22: connect: no route to host
	I0717 00:52:49.402953   29131 status.go:255] checking status of ha-029113-m03 ...
	I0717 00:52:49.403236   29131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:49.403272   29131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:49.417738   29131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36995
	I0717 00:52:49.418148   29131 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:49.418569   29131 main.go:141] libmachine: Using API Version  1
	I0717 00:52:49.418594   29131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:49.418956   29131 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:49.419129   29131 main.go:141] libmachine: (ha-029113-m03) Calling .GetState
	I0717 00:52:49.420855   29131 status.go:330] ha-029113-m03 host status = "Running" (err=<nil>)
	I0717 00:52:49.420868   29131 host.go:66] Checking if "ha-029113-m03" exists ...
	I0717 00:52:49.421129   29131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:49.421178   29131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:49.434940   29131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34427
	I0717 00:52:49.435307   29131 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:49.435737   29131 main.go:141] libmachine: Using API Version  1
	I0717 00:52:49.435759   29131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:49.436055   29131 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:49.436218   29131 main.go:141] libmachine: (ha-029113-m03) Calling .GetIP
	I0717 00:52:49.439338   29131 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:52:49.439712   29131 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:52:49.439739   29131 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:52:49.439886   29131 host.go:66] Checking if "ha-029113-m03" exists ...
	I0717 00:52:49.440183   29131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:49.440229   29131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:49.454713   29131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43937
	I0717 00:52:49.455103   29131 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:49.455494   29131 main.go:141] libmachine: Using API Version  1
	I0717 00:52:49.455508   29131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:49.455833   29131 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:49.456019   29131 main.go:141] libmachine: (ha-029113-m03) Calling .DriverName
	I0717 00:52:49.456170   29131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:52:49.456187   29131 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	I0717 00:52:49.458824   29131 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:52:49.459266   29131 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:52:49.459286   29131 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:52:49.459490   29131 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHPort
	I0717 00:52:49.459615   29131 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:52:49.459762   29131 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHUsername
	I0717 00:52:49.459916   29131 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/id_rsa Username:docker}
	I0717 00:52:49.538500   29131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:52:49.555551   29131 kubeconfig.go:125] found "ha-029113" server: "https://192.168.39.254:8443"
	I0717 00:52:49.555583   29131 api_server.go:166] Checking apiserver status ...
	I0717 00:52:49.555634   29131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:52:49.569541   29131 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1575/cgroup
	W0717 00:52:49.579071   29131 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1575/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:52:49.579124   29131 ssh_runner.go:195] Run: ls
	I0717 00:52:49.583904   29131 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:52:49.588270   29131 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:52:49.588296   29131 status.go:422] ha-029113-m03 apiserver status = Running (err=<nil>)
	I0717 00:52:49.588305   29131 status.go:257] ha-029113-m03 status: &{Name:ha-029113-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:52:49.588319   29131 status.go:255] checking status of ha-029113-m04 ...
	I0717 00:52:49.588658   29131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:49.588701   29131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:49.603422   29131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33459
	I0717 00:52:49.603821   29131 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:49.604196   29131 main.go:141] libmachine: Using API Version  1
	I0717 00:52:49.604218   29131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:49.604460   29131 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:49.604681   29131 main.go:141] libmachine: (ha-029113-m04) Calling .GetState
	I0717 00:52:49.606160   29131 status.go:330] ha-029113-m04 host status = "Running" (err=<nil>)
	I0717 00:52:49.606174   29131 host.go:66] Checking if "ha-029113-m04" exists ...
	I0717 00:52:49.606427   29131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:49.606456   29131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:49.620459   29131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41937
	I0717 00:52:49.620834   29131 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:49.621355   29131 main.go:141] libmachine: Using API Version  1
	I0717 00:52:49.621379   29131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:49.621674   29131 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:49.621837   29131 main.go:141] libmachine: (ha-029113-m04) Calling .GetIP
	I0717 00:52:49.624322   29131 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:52:49.624791   29131 main.go:141] libmachine: (ha-029113-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:a4:ba", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:48:45 +0000 UTC Type:0 Mac:52:54:00:be:a4:ba Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-029113-m04 Clientid:01:52:54:00:be:a4:ba}
	I0717 00:52:49.624823   29131 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:52:49.625020   29131 host.go:66] Checking if "ha-029113-m04" exists ...
	I0717 00:52:49.625307   29131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:52:49.625337   29131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:52:49.639531   29131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39701
	I0717 00:52:49.639870   29131 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:52:49.640288   29131 main.go:141] libmachine: Using API Version  1
	I0717 00:52:49.640310   29131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:52:49.640637   29131 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:52:49.640802   29131 main.go:141] libmachine: (ha-029113-m04) Calling .DriverName
	I0717 00:52:49.640977   29131 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:52:49.641001   29131 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHHostname
	I0717 00:52:49.643479   29131 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:52:49.643901   29131 main.go:141] libmachine: (ha-029113-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:a4:ba", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:48:45 +0000 UTC Type:0 Mac:52:54:00:be:a4:ba Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-029113-m04 Clientid:01:52:54:00:be:a4:ba}
	I0717 00:52:49.643927   29131 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:52:49.644064   29131 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHPort
	I0717 00:52:49.644218   29131 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHKeyPath
	I0717 00:52:49.644347   29131 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHUsername
	I0717 00:52:49.644469   29131 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m04/id_rsa Username:docker}
	I0717 00:52:49.730191   29131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:52:49.745455   29131 status.go:257] ha-029113-m04 status: &{Name:ha-029113-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
E0717 00:52:58.380219   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-029113 status -v=7 --alsologtostderr: exit status 7 (598.881516ms)

                                                
                                                
-- stdout --
	ha-029113
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-029113-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-029113-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-029113-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 00:53:00.290505   29293 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:53:00.290810   29293 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:53:00.290822   29293 out.go:304] Setting ErrFile to fd 2...
	I0717 00:53:00.290826   29293 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:53:00.291064   29293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 00:53:00.291292   29293 out.go:298] Setting JSON to false
	I0717 00:53:00.291321   29293 mustload.go:65] Loading cluster: ha-029113
	I0717 00:53:00.291375   29293 notify.go:220] Checking for updates...
	I0717 00:53:00.291811   29293 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:53:00.291831   29293 status.go:255] checking status of ha-029113 ...
	I0717 00:53:00.292347   29293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:53:00.292389   29293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:53:00.310410   29293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35237
	I0717 00:53:00.310884   29293 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:53:00.311371   29293 main.go:141] libmachine: Using API Version  1
	I0717 00:53:00.311386   29293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:53:00.311733   29293 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:53:00.311921   29293 main.go:141] libmachine: (ha-029113) Calling .GetState
	I0717 00:53:00.313524   29293 status.go:330] ha-029113 host status = "Running" (err=<nil>)
	I0717 00:53:00.313554   29293 host.go:66] Checking if "ha-029113" exists ...
	I0717 00:53:00.313834   29293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:53:00.313882   29293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:53:00.328783   29293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43347
	I0717 00:53:00.329141   29293 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:53:00.329598   29293 main.go:141] libmachine: Using API Version  1
	I0717 00:53:00.329616   29293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:53:00.329888   29293 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:53:00.330069   29293 main.go:141] libmachine: (ha-029113) Calling .GetIP
	I0717 00:53:00.332665   29293 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:53:00.333030   29293 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:53:00.333054   29293 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:53:00.333177   29293 host.go:66] Checking if "ha-029113" exists ...
	I0717 00:53:00.333455   29293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:53:00.333495   29293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:53:00.349166   29293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42651
	I0717 00:53:00.349593   29293 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:53:00.349974   29293 main.go:141] libmachine: Using API Version  1
	I0717 00:53:00.349993   29293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:53:00.350293   29293 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:53:00.350451   29293 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:53:00.350654   29293 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:53:00.350678   29293 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:53:00.353392   29293 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:53:00.353752   29293 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:53:00.353789   29293 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:53:00.353905   29293 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:53:00.354062   29293 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:53:00.354194   29293 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:53:00.354395   29293 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:53:00.430126   29293 ssh_runner.go:195] Run: systemctl --version
	I0717 00:53:00.437425   29293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:53:00.452195   29293 kubeconfig.go:125] found "ha-029113" server: "https://192.168.39.254:8443"
	I0717 00:53:00.452220   29293 api_server.go:166] Checking apiserver status ...
	I0717 00:53:00.452252   29293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:53:00.465860   29293 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup
	W0717 00:53:00.475498   29293 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:53:00.475547   29293 ssh_runner.go:195] Run: ls
	I0717 00:53:00.480432   29293 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:53:00.484874   29293 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:53:00.484894   29293 status.go:422] ha-029113 apiserver status = Running (err=<nil>)
	I0717 00:53:00.484902   29293 status.go:257] ha-029113 status: &{Name:ha-029113 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:53:00.484917   29293 status.go:255] checking status of ha-029113-m02 ...
	I0717 00:53:00.485214   29293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:53:00.485253   29293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:53:00.499940   29293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45531
	I0717 00:53:00.500347   29293 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:53:00.500845   29293 main.go:141] libmachine: Using API Version  1
	I0717 00:53:00.500866   29293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:53:00.501255   29293 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:53:00.501407   29293 main.go:141] libmachine: (ha-029113-m02) Calling .GetState
	I0717 00:53:00.503086   29293 status.go:330] ha-029113-m02 host status = "Stopped" (err=<nil>)
	I0717 00:53:00.503102   29293 status.go:343] host is not running, skipping remaining checks
	I0717 00:53:00.503109   29293 status.go:257] ha-029113-m02 status: &{Name:ha-029113-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:53:00.503129   29293 status.go:255] checking status of ha-029113-m03 ...
	I0717 00:53:00.503415   29293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:53:00.503456   29293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:53:00.518381   29293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39701
	I0717 00:53:00.518831   29293 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:53:00.519284   29293 main.go:141] libmachine: Using API Version  1
	I0717 00:53:00.519300   29293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:53:00.519607   29293 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:53:00.519781   29293 main.go:141] libmachine: (ha-029113-m03) Calling .GetState
	I0717 00:53:00.521464   29293 status.go:330] ha-029113-m03 host status = "Running" (err=<nil>)
	I0717 00:53:00.521481   29293 host.go:66] Checking if "ha-029113-m03" exists ...
	I0717 00:53:00.521781   29293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:53:00.521811   29293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:53:00.536897   29293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35973
	I0717 00:53:00.537331   29293 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:53:00.537824   29293 main.go:141] libmachine: Using API Version  1
	I0717 00:53:00.537846   29293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:53:00.538199   29293 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:53:00.538385   29293 main.go:141] libmachine: (ha-029113-m03) Calling .GetIP
	I0717 00:53:00.541073   29293 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:53:00.541499   29293 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:53:00.541520   29293 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:53:00.541663   29293 host.go:66] Checking if "ha-029113-m03" exists ...
	I0717 00:53:00.541978   29293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:53:00.542020   29293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:53:00.556148   29293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40879
	I0717 00:53:00.556530   29293 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:53:00.556981   29293 main.go:141] libmachine: Using API Version  1
	I0717 00:53:00.557010   29293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:53:00.557344   29293 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:53:00.557562   29293 main.go:141] libmachine: (ha-029113-m03) Calling .DriverName
	I0717 00:53:00.557706   29293 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:53:00.557722   29293 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	I0717 00:53:00.560563   29293 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:53:00.561008   29293 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:53:00.561035   29293 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:53:00.561176   29293 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHPort
	I0717 00:53:00.561345   29293 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:53:00.561504   29293 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHUsername
	I0717 00:53:00.561644   29293 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/id_rsa Username:docker}
	I0717 00:53:00.643134   29293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:53:00.657130   29293 kubeconfig.go:125] found "ha-029113" server: "https://192.168.39.254:8443"
	I0717 00:53:00.657157   29293 api_server.go:166] Checking apiserver status ...
	I0717 00:53:00.657187   29293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:53:00.671057   29293 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1575/cgroup
	W0717 00:53:00.682268   29293 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1575/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:53:00.682326   29293 ssh_runner.go:195] Run: ls
	I0717 00:53:00.687150   29293 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 00:53:00.691114   29293 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 00:53:00.691135   29293 status.go:422] ha-029113-m03 apiserver status = Running (err=<nil>)
	I0717 00:53:00.691143   29293 status.go:257] ha-029113-m03 status: &{Name:ha-029113-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:53:00.691156   29293 status.go:255] checking status of ha-029113-m04 ...
	I0717 00:53:00.691422   29293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:53:00.691457   29293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:53:00.705904   29293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40913
	I0717 00:53:00.706329   29293 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:53:00.706860   29293 main.go:141] libmachine: Using API Version  1
	I0717 00:53:00.706880   29293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:53:00.707161   29293 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:53:00.707357   29293 main.go:141] libmachine: (ha-029113-m04) Calling .GetState
	I0717 00:53:00.708787   29293 status.go:330] ha-029113-m04 host status = "Running" (err=<nil>)
	I0717 00:53:00.708800   29293 host.go:66] Checking if "ha-029113-m04" exists ...
	I0717 00:53:00.709061   29293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:53:00.709094   29293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:53:00.723520   29293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41615
	I0717 00:53:00.723958   29293 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:53:00.724365   29293 main.go:141] libmachine: Using API Version  1
	I0717 00:53:00.724383   29293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:53:00.724689   29293 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:53:00.724850   29293 main.go:141] libmachine: (ha-029113-m04) Calling .GetIP
	I0717 00:53:00.727689   29293 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:53:00.728134   29293 main.go:141] libmachine: (ha-029113-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:a4:ba", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:48:45 +0000 UTC Type:0 Mac:52:54:00:be:a4:ba Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-029113-m04 Clientid:01:52:54:00:be:a4:ba}
	I0717 00:53:00.728166   29293 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:53:00.728286   29293 host.go:66] Checking if "ha-029113-m04" exists ...
	I0717 00:53:00.728681   29293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:53:00.728720   29293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:53:00.743021   29293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43385
	I0717 00:53:00.743377   29293 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:53:00.743774   29293 main.go:141] libmachine: Using API Version  1
	I0717 00:53:00.743793   29293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:53:00.744101   29293 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:53:00.744277   29293 main.go:141] libmachine: (ha-029113-m04) Calling .DriverName
	I0717 00:53:00.744425   29293 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:53:00.744441   29293 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHHostname
	I0717 00:53:00.746919   29293 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:53:00.747289   29293 main.go:141] libmachine: (ha-029113-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:a4:ba", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:48:45 +0000 UTC Type:0 Mac:52:54:00:be:a4:ba Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-029113-m04 Clientid:01:52:54:00:be:a4:ba}
	I0717 00:53:00.747322   29293 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:53:00.747480   29293 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHPort
	I0717 00:53:00.747623   29293 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHKeyPath
	I0717 00:53:00.747790   29293 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHUsername
	I0717 00:53:00.747929   29293 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m04/id_rsa Username:docker}
	I0717 00:53:00.830165   29293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:53:00.846573   29293 status.go:257] ha-029113-m04 status: &{Name:ha-029113-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-029113 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-029113 -n ha-029113
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-029113 logs -n 25: (1.38920291s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-029113 cp ha-029113-m03:/home/docker/cp-test.txt                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113:/home/docker/cp-test_ha-029113-m03_ha-029113.txt                      |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n ha-029113 sudo cat                                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | /home/docker/cp-test_ha-029113-m03_ha-029113.txt                                |           |         |         |                     |                     |
	| cp      | ha-029113 cp ha-029113-m03:/home/docker/cp-test.txt                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m02:/home/docker/cp-test_ha-029113-m03_ha-029113-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n ha-029113-m02 sudo cat                                         | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | /home/docker/cp-test_ha-029113-m03_ha-029113-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-029113 cp ha-029113-m03:/home/docker/cp-test.txt                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m04:/home/docker/cp-test_ha-029113-m03_ha-029113-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n ha-029113-m04 sudo cat                                         | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | /home/docker/cp-test_ha-029113-m03_ha-029113-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-029113 cp testdata/cp-test.txt                                               | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-029113 cp ha-029113-m04:/home/docker/cp-test.txt                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile695400083/001/cp-test_ha-029113-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-029113 cp ha-029113-m04:/home/docker/cp-test.txt                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113:/home/docker/cp-test_ha-029113-m04_ha-029113.txt                      |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n ha-029113 sudo cat                                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | /home/docker/cp-test_ha-029113-m04_ha-029113.txt                                |           |         |         |                     |                     |
	| cp      | ha-029113 cp ha-029113-m04:/home/docker/cp-test.txt                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m02:/home/docker/cp-test_ha-029113-m04_ha-029113-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n ha-029113-m02 sudo cat                                         | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | /home/docker/cp-test_ha-029113-m04_ha-029113-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-029113 cp ha-029113-m04:/home/docker/cp-test.txt                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m03:/home/docker/cp-test_ha-029113-m04_ha-029113-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n ha-029113-m03 sudo cat                                         | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | /home/docker/cp-test_ha-029113-m04_ha-029113-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-029113 node stop m02 -v=7                                                    | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-029113 node start m02 -v=7                                                   | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:52 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:43:29
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:43:29.629545   23443 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:43:29.629978   23443 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:43:29.629990   23443 out.go:304] Setting ErrFile to fd 2...
	I0717 00:43:29.629995   23443 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:43:29.630222   23443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 00:43:29.630815   23443 out.go:298] Setting JSON to false
	I0717 00:43:29.631669   23443 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1552,"bootTime":1721175458,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:43:29.631721   23443 start.go:139] virtualization: kvm guest
	I0717 00:43:29.633685   23443 out.go:177] * [ha-029113] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 00:43:29.634964   23443 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 00:43:29.635030   23443 notify.go:220] Checking for updates...
	I0717 00:43:29.637312   23443 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:43:29.638523   23443 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 00:43:29.639779   23443 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 00:43:29.640930   23443 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 00:43:29.642067   23443 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:43:29.643437   23443 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:43:29.676662   23443 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 00:43:29.677927   23443 start.go:297] selected driver: kvm2
	I0717 00:43:29.677944   23443 start.go:901] validating driver "kvm2" against <nil>
	I0717 00:43:29.677955   23443 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:43:29.678643   23443 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:43:29.678723   23443 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19264-3908/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 00:43:29.692865   23443 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 00:43:29.692924   23443 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 00:43:29.693150   23443 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:43:29.693214   23443 cni.go:84] Creating CNI manager for ""
	I0717 00:43:29.693229   23443 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0717 00:43:29.693237   23443 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 00:43:29.693307   23443 start.go:340] cluster config:
	{Name:ha-029113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-029113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:43:29.693410   23443 iso.go:125] acquiring lock: {Name:mk1e382ea32906eee794c9b90164432d2562b456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:43:29.695024   23443 out.go:177] * Starting "ha-029113" primary control-plane node in "ha-029113" cluster
	I0717 00:43:29.696289   23443 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:43:29.696321   23443 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 00:43:29.696333   23443 cache.go:56] Caching tarball of preloaded images
	I0717 00:43:29.696403   23443 preload.go:172] Found /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 00:43:29.696417   23443 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:43:29.696734   23443 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/config.json ...
	I0717 00:43:29.696756   23443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/config.json: {Name:mk1c70be09fae3a15c6dd239577cad4b9c0c123e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:43:29.696900   23443 start.go:360] acquireMachinesLock for ha-029113: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 00:43:29.696933   23443 start.go:364] duration metric: took 18.392µs to acquireMachinesLock for "ha-029113"
	I0717 00:43:29.696955   23443 start.go:93] Provisioning new machine with config: &{Name:ha-029113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-029113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:43:29.697014   23443 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 00:43:29.699183   23443 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 00:43:29.699293   23443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:43:29.699332   23443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:43:29.712954   23443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40209
	I0717 00:43:29.713304   23443 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:43:29.713743   23443 main.go:141] libmachine: Using API Version  1
	I0717 00:43:29.713764   23443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:43:29.714016   23443 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:43:29.714197   23443 main.go:141] libmachine: (ha-029113) Calling .GetMachineName
	I0717 00:43:29.714312   23443 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:43:29.714431   23443 start.go:159] libmachine.API.Create for "ha-029113" (driver="kvm2")
	I0717 00:43:29.714457   23443 client.go:168] LocalClient.Create starting
	I0717 00:43:29.714479   23443 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem
	I0717 00:43:29.714505   23443 main.go:141] libmachine: Decoding PEM data...
	I0717 00:43:29.714524   23443 main.go:141] libmachine: Parsing certificate...
	I0717 00:43:29.714602   23443 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem
	I0717 00:43:29.714630   23443 main.go:141] libmachine: Decoding PEM data...
	I0717 00:43:29.714644   23443 main.go:141] libmachine: Parsing certificate...
	I0717 00:43:29.714660   23443 main.go:141] libmachine: Running pre-create checks...
	I0717 00:43:29.714668   23443 main.go:141] libmachine: (ha-029113) Calling .PreCreateCheck
	I0717 00:43:29.715013   23443 main.go:141] libmachine: (ha-029113) Calling .GetConfigRaw
	I0717 00:43:29.715344   23443 main.go:141] libmachine: Creating machine...
	I0717 00:43:29.715356   23443 main.go:141] libmachine: (ha-029113) Calling .Create
	I0717 00:43:29.715468   23443 main.go:141] libmachine: (ha-029113) Creating KVM machine...
	I0717 00:43:29.716586   23443 main.go:141] libmachine: (ha-029113) DBG | found existing default KVM network
	I0717 00:43:29.717158   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:29.717044   23466 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0717 00:43:29.717184   23443 main.go:141] libmachine: (ha-029113) DBG | created network xml: 
	I0717 00:43:29.717196   23443 main.go:141] libmachine: (ha-029113) DBG | <network>
	I0717 00:43:29.717207   23443 main.go:141] libmachine: (ha-029113) DBG |   <name>mk-ha-029113</name>
	I0717 00:43:29.717215   23443 main.go:141] libmachine: (ha-029113) DBG |   <dns enable='no'/>
	I0717 00:43:29.717225   23443 main.go:141] libmachine: (ha-029113) DBG |   
	I0717 00:43:29.717236   23443 main.go:141] libmachine: (ha-029113) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0717 00:43:29.717244   23443 main.go:141] libmachine: (ha-029113) DBG |     <dhcp>
	I0717 00:43:29.717251   23443 main.go:141] libmachine: (ha-029113) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0717 00:43:29.717258   23443 main.go:141] libmachine: (ha-029113) DBG |     </dhcp>
	I0717 00:43:29.717283   23443 main.go:141] libmachine: (ha-029113) DBG |   </ip>
	I0717 00:43:29.717305   23443 main.go:141] libmachine: (ha-029113) DBG |   
	I0717 00:43:29.717316   23443 main.go:141] libmachine: (ha-029113) DBG | </network>
	I0717 00:43:29.717326   23443 main.go:141] libmachine: (ha-029113) DBG | 
	I0717 00:43:29.722037   23443 main.go:141] libmachine: (ha-029113) DBG | trying to create private KVM network mk-ha-029113 192.168.39.0/24...
	I0717 00:43:29.783430   23443 main.go:141] libmachine: (ha-029113) DBG | private KVM network mk-ha-029113 192.168.39.0/24 created
	I0717 00:43:29.783474   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:29.783394   23466 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 00:43:29.783485   23443 main.go:141] libmachine: (ha-029113) Setting up store path in /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113 ...
	I0717 00:43:29.783502   23443 main.go:141] libmachine: (ha-029113) Building disk image from file:///home/jenkins/minikube-integration/19264-3908/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 00:43:29.783528   23443 main.go:141] libmachine: (ha-029113) Downloading /home/jenkins/minikube-integration/19264-3908/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19264-3908/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 00:43:30.013619   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:30.013452   23466 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa...
	I0717 00:43:30.283548   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:30.283435   23466 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/ha-029113.rawdisk...
	I0717 00:43:30.283588   23443 main.go:141] libmachine: (ha-029113) DBG | Writing magic tar header
	I0717 00:43:30.283610   23443 main.go:141] libmachine: (ha-029113) DBG | Writing SSH key tar header
	I0717 00:43:30.283620   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:30.283558   23466 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113 ...
	I0717 00:43:30.283736   23443 main.go:141] libmachine: (ha-029113) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113
	I0717 00:43:30.283771   23443 main.go:141] libmachine: (ha-029113) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube/machines
	I0717 00:43:30.283787   23443 main.go:141] libmachine: (ha-029113) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113 (perms=drwx------)
	I0717 00:43:30.283808   23443 main.go:141] libmachine: (ha-029113) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 00:43:30.283830   23443 main.go:141] libmachine: (ha-029113) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908
	I0717 00:43:30.283852   23443 main.go:141] libmachine: (ha-029113) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube/machines (perms=drwxr-xr-x)
	I0717 00:43:30.283870   23443 main.go:141] libmachine: (ha-029113) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube (perms=drwxr-xr-x)
	I0717 00:43:30.283884   23443 main.go:141] libmachine: (ha-029113) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908 (perms=drwxrwxr-x)
	I0717 00:43:30.283900   23443 main.go:141] libmachine: (ha-029113) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 00:43:30.283913   23443 main.go:141] libmachine: (ha-029113) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 00:43:30.283926   23443 main.go:141] libmachine: (ha-029113) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 00:43:30.283938   23443 main.go:141] libmachine: (ha-029113) Creating domain...
	I0717 00:43:30.283947   23443 main.go:141] libmachine: (ha-029113) DBG | Checking permissions on dir: /home/jenkins
	I0717 00:43:30.283962   23443 main.go:141] libmachine: (ha-029113) DBG | Checking permissions on dir: /home
	I0717 00:43:30.283972   23443 main.go:141] libmachine: (ha-029113) DBG | Skipping /home - not owner
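The raw disk created above is a sparse file that minikube seeds with a small tar stream (the "magic tar header" and SSH key entries logged earlier) before handing it to libvirt, and the permission pass only ensures each directory on the path to the machine store is traversable. As a loose illustration, swapping in qemu-img for minikube's internal file writer (it does not actually use qemu-img), a comparable raw image of the profile's configured 20000 MB and the same permission fix could be produced by hand:

    # hypothetical stand-in for minikube's internal raw-disk writer
    qemu-img create -f raw ~/.minikube/machines/ha-029113/ha-029113.rawdisk 20000M
    # make sure every directory on the path is at least traversable
    chmod u+x ~/.minikube ~/.minikube/machines ~/.minikube/machines/ha-029113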
	I0717 00:43:30.284856   23443 main.go:141] libmachine: (ha-029113) define libvirt domain using xml: 
	I0717 00:43:30.284873   23443 main.go:141] libmachine: (ha-029113) <domain type='kvm'>
	I0717 00:43:30.284880   23443 main.go:141] libmachine: (ha-029113)   <name>ha-029113</name>
	I0717 00:43:30.284887   23443 main.go:141] libmachine: (ha-029113)   <memory unit='MiB'>2200</memory>
	I0717 00:43:30.284897   23443 main.go:141] libmachine: (ha-029113)   <vcpu>2</vcpu>
	I0717 00:43:30.284907   23443 main.go:141] libmachine: (ha-029113)   <features>
	I0717 00:43:30.284914   23443 main.go:141] libmachine: (ha-029113)     <acpi/>
	I0717 00:43:30.284923   23443 main.go:141] libmachine: (ha-029113)     <apic/>
	I0717 00:43:30.284928   23443 main.go:141] libmachine: (ha-029113)     <pae/>
	I0717 00:43:30.284937   23443 main.go:141] libmachine: (ha-029113)     
	I0717 00:43:30.284945   23443 main.go:141] libmachine: (ha-029113)   </features>
	I0717 00:43:30.284949   23443 main.go:141] libmachine: (ha-029113)   <cpu mode='host-passthrough'>
	I0717 00:43:30.284956   23443 main.go:141] libmachine: (ha-029113)   
	I0717 00:43:30.284963   23443 main.go:141] libmachine: (ha-029113)   </cpu>
	I0717 00:43:30.284989   23443 main.go:141] libmachine: (ha-029113)   <os>
	I0717 00:43:30.285013   23443 main.go:141] libmachine: (ha-029113)     <type>hvm</type>
	I0717 00:43:30.285024   23443 main.go:141] libmachine: (ha-029113)     <boot dev='cdrom'/>
	I0717 00:43:30.285037   23443 main.go:141] libmachine: (ha-029113)     <boot dev='hd'/>
	I0717 00:43:30.285063   23443 main.go:141] libmachine: (ha-029113)     <bootmenu enable='no'/>
	I0717 00:43:30.285082   23443 main.go:141] libmachine: (ha-029113)   </os>
	I0717 00:43:30.285094   23443 main.go:141] libmachine: (ha-029113)   <devices>
	I0717 00:43:30.285106   23443 main.go:141] libmachine: (ha-029113)     <disk type='file' device='cdrom'>
	I0717 00:43:30.285123   23443 main.go:141] libmachine: (ha-029113)       <source file='/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/boot2docker.iso'/>
	I0717 00:43:30.285134   23443 main.go:141] libmachine: (ha-029113)       <target dev='hdc' bus='scsi'/>
	I0717 00:43:30.285146   23443 main.go:141] libmachine: (ha-029113)       <readonly/>
	I0717 00:43:30.285160   23443 main.go:141] libmachine: (ha-029113)     </disk>
	I0717 00:43:30.285173   23443 main.go:141] libmachine: (ha-029113)     <disk type='file' device='disk'>
	I0717 00:43:30.285186   23443 main.go:141] libmachine: (ha-029113)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 00:43:30.285209   23443 main.go:141] libmachine: (ha-029113)       <source file='/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/ha-029113.rawdisk'/>
	I0717 00:43:30.285219   23443 main.go:141] libmachine: (ha-029113)       <target dev='hda' bus='virtio'/>
	I0717 00:43:30.285236   23443 main.go:141] libmachine: (ha-029113)     </disk>
	I0717 00:43:30.285252   23443 main.go:141] libmachine: (ha-029113)     <interface type='network'>
	I0717 00:43:30.285265   23443 main.go:141] libmachine: (ha-029113)       <source network='mk-ha-029113'/>
	I0717 00:43:30.285275   23443 main.go:141] libmachine: (ha-029113)       <model type='virtio'/>
	I0717 00:43:30.285284   23443 main.go:141] libmachine: (ha-029113)     </interface>
	I0717 00:43:30.285289   23443 main.go:141] libmachine: (ha-029113)     <interface type='network'>
	I0717 00:43:30.285294   23443 main.go:141] libmachine: (ha-029113)       <source network='default'/>
	I0717 00:43:30.285303   23443 main.go:141] libmachine: (ha-029113)       <model type='virtio'/>
	I0717 00:43:30.285315   23443 main.go:141] libmachine: (ha-029113)     </interface>
	I0717 00:43:30.285328   23443 main.go:141] libmachine: (ha-029113)     <serial type='pty'>
	I0717 00:43:30.285339   23443 main.go:141] libmachine: (ha-029113)       <target port='0'/>
	I0717 00:43:30.285349   23443 main.go:141] libmachine: (ha-029113)     </serial>
	I0717 00:43:30.285361   23443 main.go:141] libmachine: (ha-029113)     <console type='pty'>
	I0717 00:43:30.285371   23443 main.go:141] libmachine: (ha-029113)       <target type='serial' port='0'/>
	I0717 00:43:30.285382   23443 main.go:141] libmachine: (ha-029113)     </console>
	I0717 00:43:30.285392   23443 main.go:141] libmachine: (ha-029113)     <rng model='virtio'>
	I0717 00:43:30.285408   23443 main.go:141] libmachine: (ha-029113)       <backend model='random'>/dev/random</backend>
	I0717 00:43:30.285420   23443 main.go:141] libmachine: (ha-029113)     </rng>
	I0717 00:43:30.285428   23443 main.go:141] libmachine: (ha-029113)     
	I0717 00:43:30.285437   23443 main.go:141] libmachine: (ha-029113)     
	I0717 00:43:30.285445   23443 main.go:141] libmachine: (ha-029113)   </devices>
	I0717 00:43:30.285454   23443 main.go:141] libmachine: (ha-029113) </domain>
	I0717 00:43:30.285463   23443 main.go:141] libmachine: (ha-029113) 
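The domain XML above wires the boot ISO in as a SCSI CD-ROM, the raw disk as a virtio disk, and two virtio NICs: one on the private mk-ha-029113 network and one on libvirt's default network. A sketch of the manual equivalent (minikube does this through the libvirt API), assuming the XML were saved to ha-029113.xml:

    virsh define ha-029113.xml      # register the domain with libvirt
    virsh start ha-029113           # boot it
    virsh dumpxml ha-029113         # what the log calls 'Getting domain xml...'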
	I0717 00:43:30.289368   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:65:21:b3 in network default
	I0717 00:43:30.289864   23443 main.go:141] libmachine: (ha-029113) Ensuring networks are active...
	I0717 00:43:30.289894   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:30.290659   23443 main.go:141] libmachine: (ha-029113) Ensuring network default is active
	I0717 00:43:30.290955   23443 main.go:141] libmachine: (ha-029113) Ensuring network mk-ha-029113 is active
	I0717 00:43:30.291367   23443 main.go:141] libmachine: (ha-029113) Getting domain xml...
	I0717 00:43:30.291994   23443 main.go:141] libmachine: (ha-029113) Creating domain...
	I0717 00:43:31.452349   23443 main.go:141] libmachine: (ha-029113) Waiting to get IP...
	I0717 00:43:31.453202   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:31.453570   23443 main.go:141] libmachine: (ha-029113) DBG | unable to find current IP address of domain ha-029113 in network mk-ha-029113
	I0717 00:43:31.453615   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:31.453562   23466 retry.go:31] will retry after 251.741638ms: waiting for machine to come up
	I0717 00:43:31.706967   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:31.707410   23443 main.go:141] libmachine: (ha-029113) DBG | unable to find current IP address of domain ha-029113 in network mk-ha-029113
	I0717 00:43:31.707440   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:31.707366   23466 retry.go:31] will retry after 295.804163ms: waiting for machine to come up
	I0717 00:43:32.004697   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:32.005111   23443 main.go:141] libmachine: (ha-029113) DBG | unable to find current IP address of domain ha-029113 in network mk-ha-029113
	I0717 00:43:32.005146   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:32.005081   23466 retry.go:31] will retry after 353.624289ms: waiting for machine to come up
	I0717 00:43:32.360538   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:32.360981   23443 main.go:141] libmachine: (ha-029113) DBG | unable to find current IP address of domain ha-029113 in network mk-ha-029113
	I0717 00:43:32.361019   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:32.360949   23466 retry.go:31] will retry after 608.253018ms: waiting for machine to come up
	I0717 00:43:32.970606   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:32.971060   23443 main.go:141] libmachine: (ha-029113) DBG | unable to find current IP address of domain ha-029113 in network mk-ha-029113
	I0717 00:43:32.971080   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:32.971017   23466 retry.go:31] will retry after 543.533236ms: waiting for machine to come up
	I0717 00:43:33.515677   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:33.516113   23443 main.go:141] libmachine: (ha-029113) DBG | unable to find current IP address of domain ha-029113 in network mk-ha-029113
	I0717 00:43:33.516135   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:33.516069   23466 retry.go:31] will retry after 696.415589ms: waiting for machine to come up
	I0717 00:43:34.213929   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:34.214271   23443 main.go:141] libmachine: (ha-029113) DBG | unable to find current IP address of domain ha-029113 in network mk-ha-029113
	I0717 00:43:34.214300   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:34.214233   23466 retry.go:31] will retry after 1.080255731s: waiting for machine to come up
	I0717 00:43:35.295986   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:35.296445   23443 main.go:141] libmachine: (ha-029113) DBG | unable to find current IP address of domain ha-029113 in network mk-ha-029113
	I0717 00:43:35.296474   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:35.296400   23466 retry.go:31] will retry after 1.222285687s: waiting for machine to come up
	I0717 00:43:36.520660   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:36.520986   23443 main.go:141] libmachine: (ha-029113) DBG | unable to find current IP address of domain ha-029113 in network mk-ha-029113
	I0717 00:43:36.521007   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:36.520942   23466 retry.go:31] will retry after 1.580634952s: waiting for machine to come up
	I0717 00:43:38.103829   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:38.104184   23443 main.go:141] libmachine: (ha-029113) DBG | unable to find current IP address of domain ha-029113 in network mk-ha-029113
	I0717 00:43:38.104211   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:38.104144   23466 retry.go:31] will retry after 1.42041846s: waiting for machine to come up
	I0717 00:43:39.526530   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:39.526916   23443 main.go:141] libmachine: (ha-029113) DBG | unable to find current IP address of domain ha-029113 in network mk-ha-029113
	I0717 00:43:39.526938   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:39.526872   23466 retry.go:31] will retry after 2.750366058s: waiting for machine to come up
	I0717 00:43:42.280613   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:42.281014   23443 main.go:141] libmachine: (ha-029113) DBG | unable to find current IP address of domain ha-029113 in network mk-ha-029113
	I0717 00:43:42.281036   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:42.280965   23466 retry.go:31] will retry after 2.193861337s: waiting for machine to come up
	I0717 00:43:44.477108   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:44.477528   23443 main.go:141] libmachine: (ha-029113) DBG | unable to find current IP address of domain ha-029113 in network mk-ha-029113
	I0717 00:43:44.477556   23443 main.go:141] libmachine: (ha-029113) DBG | I0717 00:43:44.477486   23466 retry.go:31] will retry after 4.450517174s: waiting for machine to come up
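Each retry above is the driver asking libvirt for a DHCP lease on the domain's MAC address in the mk-ha-029113 network and backing off while none exists yet. The same lookup can be done by hand; a sketch, assuming virsh on the host is pointed at the same qemu:///system connection:

    # MAC addresses of the domain's interfaces
    virsh domiflist ha-029113
    # active DHCP leases handed out on the private network
    virsh net-dhcp-leases mk-ha-029113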
	I0717 00:43:48.932343   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:48.932710   23443 main.go:141] libmachine: (ha-029113) Found IP for machine: 192.168.39.95
	I0717 00:43:48.932738   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has current primary IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:48.932748   23443 main.go:141] libmachine: (ha-029113) Reserving static IP address...
	I0717 00:43:48.933048   23443 main.go:141] libmachine: (ha-029113) DBG | unable to find host DHCP lease matching {name: "ha-029113", mac: "52:54:00:04:d5:10", ip: "192.168.39.95"} in network mk-ha-029113
	I0717 00:43:49.000960   23443 main.go:141] libmachine: (ha-029113) DBG | Getting to WaitForSSH function...
	I0717 00:43:49.000990   23443 main.go:141] libmachine: (ha-029113) Reserved static IP address: 192.168.39.95
	I0717 00:43:49.001004   23443 main.go:141] libmachine: (ha-029113) Waiting for SSH to be available...
	I0717 00:43:49.003222   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.003581   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:minikube Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:49.003610   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.003711   23443 main.go:141] libmachine: (ha-029113) DBG | Using SSH client type: external
	I0717 00:43:49.003738   23443 main.go:141] libmachine: (ha-029113) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa (-rw-------)
	I0717 00:43:49.003801   23443 main.go:141] libmachine: (ha-029113) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 00:43:49.003827   23443 main.go:141] libmachine: (ha-029113) DBG | About to run SSH command:
	I0717 00:43:49.003841   23443 main.go:141] libmachine: (ha-029113) DBG | exit 0
	I0717 00:43:49.122787   23443 main.go:141] libmachine: (ha-029113) DBG | SSH cmd err, output: <nil>: 
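The "exit 0" probe above is run with an external ssh client using exactly the options logged at the start of this block. Reproducing it by hand is occasionally useful when a machine hangs in "Waiting for SSH"; a sketch using the same key and a subset of the same flags:

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o ConnectTimeout=10 -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa \
        docker@192.168.39.95 'exit 0'; echo "ssh exit code: $?"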
	I0717 00:43:49.123082   23443 main.go:141] libmachine: (ha-029113) KVM machine creation complete!
	I0717 00:43:49.123382   23443 main.go:141] libmachine: (ha-029113) Calling .GetConfigRaw
	I0717 00:43:49.123867   23443 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:43:49.124050   23443 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:43:49.124224   23443 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 00:43:49.124237   23443 main.go:141] libmachine: (ha-029113) Calling .GetState
	I0717 00:43:49.125436   23443 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 00:43:49.125451   23443 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 00:43:49.125458   23443 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 00:43:49.125466   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:43:49.127516   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.127838   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:49.127863   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.128020   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:43:49.128182   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:49.128327   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:49.128442   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:43:49.128595   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:43:49.128801   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0717 00:43:49.128813   23443 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 00:43:49.225819   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:43:49.225846   23443 main.go:141] libmachine: Detecting the provisioner...
	I0717 00:43:49.225853   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:43:49.228489   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.228847   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:49.228884   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.228985   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:43:49.229168   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:49.229332   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:49.229488   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:43:49.229640   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:43:49.229857   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0717 00:43:49.229869   23443 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 00:43:49.327057   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 00:43:49.327117   23443 main.go:141] libmachine: found compatible host: buildroot
	I0717 00:43:49.327124   23443 main.go:141] libmachine: Provisioning with buildroot...
	I0717 00:43:49.327131   23443 main.go:141] libmachine: (ha-029113) Calling .GetMachineName
	I0717 00:43:49.327402   23443 buildroot.go:166] provisioning hostname "ha-029113"
	I0717 00:43:49.327424   23443 main.go:141] libmachine: (ha-029113) Calling .GetMachineName
	I0717 00:43:49.327598   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:43:49.330014   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.330293   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:49.330316   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.330473   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:43:49.330644   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:49.330799   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:49.330893   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:43:49.331039   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:43:49.331199   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0717 00:43:49.331210   23443 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-029113 && echo "ha-029113" | sudo tee /etc/hostname
	I0717 00:43:49.440387   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-029113
	
	I0717 00:43:49.440417   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:43:49.443377   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.443768   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:49.443795   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.443922   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:43:49.444082   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:49.444246   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:49.444378   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:43:49.444538   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:43:49.444761   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0717 00:43:49.444778   23443 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-029113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-029113/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-029113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 00:43:49.551255   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:43:49.551292   23443 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 00:43:49.551332   23443 buildroot.go:174] setting up certificates
	I0717 00:43:49.551347   23443 provision.go:84] configureAuth start
	I0717 00:43:49.551363   23443 main.go:141] libmachine: (ha-029113) Calling .GetMachineName
	I0717 00:43:49.551614   23443 main.go:141] libmachine: (ha-029113) Calling .GetIP
	I0717 00:43:49.553979   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.554336   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:49.554356   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.554521   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:43:49.556388   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.556655   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:49.556672   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.556784   23443 provision.go:143] copyHostCerts
	I0717 00:43:49.556822   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 00:43:49.556868   23443 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 00:43:49.556885   23443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 00:43:49.556962   23443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 00:43:49.557078   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 00:43:49.557104   23443 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 00:43:49.557110   23443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 00:43:49.557149   23443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 00:43:49.557222   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 00:43:49.557246   23443 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 00:43:49.557254   23443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 00:43:49.557284   23443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 00:43:49.557360   23443 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.ha-029113 san=[127.0.0.1 192.168.39.95 ha-029113 localhost minikube]
	I0717 00:43:49.682206   23443 provision.go:177] copyRemoteCerts
	I0717 00:43:49.682256   23443 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:43:49.682277   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:43:49.684463   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.684771   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:49.684791   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.684987   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:43:49.685185   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:49.685330   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:43:49.685462   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:43:49.764376   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 00:43:49.764434   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 00:43:49.788963   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 00:43:49.789032   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0717 00:43:49.811677   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 00:43:49.811767   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 00:43:49.834236   23443 provision.go:87] duration metric: took 282.873795ms to configureAuth
	I0717 00:43:49.834259   23443 buildroot.go:189] setting minikube options for container-runtime
	I0717 00:43:49.834405   23443 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:43:49.834466   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:43:49.836925   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.837234   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:49.837270   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:49.837433   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:43:49.837598   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:49.837767   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:49.837874   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:43:49.838017   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:43:49.838176   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0717 00:43:49.838193   23443 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 00:43:50.091148   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
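The tee in the command above drops CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube; on the minikube ISO the crio systemd unit is expected to read that file as an environment file (an assumption here, since the unit itself is not shown in this log), so the --insecure-registry flag reaches the daemon after the restart. A quick way to confirm from inside the guest, as a sketch:

    cat /etc/sysconfig/crio.minikube
    systemctl cat crio | grep -i -A1 EnvironmentFile   # does the unit source the file?
    systemctl show -p ActiveState crio                 # did the restart succeed?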
	
	I0717 00:43:50.091195   23443 main.go:141] libmachine: Checking connection to Docker...
	I0717 00:43:50.091205   23443 main.go:141] libmachine: (ha-029113) Calling .GetURL
	I0717 00:43:50.092384   23443 main.go:141] libmachine: (ha-029113) DBG | Using libvirt version 6000000
	I0717 00:43:50.094200   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:50.094518   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:50.094567   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:50.094723   23443 main.go:141] libmachine: Docker is up and running!
	I0717 00:43:50.094736   23443 main.go:141] libmachine: Reticulating splines...
	I0717 00:43:50.094743   23443 client.go:171] duration metric: took 20.380279073s to LocalClient.Create
	I0717 00:43:50.094772   23443 start.go:167] duration metric: took 20.380340167s to libmachine.API.Create "ha-029113"
	I0717 00:43:50.094784   23443 start.go:293] postStartSetup for "ha-029113" (driver="kvm2")
	I0717 00:43:50.094798   23443 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 00:43:50.094817   23443 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:43:50.095041   23443 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 00:43:50.095063   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:43:50.096900   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:50.097192   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:50.097217   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:50.097334   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:43:50.097500   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:50.097665   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:43:50.097781   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:43:50.176944   23443 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 00:43:50.181306   23443 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 00:43:50.181335   23443 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 00:43:50.181410   23443 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 00:43:50.181479   23443 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 00:43:50.181488   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> /etc/ssl/certs/112592.pem
	I0717 00:43:50.181568   23443 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 00:43:50.191227   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 00:43:50.220997   23443 start.go:296] duration metric: took 126.177076ms for postStartSetup
	I0717 00:43:50.221057   23443 main.go:141] libmachine: (ha-029113) Calling .GetConfigRaw
	I0717 00:43:50.221589   23443 main.go:141] libmachine: (ha-029113) Calling .GetIP
	I0717 00:43:50.223904   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:50.224228   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:50.224247   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:50.224461   23443 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/config.json ...
	I0717 00:43:50.224644   23443 start.go:128] duration metric: took 20.527614062s to createHost
	I0717 00:43:50.224666   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:43:50.226756   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:50.227035   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:50.227065   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:50.227197   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:43:50.227359   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:50.227510   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:50.227616   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:43:50.227762   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:43:50.227915   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0717 00:43:50.227926   23443 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 00:43:50.323134   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721177030.294684965
	
	I0717 00:43:50.323157   23443 fix.go:216] guest clock: 1721177030.294684965
	I0717 00:43:50.323164   23443 fix.go:229] Guest: 2024-07-17 00:43:50.294684965 +0000 UTC Remote: 2024-07-17 00:43:50.22465597 +0000 UTC m=+20.626931124 (delta=70.028995ms)
	I0717 00:43:50.323181   23443 fix.go:200] guest clock delta is within tolerance: 70.028995ms
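The %!s(MISSING) noise in the logged command is a Go formatting artifact in minikube's logger; the command actually executed on the guest is `date +%s.%N`, and the fix.go lines compare that guest timestamp with the host clock to confirm the skew (about 70ms here) is within tolerance. A sketch of the same check done by hand, with the hypothetical alias `vmssh` standing in for the ssh key and flags shown earlier:

    host_ts=$(date +%s.%N)
    guest_ts=$(vmssh docker@192.168.39.95 'date +%s.%N')   # vmssh is a hypothetical wrapper
    awk -v g="$guest_ts" -v h="$host_ts" 'BEGIN{printf "clock delta: %.3f s\n", g-h}'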
	I0717 00:43:50.323185   23443 start.go:83] releasing machines lock for "ha-029113", held for 20.626243015s
	I0717 00:43:50.323202   23443 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:43:50.323438   23443 main.go:141] libmachine: (ha-029113) Calling .GetIP
	I0717 00:43:50.325943   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:50.326247   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:50.326270   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:50.326424   23443 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:43:50.326971   23443 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:43:50.327114   23443 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:43:50.327206   23443 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 00:43:50.327251   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:43:50.327310   23443 ssh_runner.go:195] Run: cat /version.json
	I0717 00:43:50.327329   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:43:50.329532   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:50.329612   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:50.329868   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:50.329892   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:50.329921   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:50.329935   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:50.330012   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:43:50.330194   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:50.330223   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:43:50.330360   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:43:50.330362   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:43:50.330566   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:43:50.330568   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:43:50.330691   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:43:50.403490   23443 ssh_runner.go:195] Run: systemctl --version
	I0717 00:43:50.432889   23443 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 00:43:50.591008   23443 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 00:43:50.597593   23443 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 00:43:50.597675   23443 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:43:50.613254   23443 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 00:43:50.613277   23443 start.go:495] detecting cgroup driver to use...
	I0717 00:43:50.613329   23443 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 00:43:50.629634   23443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 00:43:50.642915   23443 docker.go:217] disabling cri-docker service (if available) ...
	I0717 00:43:50.642960   23443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 00:43:50.655986   23443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 00:43:50.669044   23443 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 00:43:50.787054   23443 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 00:43:50.929244   23443 docker.go:233] disabling docker service ...
	I0717 00:43:50.929296   23443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 00:43:50.943183   23443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 00:43:50.956184   23443 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 00:43:51.091625   23443 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 00:43:51.205309   23443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 00:43:51.220248   23443 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 00:43:51.239678   23443 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 00:43:51.239741   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:43:51.251038   23443 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 00:43:51.251098   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:43:51.262896   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:43:51.273907   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:43:51.284540   23443 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 00:43:51.295275   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:43:51.307215   23443 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:43:51.325827   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
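The sed passes above edit /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, force the cgroupfs cgroup manager, move conmon into the pod cgroup, and allow unprivileged ports from 0. A quick way to confirm the result from inside the guest, as a sketch (the expected-match comments are approximations, since the sed commands only rewrite lines that already exist in the shipped drop-in):

    grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected matches, roughly:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",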
	I0717 00:43:51.337698   23443 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 00:43:51.348529   23443 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 00:43:51.348583   23443 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 00:43:51.363155   23443 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
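The sysctl probe failed above because br_netfilter was not loaded yet, so the driver falls back to modprobe and then enables IPv4 forwarding directly through /proc. The manual equivalent, as a sketch:

    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables   # should now resolve instead of erroring
    sudo sysctl -w net.ipv4.ip_forward=1             # same effect as the echo into /proc above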
	I0717 00:43:51.374462   23443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:43:51.494995   23443 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 00:43:51.627737   23443 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 00:43:51.627820   23443 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 00:43:51.632591   23443 start.go:563] Will wait 60s for crictl version
	I0717 00:43:51.632647   23443 ssh_runner.go:195] Run: which crictl
	I0717 00:43:51.636364   23443 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 00:43:51.679301   23443 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 00:43:51.679382   23443 ssh_runner.go:195] Run: crio --version
	I0717 00:43:51.707621   23443 ssh_runner.go:195] Run: crio --version
	I0717 00:43:51.738137   23443 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 00:43:51.739528   23443 main.go:141] libmachine: (ha-029113) Calling .GetIP
	I0717 00:43:51.742125   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:51.742461   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:43:51.742485   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:43:51.742721   23443 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 00:43:51.746846   23443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:43:51.759830   23443 kubeadm.go:883] updating cluster {Name:ha-029113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-029113 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 00:43:51.759923   23443 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:43:51.759959   23443 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:43:51.791556   23443 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 00:43:51.791627   23443 ssh_runner.go:195] Run: which lz4
	I0717 00:43:51.795469   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0717 00:43:51.795576   23443 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 00:43:51.799673   23443 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 00:43:51.799699   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 00:43:53.192514   23443 crio.go:462] duration metric: took 1.396967984s to copy over tarball
	I0717 00:43:53.192594   23443 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 00:43:55.283467   23443 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.09084036s)
	I0717 00:43:55.283502   23443 crio.go:469] duration metric: took 2.090961191s to extract the tarball
	I0717 00:43:55.283512   23443 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 00:43:55.320520   23443 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:43:55.362789   23443 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 00:43:55.362814   23443 cache_images.go:84] Images are preloaded, skipping loading
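	The preload step above checks crictl for the expected images, copies the lz4 tarball to the node when they are missing, and unpacks it into /var with extended attributes preserved. A minimal sketch of that unpack step, assuming a hypothetical helper that simply shells out to the same tar invocation the log shows:

```go
package main

import (
	"log"
	"os/exec"
)

// extractPreload mirrors the logged command:
//   sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
func extractPreload() error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	return cmd.Run()
}

func main() {
	if err := extractPreload(); err != nil {
		log.Fatal(err)
	}
}
```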
	I0717 00:43:55.362822   23443 kubeadm.go:934] updating node { 192.168.39.95 8443 v1.30.2 crio true true} ...
	I0717 00:43:55.362950   23443 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-029113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-029113 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 00:43:55.363039   23443 ssh_runner.go:195] Run: crio config
	I0717 00:43:55.413791   23443 cni.go:84] Creating CNI manager for ""
	I0717 00:43:55.413813   23443 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0717 00:43:55.413824   23443 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 00:43:55.413851   23443 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.95 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-029113 NodeName:ha-029113 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 00:43:55.414008   23443 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.95
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-029113"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.95"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
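	The kubeadm.yaml rendered above is a single multi-document YAML file: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by `---`. A short sketch for enumerating those documents, assuming gopkg.in/yaml.v3 is available and using the node-side path the log writes the file to:

```go
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path taken from the log above
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			panic(err)
		}
		// Prints e.g. "kubeadm.k8s.io/v1beta3 InitConfiguration" for each document.
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}
```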
	
	I0717 00:43:55.414037   23443 kube-vip.go:115] generating kube-vip config ...
	I0717 00:43:55.414091   23443 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 00:43:55.430120   23443 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 00:43:55.430234   23443 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
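	The kube-vip static-pod manifest above is rendered with the HA virtual IP (192.168.39.254) and API server port substituted into a template; kube-vip then advertises that VIP via ARP and load-balances port 8443 across control-plane nodes. An illustrative render of the two relevant fields with Go's text/template (the field names here are illustrative, not minikube's):

```go
package main

import (
	"os"
	"text/template"
)

// snippet covers only the VIP address and load-balancer port entries from the manifest.
const snippet = `    - name: address
      value: {{ .VIP }}
    - name: lb_port
      value: "{{ .Port }}"
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(snippet))
	_ = t.Execute(os.Stdout, struct {
		VIP  string
		Port int
	}{VIP: "192.168.39.254", Port: 8443})
}
```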
	I0717 00:43:55.430303   23443 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 00:43:55.439877   23443 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 00:43:55.439931   23443 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0717 00:43:55.448975   23443 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0717 00:43:55.464948   23443 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 00:43:55.480422   23443 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0717 00:43:55.496473   23443 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0717 00:43:55.513844   23443 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 00:43:55.518038   23443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:43:55.530981   23443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:43:55.656193   23443 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:43:55.672985   23443 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113 for IP: 192.168.39.95
	I0717 00:43:55.673006   23443 certs.go:194] generating shared ca certs ...
	I0717 00:43:55.673026   23443 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:43:55.673195   23443 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 00:43:55.673247   23443 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 00:43:55.673261   23443 certs.go:256] generating profile certs ...
	I0717 00:43:55.673318   23443 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.key
	I0717 00:43:55.673336   23443 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.crt with IP's: []
	I0717 00:43:55.804202   23443 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.crt ...
	I0717 00:43:55.804230   23443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.crt: {Name:mkaad8f228a6769c319165d4356d6d5b16d56f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:43:55.804396   23443 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.key ...
	I0717 00:43:55.804410   23443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.key: {Name:mkb1b523099783e05b4d547548032d6d46313696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:43:55.804508   23443 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.8898c4fa
	I0717 00:43:55.804526   23443 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.8898c4fa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.95 192.168.39.254]
	I0717 00:43:56.060272   23443 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.8898c4fa ...
	I0717 00:43:56.060300   23443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.8898c4fa: {Name:mk1cada8fdbc736c986089a0c0ad728ff94f64e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:43:56.060469   23443 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.8898c4fa ...
	I0717 00:43:56.060490   23443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.8898c4fa: {Name:mk99ce3174b978eb325285f1a4d20c9add85d0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:43:56.060579   23443 certs.go:381] copying /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.8898c4fa -> /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt
	I0717 00:43:56.060663   23443 certs.go:385] copying /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.8898c4fa -> /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key
	I0717 00:43:56.060714   23443 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.key
	I0717 00:43:56.060730   23443 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.crt with IP's: []
	I0717 00:43:56.226632   23443 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.crt ...
	I0717 00:43:56.226678   23443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.crt: {Name:mkd34e7f758ab0a3926b993b1f8abc99e6f69e10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:43:56.226822   23443 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.key ...
	I0717 00:43:56.226833   23443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.key: {Name:mke656fc7c4f8fcbd8e910a166a066c5be919b98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
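	The profile certificates above (client, apiserver, proxy-client) are generated on the host and signed by the shared minikube CA before being copied to the node. A compact sketch of producing such a CA-signed client certificate with Go's crypto/x509; the subject and validity period are illustrative, not minikube's actual values:

```go
package certsketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"time"
)

// newClientCert returns a DER-encoded client certificate signed by caCert/caKey,
// along with the freshly generated private key.
func newClientCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject: pkix.Name{
			CommonName:   "minikube-user",            // illustrative subject
			Organization: []string{"system:masters"}, // Kubernetes superuser group
		},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	return der, key, err
}
```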
	I0717 00:43:56.226899   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 00:43:56.226926   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 00:43:56.226946   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 00:43:56.226960   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 00:43:56.226970   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 00:43:56.226984   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 00:43:56.226996   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 00:43:56.227006   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 00:43:56.227048   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 00:43:56.227079   23443 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 00:43:56.227088   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 00:43:56.227108   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 00:43:56.227130   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 00:43:56.227150   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 00:43:56.227185   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 00:43:56.227209   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem -> /usr/share/ca-certificates/11259.pem
	I0717 00:43:56.227223   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> /usr/share/ca-certificates/112592.pem
	I0717 00:43:56.227235   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:43:56.227757   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 00:43:56.253230   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 00:43:56.276650   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 00:43:56.302929   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 00:43:56.328994   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 00:43:56.352938   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 00:43:56.375986   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 00:43:56.399130   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 00:43:56.424959   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 00:43:56.457813   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 00:43:56.483719   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 00:43:56.510891   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 00:43:56.527362   23443 ssh_runner.go:195] Run: openssl version
	I0717 00:43:56.533004   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 00:43:56.543902   23443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 00:43:56.548734   23443 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 00:43:56.548782   23443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 00:43:56.554857   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 00:43:56.566468   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 00:43:56.578154   23443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 00:43:56.582940   23443 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 00:43:56.582997   23443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 00:43:56.588670   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 00:43:56.599501   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 00:43:56.609938   23443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:43:56.614241   23443 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:43:56.614290   23443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:43:56.619757   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 00:43:56.630726   23443 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 00:43:56.635248   23443 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 00:43:56.635308   23443 kubeadm.go:392] StartCluster: {Name:ha-029113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-029113 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:43:56.635418   23443 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 00:43:56.635488   23443 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 00:43:56.681831   23443 cri.go:89] found id: ""
	I0717 00:43:56.681897   23443 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 00:43:56.692128   23443 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 00:43:56.704885   23443 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 00:43:56.716080   23443 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 00:43:56.716100   23443 kubeadm.go:157] found existing configuration files:
	
	I0717 00:43:56.716147   23443 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 00:43:56.725477   23443 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 00:43:56.725541   23443 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 00:43:56.735884   23443 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 00:43:56.745410   23443 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 00:43:56.745460   23443 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 00:43:56.754798   23443 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 00:43:56.763615   23443 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 00:43:56.763668   23443 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 00:43:56.772871   23443 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 00:43:56.781620   23443 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 00:43:56.781668   23443 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 00:43:56.790667   23443 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 00:43:56.890978   23443 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 00:43:56.891081   23443 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 00:43:57.019005   23443 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 00:43:57.019160   23443 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 00:43:57.019320   23443 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 00:43:57.248022   23443 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 00:43:57.347304   23443 out.go:204]   - Generating certificates and keys ...
	I0717 00:43:57.347402   23443 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 00:43:57.347495   23443 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 00:43:57.347565   23443 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 00:43:57.454502   23443 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 00:43:57.512789   23443 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 00:43:57.603687   23443 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 00:43:57.721136   23443 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 00:43:57.721275   23443 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-029113 localhost] and IPs [192.168.39.95 127.0.0.1 ::1]
	I0717 00:43:57.867674   23443 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 00:43:57.867819   23443 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-029113 localhost] and IPs [192.168.39.95 127.0.0.1 ::1]
	I0717 00:43:58.019368   23443 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 00:43:58.215990   23443 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 00:43:58.306221   23443 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 00:43:58.306316   23443 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 00:43:58.385599   23443 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 00:43:58.716664   23443 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 00:43:59.138773   23443 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 00:43:59.443407   23443 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 00:43:59.523429   23443 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 00:43:59.523961   23443 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 00:43:59.526339   23443 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 00:43:59.529334   23443 out.go:204]   - Booting up control plane ...
	I0717 00:43:59.529447   23443 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 00:43:59.529556   23443 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 00:43:59.529647   23443 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 00:43:59.544877   23443 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 00:43:59.545872   23443 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 00:43:59.545934   23443 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 00:43:59.667902   23443 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 00:43:59.668006   23443 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 00:44:00.669499   23443 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002393794s
	I0717 00:44:00.669627   23443 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 00:44:06.460981   23443 kubeadm.go:310] [api-check] The API server is healthy after 5.795437316s
	I0717 00:44:06.474308   23443 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 00:44:06.491501   23443 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 00:44:07.015909   23443 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 00:44:07.016365   23443 kubeadm.go:310] [mark-control-plane] Marking the node ha-029113 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 00:44:07.030474   23443 kubeadm.go:310] [bootstrap-token] Using token: obton2.k2oggi6v8c13i9u1
	I0717 00:44:07.032016   23443 out.go:204]   - Configuring RBAC rules ...
	I0717 00:44:07.032136   23443 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 00:44:07.047364   23443 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 00:44:07.059970   23443 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 00:44:07.063616   23443 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 00:44:07.066683   23443 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 00:44:07.069722   23443 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 00:44:07.085350   23443 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 00:44:07.328276   23443 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 00:44:07.869315   23443 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 00:44:07.870449   23443 kubeadm.go:310] 
	I0717 00:44:07.870530   23443 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 00:44:07.870567   23443 kubeadm.go:310] 
	I0717 00:44:07.870649   23443 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 00:44:07.870661   23443 kubeadm.go:310] 
	I0717 00:44:07.870694   23443 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 00:44:07.870771   23443 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 00:44:07.870857   23443 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 00:44:07.870878   23443 kubeadm.go:310] 
	I0717 00:44:07.870955   23443 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 00:44:07.870965   23443 kubeadm.go:310] 
	I0717 00:44:07.871037   23443 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 00:44:07.871046   23443 kubeadm.go:310] 
	I0717 00:44:07.871101   23443 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 00:44:07.871219   23443 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 00:44:07.871323   23443 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 00:44:07.871332   23443 kubeadm.go:310] 
	I0717 00:44:07.871433   23443 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 00:44:07.871546   23443 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 00:44:07.871558   23443 kubeadm.go:310] 
	I0717 00:44:07.871699   23443 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token obton2.k2oggi6v8c13i9u1 \
	I0717 00:44:07.871850   23443 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c106fa53fe39228066a4d6d0a3d1523262a277fcc4b4de2aef480ed92843f134 \
	I0717 00:44:07.871885   23443 kubeadm.go:310] 	--control-plane 
	I0717 00:44:07.871895   23443 kubeadm.go:310] 
	I0717 00:44:07.872010   23443 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 00:44:07.872026   23443 kubeadm.go:310] 
	I0717 00:44:07.872114   23443 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token obton2.k2oggi6v8c13i9u1 \
	I0717 00:44:07.872234   23443 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c106fa53fe39228066a4d6d0a3d1523262a277fcc4b4de2aef480ed92843f134 
	I0717 00:44:07.872598   23443 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
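	The --discovery-token-ca-cert-hash printed by kubeadm above is the SHA-256 digest of the cluster CA certificate's Subject Public Key Info, which joining nodes use to pin the CA. A minimal way to recompute it from the CA file referenced earlier in the log:

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // path taken from the log above
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded Subject Public Key Info of the CA certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum[:])
}
```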
	I0717 00:44:07.872631   23443 cni.go:84] Creating CNI manager for ""
	I0717 00:44:07.872640   23443 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0717 00:44:07.874487   23443 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 00:44:07.875819   23443 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 00:44:07.881431   23443 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0717 00:44:07.881446   23443 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 00:44:07.900692   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 00:44:08.266080   23443 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 00:44:08.266166   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:08.266166   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-029113 minikube.k8s.io/updated_at=2024_07_17T00_44_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185 minikube.k8s.io/name=ha-029113 minikube.k8s.io/primary=true
	I0717 00:44:08.296833   23443 ops.go:34] apiserver oom_adj: -16
	I0717 00:44:08.400872   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:08.901716   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:09.401209   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:09.901018   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:10.401927   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:10.901326   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:11.401637   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:11.901433   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:12.401566   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:12.901145   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:13.401527   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:13.901707   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:14.401171   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:14.901442   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:15.401081   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:15.901648   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:16.400943   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:16.901912   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:17.401787   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:17.901606   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:18.400906   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:18.901072   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:19.401069   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:19.900893   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:20.401221   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:44:20.579996   23443 kubeadm.go:1113] duration metric: took 12.313890698s to wait for elevateKubeSystemPrivileges
	I0717 00:44:20.580025   23443 kubeadm.go:394] duration metric: took 23.944721508s to StartCluster
	I0717 00:44:20.580071   23443 settings.go:142] acquiring lock: {Name:mk284e98550e98148329d8d8958a44beebf5635c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:44:20.580158   23443 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 00:44:20.580921   23443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:44:20.581135   23443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 00:44:20.581164   23443 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:44:20.581191   23443 start.go:241] waiting for startup goroutines ...
	I0717 00:44:20.581195   23443 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 00:44:20.581284   23443 addons.go:69] Setting storage-provisioner=true in profile "ha-029113"
	I0717 00:44:20.581320   23443 addons.go:234] Setting addon storage-provisioner=true in "ha-029113"
	I0717 00:44:20.581334   23443 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:44:20.581384   23443 addons.go:69] Setting default-storageclass=true in profile "ha-029113"
	I0717 00:44:20.581390   23443 host.go:66] Checking if "ha-029113" exists ...
	I0717 00:44:20.581422   23443 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-029113"
	I0717 00:44:20.581900   23443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:44:20.581928   23443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:44:20.581934   23443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:44:20.581959   23443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:44:20.597289   23443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39129
	I0717 00:44:20.597294   23443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35225
	I0717 00:44:20.597847   23443 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:44:20.597855   23443 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:44:20.598376   23443 main.go:141] libmachine: Using API Version  1
	I0717 00:44:20.598395   23443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:44:20.598564   23443 main.go:141] libmachine: Using API Version  1
	I0717 00:44:20.598585   23443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:44:20.598775   23443 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:44:20.598931   23443 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:44:20.599006   23443 main.go:141] libmachine: (ha-029113) Calling .GetState
	I0717 00:44:20.599491   23443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:44:20.599535   23443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:44:20.601206   23443 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 00:44:20.601426   23443 kapi.go:59] client config for ha-029113: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.crt", KeyFile:"/home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.key", CAFile:"/home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d01f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 00:44:20.601991   23443 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 00:44:20.602098   23443 addons.go:234] Setting addon default-storageclass=true in "ha-029113"
	I0717 00:44:20.602127   23443 host.go:66] Checking if "ha-029113" exists ...
	I0717 00:44:20.602358   23443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:44:20.602386   23443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:44:20.614727   23443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41181
	I0717 00:44:20.615263   23443 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:44:20.615830   23443 main.go:141] libmachine: Using API Version  1
	I0717 00:44:20.615855   23443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:44:20.616213   23443 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:44:20.616416   23443 main.go:141] libmachine: (ha-029113) Calling .GetState
	I0717 00:44:20.616795   23443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43229
	I0717 00:44:20.617330   23443 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:44:20.617801   23443 main.go:141] libmachine: Using API Version  1
	I0717 00:44:20.617818   23443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:44:20.618259   23443 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:44:20.618262   23443 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:44:20.618860   23443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:44:20.618899   23443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:44:20.620026   23443 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 00:44:20.621619   23443 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:44:20.621643   23443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 00:44:20.621669   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:44:20.624841   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:44:20.625333   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:44:20.625357   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:44:20.625516   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:44:20.625713   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:44:20.625875   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:44:20.626037   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:44:20.634274   23443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46303
	I0717 00:44:20.634621   23443 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:44:20.635059   23443 main.go:141] libmachine: Using API Version  1
	I0717 00:44:20.635076   23443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:44:20.635431   23443 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:44:20.635599   23443 main.go:141] libmachine: (ha-029113) Calling .GetState
	I0717 00:44:20.636995   23443 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:44:20.637191   23443 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 00:44:20.637206   23443 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 00:44:20.637229   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:44:20.640007   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:44:20.640333   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:44:20.640353   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:44:20.640524   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:44:20.640691   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:44:20.640820   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:44:20.640943   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:44:20.792736   23443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:44:20.796311   23443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 00:44:20.796951   23443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
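	The pipeline above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host-only gateway (192.168.39.1) from inside the cluster, inserting a hosts stanza ahead of the existing forward plugin. A sketch of the equivalent string edit (illustrative only; the log shows minikube doing this with sed):

```go
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS "hosts" block, mapping host.minikube.internal
// to hostIP, immediately before the forward plugin line in the Corefile.
func injectHostRecord(corefile, hostIP string) string {
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	return strings.Replace(corefile,
		"        forward . /etc/resolv.conf",
		hosts+"        forward . /etc/resolv.conf", 1)
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}
```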
	I0717 00:44:21.485321   23443 main.go:141] libmachine: Making call to close driver server
	I0717 00:44:21.485341   23443 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0717 00:44:21.485351   23443 main.go:141] libmachine: (ha-029113) Calling .Close
	I0717 00:44:21.485432   23443 main.go:141] libmachine: Making call to close driver server
	I0717 00:44:21.485451   23443 main.go:141] libmachine: (ha-029113) Calling .Close
	I0717 00:44:21.485622   23443 main.go:141] libmachine: (ha-029113) DBG | Closing plugin on server side
	I0717 00:44:21.485663   23443 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:44:21.485665   23443 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:44:21.485670   23443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:44:21.485669   23443 main.go:141] libmachine: (ha-029113) DBG | Closing plugin on server side
	I0717 00:44:21.485674   23443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:44:21.485677   23443 main.go:141] libmachine: Making call to close driver server
	I0717 00:44:21.485684   23443 main.go:141] libmachine: Making call to close driver server
	I0717 00:44:21.485694   23443 main.go:141] libmachine: (ha-029113) Calling .Close
	I0717 00:44:21.485685   23443 main.go:141] libmachine: (ha-029113) Calling .Close
	I0717 00:44:21.485953   23443 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:44:21.485967   23443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:44:21.486100   23443 main.go:141] libmachine: (ha-029113) DBG | Closing plugin on server side
	I0717 00:44:21.486103   23443 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0717 00:44:21.486121   23443 round_trippers.go:469] Request Headers:
	I0717 00:44:21.486133   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:44:21.486147   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:44:21.486160   23443 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:44:21.486187   23443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:44:21.498880   23443 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0717 00:44:21.499627   23443 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0717 00:44:21.499646   23443 round_trippers.go:469] Request Headers:
	I0717 00:44:21.499657   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:44:21.499665   23443 round_trippers.go:473]     Content-Type: application/json
	I0717 00:44:21.499674   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:44:21.502894   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:44:21.503135   23443 main.go:141] libmachine: Making call to close driver server
	I0717 00:44:21.503156   23443 main.go:141] libmachine: (ha-029113) Calling .Close
	I0717 00:44:21.503406   23443 main.go:141] libmachine: Successfully made call to close driver server
	I0717 00:44:21.503463   23443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 00:44:21.503433   23443 main.go:141] libmachine: (ha-029113) DBG | Closing plugin on server side
	I0717 00:44:21.505002   23443 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 00:44:21.506268   23443 addons.go:510] duration metric: took 925.075935ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0717 00:44:21.506300   23443 start.go:246] waiting for cluster config update ...
	I0717 00:44:21.506313   23443 start.go:255] writing updated cluster config ...
	I0717 00:44:21.507911   23443 out.go:177] 
	I0717 00:44:21.509205   23443 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:44:21.509268   23443 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/config.json ...
	I0717 00:44:21.510815   23443 out.go:177] * Starting "ha-029113-m02" control-plane node in "ha-029113" cluster
	I0717 00:44:21.512134   23443 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:44:21.512152   23443 cache.go:56] Caching tarball of preloaded images
	I0717 00:44:21.512247   23443 preload.go:172] Found /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 00:44:21.512260   23443 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:44:21.512317   23443 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/config.json ...
	I0717 00:44:21.512452   23443 start.go:360] acquireMachinesLock for ha-029113-m02: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 00:44:21.512490   23443 start.go:364] duration metric: took 20.915µs to acquireMachinesLock for "ha-029113-m02"
	I0717 00:44:21.512512   23443 start.go:93] Provisioning new machine with config: &{Name:ha-029113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-029113 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
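For orientation, the Nodes entries in the config dump above carry a small, fixed set of per-node fields. A minimal illustrative Go struct using exactly those field names (taken from the log; not necessarily minikube's actual config type):

	package config

	// Node mirrors the fields visible in the logged Nodes entries above; the
	// field names come from the log, the type itself is illustrative.
	type Node struct {
		Name              string // "" for the primary node, "m02" for the one being added
		IP                string // empty until DHCP assigns an address
		Port              int    // API server port, 8443 in this run
		KubernetesVersion string // "v1.30.2"
		ContainerRuntime  string // "crio"
		ControlPlane      bool
		Worker            bool
	}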
	I0717 00:44:21.512578   23443 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0717 00:44:21.513984   23443 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 00:44:21.514056   23443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:44:21.514083   23443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:44:21.528451   23443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40571
	I0717 00:44:21.528886   23443 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:44:21.529301   23443 main.go:141] libmachine: Using API Version  1
	I0717 00:44:21.529313   23443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:44:21.529577   23443 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:44:21.529751   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetMachineName
	I0717 00:44:21.529917   23443 main.go:141] libmachine: (ha-029113-m02) Calling .DriverName
	I0717 00:44:21.530055   23443 start.go:159] libmachine.API.Create for "ha-029113" (driver="kvm2")
	I0717 00:44:21.530084   23443 client.go:168] LocalClient.Create starting
	I0717 00:44:21.530116   23443 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem
	I0717 00:44:21.530153   23443 main.go:141] libmachine: Decoding PEM data...
	I0717 00:44:21.530173   23443 main.go:141] libmachine: Parsing certificate...
	I0717 00:44:21.530248   23443 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem
	I0717 00:44:21.530276   23443 main.go:141] libmachine: Decoding PEM data...
	I0717 00:44:21.530294   23443 main.go:141] libmachine: Parsing certificate...
	I0717 00:44:21.530320   23443 main.go:141] libmachine: Running pre-create checks...
	I0717 00:44:21.530331   23443 main.go:141] libmachine: (ha-029113-m02) Calling .PreCreateCheck
	I0717 00:44:21.530479   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetConfigRaw
	I0717 00:44:21.530835   23443 main.go:141] libmachine: Creating machine...
	I0717 00:44:21.530849   23443 main.go:141] libmachine: (ha-029113-m02) Calling .Create
	I0717 00:44:21.531028   23443 main.go:141] libmachine: (ha-029113-m02) Creating KVM machine...
	I0717 00:44:21.532150   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found existing default KVM network
	I0717 00:44:21.532268   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found existing private KVM network mk-ha-029113
	I0717 00:44:21.532406   23443 main.go:141] libmachine: (ha-029113-m02) Setting up store path in /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02 ...
	I0717 00:44:21.532429   23443 main.go:141] libmachine: (ha-029113-m02) Building disk image from file:///home/jenkins/minikube-integration/19264-3908/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 00:44:21.532470   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:21.532393   23838 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 00:44:21.532543   23443 main.go:141] libmachine: (ha-029113-m02) Downloading /home/jenkins/minikube-integration/19264-3908/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19264-3908/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 00:44:21.765492   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:21.765372   23838 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02/id_rsa...
	I0717 00:44:21.922150   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:21.922049   23838 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02/ha-029113-m02.rawdisk...
	I0717 00:44:21.922172   23443 main.go:141] libmachine: (ha-029113-m02) DBG | Writing magic tar header
	I0717 00:44:21.922181   23443 main.go:141] libmachine: (ha-029113-m02) DBG | Writing SSH key tar header
	I0717 00:44:21.922240   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:21.922175   23838 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02 ...
	I0717 00:44:21.922295   23443 main.go:141] libmachine: (ha-029113-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02
	I0717 00:44:21.922312   23443 main.go:141] libmachine: (ha-029113-m02) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02 (perms=drwx------)
	I0717 00:44:21.922339   23443 main.go:141] libmachine: (ha-029113-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube/machines
	I0717 00:44:21.922354   23443 main.go:141] libmachine: (ha-029113-m02) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube/machines (perms=drwxr-xr-x)
	I0717 00:44:21.922366   23443 main.go:141] libmachine: (ha-029113-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 00:44:21.922378   23443 main.go:141] libmachine: (ha-029113-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908
	I0717 00:44:21.922386   23443 main.go:141] libmachine: (ha-029113-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 00:44:21.922395   23443 main.go:141] libmachine: (ha-029113-m02) DBG | Checking permissions on dir: /home/jenkins
	I0717 00:44:21.922400   23443 main.go:141] libmachine: (ha-029113-m02) DBG | Checking permissions on dir: /home
	I0717 00:44:21.922412   23443 main.go:141] libmachine: (ha-029113-m02) DBG | Skipping /home - not owner
	I0717 00:44:21.922435   23443 main.go:141] libmachine: (ha-029113-m02) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube (perms=drwxr-xr-x)
	I0717 00:44:21.922457   23443 main.go:141] libmachine: (ha-029113-m02) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908 (perms=drwxrwxr-x)
	I0717 00:44:21.922477   23443 main.go:141] libmachine: (ha-029113-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 00:44:21.922493   23443 main.go:141] libmachine: (ha-029113-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 00:44:21.922509   23443 main.go:141] libmachine: (ha-029113-m02) Creating domain...
	I0717 00:44:21.923553   23443 main.go:141] libmachine: (ha-029113-m02) define libvirt domain using xml: 
	I0717 00:44:21.923570   23443 main.go:141] libmachine: (ha-029113-m02) <domain type='kvm'>
	I0717 00:44:21.923580   23443 main.go:141] libmachine: (ha-029113-m02)   <name>ha-029113-m02</name>
	I0717 00:44:21.923588   23443 main.go:141] libmachine: (ha-029113-m02)   <memory unit='MiB'>2200</memory>
	I0717 00:44:21.923599   23443 main.go:141] libmachine: (ha-029113-m02)   <vcpu>2</vcpu>
	I0717 00:44:21.923609   23443 main.go:141] libmachine: (ha-029113-m02)   <features>
	I0717 00:44:21.923618   23443 main.go:141] libmachine: (ha-029113-m02)     <acpi/>
	I0717 00:44:21.923628   23443 main.go:141] libmachine: (ha-029113-m02)     <apic/>
	I0717 00:44:21.923637   23443 main.go:141] libmachine: (ha-029113-m02)     <pae/>
	I0717 00:44:21.923647   23443 main.go:141] libmachine: (ha-029113-m02)     
	I0717 00:44:21.923653   23443 main.go:141] libmachine: (ha-029113-m02)   </features>
	I0717 00:44:21.923663   23443 main.go:141] libmachine: (ha-029113-m02)   <cpu mode='host-passthrough'>
	I0717 00:44:21.923690   23443 main.go:141] libmachine: (ha-029113-m02)   
	I0717 00:44:21.923711   23443 main.go:141] libmachine: (ha-029113-m02)   </cpu>
	I0717 00:44:21.923721   23443 main.go:141] libmachine: (ha-029113-m02)   <os>
	I0717 00:44:21.923730   23443 main.go:141] libmachine: (ha-029113-m02)     <type>hvm</type>
	I0717 00:44:21.923739   23443 main.go:141] libmachine: (ha-029113-m02)     <boot dev='cdrom'/>
	I0717 00:44:21.923750   23443 main.go:141] libmachine: (ha-029113-m02)     <boot dev='hd'/>
	I0717 00:44:21.923771   23443 main.go:141] libmachine: (ha-029113-m02)     <bootmenu enable='no'/>
	I0717 00:44:21.923784   23443 main.go:141] libmachine: (ha-029113-m02)   </os>
	I0717 00:44:21.923794   23443 main.go:141] libmachine: (ha-029113-m02)   <devices>
	I0717 00:44:21.923804   23443 main.go:141] libmachine: (ha-029113-m02)     <disk type='file' device='cdrom'>
	I0717 00:44:21.923820   23443 main.go:141] libmachine: (ha-029113-m02)       <source file='/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02/boot2docker.iso'/>
	I0717 00:44:21.923831   23443 main.go:141] libmachine: (ha-029113-m02)       <target dev='hdc' bus='scsi'/>
	I0717 00:44:21.923843   23443 main.go:141] libmachine: (ha-029113-m02)       <readonly/>
	I0717 00:44:21.923854   23443 main.go:141] libmachine: (ha-029113-m02)     </disk>
	I0717 00:44:21.923866   23443 main.go:141] libmachine: (ha-029113-m02)     <disk type='file' device='disk'>
	I0717 00:44:21.923877   23443 main.go:141] libmachine: (ha-029113-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 00:44:21.923890   23443 main.go:141] libmachine: (ha-029113-m02)       <source file='/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02/ha-029113-m02.rawdisk'/>
	I0717 00:44:21.923901   23443 main.go:141] libmachine: (ha-029113-m02)       <target dev='hda' bus='virtio'/>
	I0717 00:44:21.923913   23443 main.go:141] libmachine: (ha-029113-m02)     </disk>
	I0717 00:44:21.923923   23443 main.go:141] libmachine: (ha-029113-m02)     <interface type='network'>
	I0717 00:44:21.923932   23443 main.go:141] libmachine: (ha-029113-m02)       <source network='mk-ha-029113'/>
	I0717 00:44:21.923944   23443 main.go:141] libmachine: (ha-029113-m02)       <model type='virtio'/>
	I0717 00:44:21.923955   23443 main.go:141] libmachine: (ha-029113-m02)     </interface>
	I0717 00:44:21.923964   23443 main.go:141] libmachine: (ha-029113-m02)     <interface type='network'>
	I0717 00:44:21.923970   23443 main.go:141] libmachine: (ha-029113-m02)       <source network='default'/>
	I0717 00:44:21.923980   23443 main.go:141] libmachine: (ha-029113-m02)       <model type='virtio'/>
	I0717 00:44:21.923991   23443 main.go:141] libmachine: (ha-029113-m02)     </interface>
	I0717 00:44:21.923999   23443 main.go:141] libmachine: (ha-029113-m02)     <serial type='pty'>
	I0717 00:44:21.924019   23443 main.go:141] libmachine: (ha-029113-m02)       <target port='0'/>
	I0717 00:44:21.924037   23443 main.go:141] libmachine: (ha-029113-m02)     </serial>
	I0717 00:44:21.924050   23443 main.go:141] libmachine: (ha-029113-m02)     <console type='pty'>
	I0717 00:44:21.924061   23443 main.go:141] libmachine: (ha-029113-m02)       <target type='serial' port='0'/>
	I0717 00:44:21.924076   23443 main.go:141] libmachine: (ha-029113-m02)     </console>
	I0717 00:44:21.924087   23443 main.go:141] libmachine: (ha-029113-m02)     <rng model='virtio'>
	I0717 00:44:21.924096   23443 main.go:141] libmachine: (ha-029113-m02)       <backend model='random'>/dev/random</backend>
	I0717 00:44:21.924106   23443 main.go:141] libmachine: (ha-029113-m02)     </rng>
	I0717 00:44:21.924115   23443 main.go:141] libmachine: (ha-029113-m02)     
	I0717 00:44:21.924121   23443 main.go:141] libmachine: (ha-029113-m02)     
	I0717 00:44:21.924128   23443 main.go:141] libmachine: (ha-029113-m02)   </devices>
	I0717 00:44:21.924134   23443 main.go:141] libmachine: (ha-029113-m02) </domain>
	I0717 00:44:21.924140   23443 main.go:141] libmachine: (ha-029113-m02) 
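The <domain> XML above is what the kvm2 driver hands to libvirt for the new node. As a rough, self-contained sketch (assuming the libvirt.org/go/libvirt bindings; this is not the driver's actual code), defining and then starting such a domain looks like:

	package main

	import (
		"log"
		"os"

		"libvirt.org/go/libvirt"
	)

	func main() {
		// The <domain> document shown in the log, saved to a file for this sketch.
		xml, err := os.ReadFile("ha-029113-m02.xml")
		if err != nil {
			log.Fatal(err)
		}
		// Matches KVMQemuURI:qemu:///system in the logged machine config.
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// "define libvirt domain using xml"
		dom, err := conn.DomainDefineXML(string(xml))
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()

		// "Creating domain..." (i.e. starting the defined domain)
		if err := dom.Create(); err != nil {
			log.Fatal(err)
		}
	}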
	I0717 00:44:21.930425   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:a0:6d:db in network default
	I0717 00:44:21.930927   23443 main.go:141] libmachine: (ha-029113-m02) Ensuring networks are active...
	I0717 00:44:21.930944   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:21.931531   23443 main.go:141] libmachine: (ha-029113-m02) Ensuring network default is active
	I0717 00:44:21.931817   23443 main.go:141] libmachine: (ha-029113-m02) Ensuring network mk-ha-029113 is active
	I0717 00:44:21.932164   23443 main.go:141] libmachine: (ha-029113-m02) Getting domain xml...
	I0717 00:44:21.932753   23443 main.go:141] libmachine: (ha-029113-m02) Creating domain...
	I0717 00:44:23.126388   23443 main.go:141] libmachine: (ha-029113-m02) Waiting to get IP...
	I0717 00:44:23.127189   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:23.127582   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find current IP address of domain ha-029113-m02 in network mk-ha-029113
	I0717 00:44:23.127605   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:23.127566   23838 retry.go:31] will retry after 306.500754ms: waiting for machine to come up
	I0717 00:44:23.436071   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:23.436493   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find current IP address of domain ha-029113-m02 in network mk-ha-029113
	I0717 00:44:23.436520   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:23.436452   23838 retry.go:31] will retry after 297.727134ms: waiting for machine to come up
	I0717 00:44:23.735908   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:23.736335   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find current IP address of domain ha-029113-m02 in network mk-ha-029113
	I0717 00:44:23.736363   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:23.736293   23838 retry.go:31] will retry after 313.394137ms: waiting for machine to come up
	I0717 00:44:24.051746   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:24.052195   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find current IP address of domain ha-029113-m02 in network mk-ha-029113
	I0717 00:44:24.052223   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:24.052166   23838 retry.go:31] will retry after 561.781093ms: waiting for machine to come up
	I0717 00:44:24.615446   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:24.615952   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find current IP address of domain ha-029113-m02 in network mk-ha-029113
	I0717 00:44:24.615975   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:24.615908   23838 retry.go:31] will retry after 656.549737ms: waiting for machine to come up
	I0717 00:44:25.273656   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:25.273998   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find current IP address of domain ha-029113-m02 in network mk-ha-029113
	I0717 00:44:25.274019   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:25.273966   23838 retry.go:31] will retry after 750.278987ms: waiting for machine to come up
	I0717 00:44:26.025760   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:26.026236   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find current IP address of domain ha-029113-m02 in network mk-ha-029113
	I0717 00:44:26.026257   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:26.026209   23838 retry.go:31] will retry after 963.408722ms: waiting for machine to come up
	I0717 00:44:26.991510   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:26.991951   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find current IP address of domain ha-029113-m02 in network mk-ha-029113
	I0717 00:44:26.992003   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:26.991922   23838 retry.go:31] will retry after 968.074979ms: waiting for machine to come up
	I0717 00:44:27.961278   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:27.961695   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find current IP address of domain ha-029113-m02 in network mk-ha-029113
	I0717 00:44:27.961730   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:27.961649   23838 retry.go:31] will retry after 1.855272264s: waiting for machine to come up
	I0717 00:44:29.819666   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:29.820060   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find current IP address of domain ha-029113-m02 in network mk-ha-029113
	I0717 00:44:29.820104   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:29.820014   23838 retry.go:31] will retry after 1.882719972s: waiting for machine to come up
	I0717 00:44:31.704098   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:31.704494   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find current IP address of domain ha-029113-m02 in network mk-ha-029113
	I0717 00:44:31.704523   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:31.704445   23838 retry.go:31] will retry after 2.138087395s: waiting for machine to come up
	I0717 00:44:33.843885   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:33.844361   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find current IP address of domain ha-029113-m02 in network mk-ha-029113
	I0717 00:44:33.844378   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:33.844328   23838 retry.go:31] will retry after 2.441061484s: waiting for machine to come up
	I0717 00:44:36.288764   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:36.289090   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find current IP address of domain ha-029113-m02 in network mk-ha-029113
	I0717 00:44:36.289114   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:36.289064   23838 retry.go:31] will retry after 2.940582098s: waiting for machine to come up
	I0717 00:44:39.233237   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:39.233595   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find current IP address of domain ha-029113-m02 in network mk-ha-029113
	I0717 00:44:39.233619   23443 main.go:141] libmachine: (ha-029113-m02) DBG | I0717 00:44:39.233567   23838 retry.go:31] will retry after 5.314621397s: waiting for machine to come up
	I0717 00:44:44.549835   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:44.550210   23443 main.go:141] libmachine: (ha-029113-m02) Found IP for machine: 192.168.39.166
	I0717 00:44:44.550236   23443 main.go:141] libmachine: (ha-029113-m02) Reserving static IP address...
	I0717 00:44:44.550250   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has current primary IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:44.550599   23443 main.go:141] libmachine: (ha-029113-m02) DBG | unable to find host DHCP lease matching {name: "ha-029113-m02", mac: "52:54:00:57:08:5b", ip: "192.168.39.166"} in network mk-ha-029113
	I0717 00:44:44.619403   23443 main.go:141] libmachine: (ha-029113-m02) Reserved static IP address: 192.168.39.166
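The repeated "will retry after ..." lines above are a poll-with-growing-delay loop that waits for DHCP to hand the new VM an address. A self-contained sketch of that pattern (helper name and backoff factor are illustrative, not minikube's actual retry implementation):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForIP polls lookup until it returns an address or the deadline passes,
	// stretching the sleep a little after every failed attempt, roughly like the
	// intervals in the log above.
	func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
		delay := 300 * time.Millisecond
		for start := time.Now(); time.Since(start) < deadline; {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			delay += delay / 2 // grow the wait between attempts
		}
		return "", errors.New("timed out waiting for an IP address")
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 4 {
				return "", errors.New("no lease yet") // simulate the DHCP wait
			}
			return "192.168.39.166", nil
		}, 30*time.Second)
		fmt.Println(ip, err)
	}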
	I0717 00:44:44.619427   23443 main.go:141] libmachine: (ha-029113-m02) Waiting for SSH to be available...
	I0717 00:44:44.619436   23443 main.go:141] libmachine: (ha-029113-m02) DBG | Getting to WaitForSSH function...
	I0717 00:44:44.621871   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:44.622215   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:minikube Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:44.622241   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:44.622389   23443 main.go:141] libmachine: (ha-029113-m02) DBG | Using SSH client type: external
	I0717 00:44:44.622414   23443 main.go:141] libmachine: (ha-029113-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02/id_rsa (-rw-------)
	I0717 00:44:44.622442   23443 main.go:141] libmachine: (ha-029113-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.166 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 00:44:44.622455   23443 main.go:141] libmachine: (ha-029113-m02) DBG | About to run SSH command:
	I0717 00:44:44.622467   23443 main.go:141] libmachine: (ha-029113-m02) DBG | exit 0
	I0717 00:44:44.754376   23443 main.go:141] libmachine: (ha-029113-m02) DBG | SSH cmd err, output: <nil>: 
	I0717 00:44:44.754594   23443 main.go:141] libmachine: (ha-029113-m02) KVM machine creation complete!
	I0717 00:44:44.754938   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetConfigRaw
	I0717 00:44:44.755465   23443 main.go:141] libmachine: (ha-029113-m02) Calling .DriverName
	I0717 00:44:44.755620   23443 main.go:141] libmachine: (ha-029113-m02) Calling .DriverName
	I0717 00:44:44.755740   23443 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 00:44:44.755753   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetState
	I0717 00:44:44.757016   23443 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 00:44:44.757026   23443 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 00:44:44.757033   23443 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 00:44:44.757038   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	I0717 00:44:44.759322   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:44.759651   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:44.759678   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:44.759829   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHPort
	I0717 00:44:44.760022   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:44.760202   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:44.760352   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHUsername
	I0717 00:44:44.760520   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:44:44.760749   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.166 22 <nil> <nil>}
	I0717 00:44:44.760761   23443 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 00:44:44.873910   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:44:44.873931   23443 main.go:141] libmachine: Detecting the provisioner...
	I0717 00:44:44.873937   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	I0717 00:44:44.876534   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:44.876879   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:44.876904   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:44.877036   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHPort
	I0717 00:44:44.877219   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:44.877369   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:44.877502   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHUsername
	I0717 00:44:44.877652   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:44:44.877812   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.166 22 <nil> <nil>}
	I0717 00:44:44.877822   23443 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 00:44:44.991371   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 00:44:44.991435   23443 main.go:141] libmachine: found compatible host: buildroot
	I0717 00:44:44.991444   23443 main.go:141] libmachine: Provisioning with buildroot...
	I0717 00:44:44.991456   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetMachineName
	I0717 00:44:44.991672   23443 buildroot.go:166] provisioning hostname "ha-029113-m02"
	I0717 00:44:44.991701   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetMachineName
	I0717 00:44:44.991897   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	I0717 00:44:44.994066   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:44.994457   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:44.994482   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:44.994602   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHPort
	I0717 00:44:44.994757   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:44.994909   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:44.995065   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHUsername
	I0717 00:44:44.995201   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:44:44.995355   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.166 22 <nil> <nil>}
	I0717 00:44:44.995366   23443 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-029113-m02 && echo "ha-029113-m02" | sudo tee /etc/hostname
	I0717 00:44:45.121167   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-029113-m02
	
	I0717 00:44:45.121194   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	I0717 00:44:45.123822   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.124130   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:45.124151   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.124376   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHPort
	I0717 00:44:45.124579   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:45.124736   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:45.124907   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHUsername
	I0717 00:44:45.125056   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:44:45.125227   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.166 22 <nil> <nil>}
	I0717 00:44:45.125248   23443 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-029113-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-029113-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-029113-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 00:44:45.247111   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:44:45.247142   23443 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 00:44:45.247156   23443 buildroot.go:174] setting up certificates
	I0717 00:44:45.247166   23443 provision.go:84] configureAuth start
	I0717 00:44:45.247174   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetMachineName
	I0717 00:44:45.247435   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetIP
	I0717 00:44:45.249911   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.250229   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:45.250248   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.250396   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	I0717 00:44:45.252384   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.252705   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:45.252731   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.252831   23443 provision.go:143] copyHostCerts
	I0717 00:44:45.252867   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 00:44:45.252906   23443 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 00:44:45.252920   23443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 00:44:45.253000   23443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 00:44:45.253079   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 00:44:45.253096   23443 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 00:44:45.253103   23443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 00:44:45.253127   23443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 00:44:45.253170   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 00:44:45.253191   23443 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 00:44:45.253199   23443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 00:44:45.253231   23443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 00:44:45.253298   23443 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.ha-029113-m02 san=[127.0.0.1 192.168.39.166 ha-029113-m02 localhost minikube]
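The "generating server cert" step above issues a TLS serving certificate signed by the minikube CA, with the SANs listed in the log line. A condensed standard-library sketch of that kind of issuance (the CA below is generated on the fly so the sketch runs standalone; the real run loads certs/ca.pem and certs/ca-key.pem):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Stand-in CA; minikube would load its existing CA key pair instead.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the org and SANs from the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-029113-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-029113-m02", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.166")},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}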
	I0717 00:44:45.367486   23443 provision.go:177] copyRemoteCerts
	I0717 00:44:45.367538   23443 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:44:45.367560   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	I0717 00:44:45.370013   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.370345   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:45.370381   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.370536   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHPort
	I0717 00:44:45.370734   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:45.370903   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHUsername
	I0717 00:44:45.371017   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02/id_rsa Username:docker}
	I0717 00:44:45.461167   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 00:44:45.461229   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 00:44:45.485049   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 00:44:45.485112   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 00:44:45.508303   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 00:44:45.508387   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 00:44:45.531564   23443 provision.go:87] duration metric: took 284.384948ms to configureAuth
	I0717 00:44:45.531592   23443 buildroot.go:189] setting minikube options for container-runtime
	I0717 00:44:45.531797   23443 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:44:45.531875   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	I0717 00:44:45.534512   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.534941   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:45.534970   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.535160   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHPort
	I0717 00:44:45.535346   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:45.535524   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:45.535686   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHUsername
	I0717 00:44:45.535844   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:44:45.536052   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.166 22 <nil> <nil>}
	I0717 00:44:45.536085   23443 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 00:44:45.806422   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 00:44:45.806448   23443 main.go:141] libmachine: Checking connection to Docker...
	I0717 00:44:45.806458   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetURL
	I0717 00:44:45.807725   23443 main.go:141] libmachine: (ha-029113-m02) DBG | Using libvirt version 6000000
	I0717 00:44:45.809981   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.810324   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:45.810348   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.810541   23443 main.go:141] libmachine: Docker is up and running!
	I0717 00:44:45.810569   23443 main.go:141] libmachine: Reticulating splines...
	I0717 00:44:45.810578   23443 client.go:171] duration metric: took 24.280485852s to LocalClient.Create
	I0717 00:44:45.810601   23443 start.go:167] duration metric: took 24.280544833s to libmachine.API.Create "ha-029113"
	I0717 00:44:45.810611   23443 start.go:293] postStartSetup for "ha-029113-m02" (driver="kvm2")
	I0717 00:44:45.810619   23443 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 00:44:45.810635   23443 main.go:141] libmachine: (ha-029113-m02) Calling .DriverName
	I0717 00:44:45.810871   23443 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 00:44:45.810896   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	I0717 00:44:45.813010   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.813352   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:45.813372   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.813564   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHPort
	I0717 00:44:45.813759   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:45.813918   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHUsername
	I0717 00:44:45.814075   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02/id_rsa Username:docker}
	I0717 00:44:45.901434   23443 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 00:44:45.905704   23443 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 00:44:45.905724   23443 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 00:44:45.905775   23443 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 00:44:45.905840   23443 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 00:44:45.905849   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> /etc/ssl/certs/112592.pem
	I0717 00:44:45.905924   23443 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 00:44:45.915672   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 00:44:45.938347   23443 start.go:296] duration metric: took 127.724614ms for postStartSetup
	I0717 00:44:45.938389   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetConfigRaw
	I0717 00:44:45.938915   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetIP
	I0717 00:44:45.941473   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.941818   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:45.941844   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.942090   23443 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/config.json ...
	I0717 00:44:45.942259   23443 start.go:128] duration metric: took 24.429673631s to createHost
	I0717 00:44:45.942279   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	I0717 00:44:45.944493   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.944885   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:45.944909   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:45.945027   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHPort
	I0717 00:44:45.945193   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:45.945299   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:45.945443   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHUsername
	I0717 00:44:45.945569   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:44:45.945753   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.166 22 <nil> <nil>}
	I0717 00:44:45.945765   23443 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 00:44:46.059255   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721177086.035700096
	
	I0717 00:44:46.059279   23443 fix.go:216] guest clock: 1721177086.035700096
	I0717 00:44:46.059289   23443 fix.go:229] Guest: 2024-07-17 00:44:46.035700096 +0000 UTC Remote: 2024-07-17 00:44:45.942268698 +0000 UTC m=+76.344543852 (delta=93.431398ms)
	I0717 00:44:46.059314   23443 fix.go:200] guest clock delta is within tolerance: 93.431398ms
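The lines above run `date +%s.%N` on the guest (the `%!s(MISSING).%!N(MISSING)` rendering is Go's fmt marker for format verbs without matching arguments) and compare the result to the host's wall clock, accepting a small delta. A minimal sketch of that tolerance check (names are illustrative):

	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance reports whether the guest clock is within tolerance of the
	// host clock, in either direction.
	func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(93431398 * time.Nanosecond) // the ~93.43ms delta seen in the log
		fmt.Println("within tolerance:", withinTolerance(guest, host, time.Second))
	}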
	I0717 00:44:46.059319   23443 start.go:83] releasing machines lock for "ha-029113-m02", held for 24.546818872s
	I0717 00:44:46.059337   23443 main.go:141] libmachine: (ha-029113-m02) Calling .DriverName
	I0717 00:44:46.059590   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetIP
	I0717 00:44:46.062135   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:46.062416   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:46.062441   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:46.064914   23443 out.go:177] * Found network options:
	I0717 00:44:46.066490   23443 out.go:177]   - NO_PROXY=192.168.39.95
	W0717 00:44:46.067961   23443 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 00:44:46.067994   23443 main.go:141] libmachine: (ha-029113-m02) Calling .DriverName
	I0717 00:44:46.068503   23443 main.go:141] libmachine: (ha-029113-m02) Calling .DriverName
	I0717 00:44:46.068668   23443 main.go:141] libmachine: (ha-029113-m02) Calling .DriverName
	I0717 00:44:46.068765   23443 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 00:44:46.068802   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	W0717 00:44:46.068999   23443 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 00:44:46.069064   23443 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 00:44:46.069085   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	I0717 00:44:46.071597   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:46.071818   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:46.072006   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:46.072031   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:46.072154   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHPort
	I0717 00:44:46.072162   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:46.072181   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:46.072316   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:46.072367   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHPort
	I0717 00:44:46.072469   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHUsername
	I0717 00:44:46.072548   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 00:44:46.072858   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHUsername
	I0717 00:44:46.072857   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02/id_rsa Username:docker}
	I0717 00:44:46.073026   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02/id_rsa Username:docker}
	I0717 00:44:46.312405   23443 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 00:44:46.318777   23443 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 00:44:46.318828   23443 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:44:46.334305   23443 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 00:44:46.334321   23443 start.go:495] detecting cgroup driver to use...
	I0717 00:44:46.334378   23443 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 00:44:46.349642   23443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 00:44:46.363703   23443 docker.go:217] disabling cri-docker service (if available) ...
	I0717 00:44:46.363741   23443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 00:44:46.377732   23443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 00:44:46.391523   23443 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 00:44:46.511229   23443 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 00:44:46.672516   23443 docker.go:233] disabling docker service ...
	I0717 00:44:46.672571   23443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 00:44:46.687542   23443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 00:44:46.701406   23443 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 00:44:46.824789   23443 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 00:44:46.940462   23443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 00:44:46.955830   23443 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 00:44:46.974487   23443 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 00:44:46.974541   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:44:46.984766   23443 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 00:44:46.984828   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:44:46.994802   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:44:47.004509   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:44:47.014241   23443 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 00:44:47.024510   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:44:47.034748   23443 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:44:47.051448   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:44:47.061198   23443 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 00:44:47.070140   23443 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 00:44:47.070187   23443 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 00:44:47.083255   23443 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 00:44:47.092470   23443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:44:47.206987   23443 ssh_runner.go:195] Run: sudo systemctl restart crio
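The sed edits above configure the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf for the new node (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before crio is restarted. As a minimal sketch, assuming shell access to the m02 guest (e.g. via minikube ssh), the effective settings could be checked like this; the file path and keys are taken directly from the commands above:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the sed commands above:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",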
	I0717 00:44:47.343140   23443 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 00:44:47.343196   23443 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 00:44:47.348111   23443 start.go:563] Will wait 60s for crictl version
	I0717 00:44:47.348154   23443 ssh_runner.go:195] Run: which crictl
	I0717 00:44:47.351750   23443 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 00:44:47.391937   23443 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 00:44:47.392030   23443 ssh_runner.go:195] Run: crio --version
	I0717 00:44:47.418173   23443 ssh_runner.go:195] Run: crio --version
	I0717 00:44:47.450323   23443 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 00:44:47.451753   23443 out.go:177]   - env NO_PROXY=192.168.39.95
	I0717 00:44:47.452947   23443 main.go:141] libmachine: (ha-029113-m02) Calling .GetIP
	I0717 00:44:47.455382   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:47.455715   23443 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:44:35 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 00:44:47.455745   23443 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 00:44:47.455939   23443 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 00:44:47.460382   23443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:44:47.473520   23443 mustload.go:65] Loading cluster: ha-029113
	I0717 00:44:47.473743   23443 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:44:47.474009   23443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:44:47.474044   23443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:44:47.488577   23443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38313
	I0717 00:44:47.488983   23443 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:44:47.489429   23443 main.go:141] libmachine: Using API Version  1
	I0717 00:44:47.489453   23443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:44:47.489783   23443 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:44:47.489987   23443 main.go:141] libmachine: (ha-029113) Calling .GetState
	I0717 00:44:47.491527   23443 host.go:66] Checking if "ha-029113" exists ...
	I0717 00:44:47.491848   23443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:44:47.491884   23443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:44:47.506250   23443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33735
	I0717 00:44:47.506667   23443 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:44:47.507096   23443 main.go:141] libmachine: Using API Version  1
	I0717 00:44:47.507113   23443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:44:47.507387   23443 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:44:47.507554   23443 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:44:47.507703   23443 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113 for IP: 192.168.39.166
	I0717 00:44:47.507715   23443 certs.go:194] generating shared ca certs ...
	I0717 00:44:47.507727   23443 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:44:47.507847   23443 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 00:44:47.507881   23443 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 00:44:47.507889   23443 certs.go:256] generating profile certs ...
	I0717 00:44:47.507963   23443 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.key
	I0717 00:44:47.507984   23443 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.9ce6be2b
	I0717 00:44:47.507997   23443 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.9ce6be2b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.95 192.168.39.166 192.168.39.254]
	I0717 00:44:47.577327   23443 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.9ce6be2b ...
	I0717 00:44:47.577354   23443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.9ce6be2b: {Name:mk3f595e3dd15d8a18c9e4b6cfe842899acd5768 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:44:47.577527   23443 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.9ce6be2b ...
	I0717 00:44:47.577546   23443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.9ce6be2b: {Name:mkb6a95690716dce45479bd0140a631685524c54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:44:47.577638   23443 certs.go:381] copying /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.9ce6be2b -> /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt
	I0717 00:44:47.577799   23443 certs.go:385] copying /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.9ce6be2b -> /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key
	I0717 00:44:47.577965   23443 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.key
	I0717 00:44:47.577983   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 00:44:47.578000   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 00:44:47.578019   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 00:44:47.578037   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 00:44:47.578054   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 00:44:47.578069   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 00:44:47.578084   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 00:44:47.578105   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 00:44:47.578165   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 00:44:47.578205   23443 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 00:44:47.578217   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 00:44:47.578249   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 00:44:47.578277   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 00:44:47.578306   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 00:44:47.578360   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 00:44:47.578407   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> /usr/share/ca-certificates/112592.pem
	I0717 00:44:47.578428   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:44:47.578444   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem -> /usr/share/ca-certificates/11259.pem
	I0717 00:44:47.578486   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:44:47.581366   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:44:47.581763   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:44:47.581793   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:44:47.581925   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:44:47.582099   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:44:47.582232   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:44:47.582369   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:44:47.650909   23443 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0717 00:44:47.655905   23443 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0717 00:44:47.669701   23443 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0717 00:44:47.674392   23443 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0717 00:44:47.685145   23443 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0717 00:44:47.689313   23443 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0717 00:44:47.699759   23443 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0717 00:44:47.703880   23443 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0717 00:44:47.714787   23443 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0717 00:44:47.718807   23443 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0717 00:44:47.730025   23443 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0717 00:44:47.733952   23443 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0717 00:44:47.744715   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 00:44:47.769348   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 00:44:47.791962   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 00:44:47.813987   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 00:44:47.836524   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0717 00:44:47.858849   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 00:44:47.882004   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 00:44:47.905053   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 00:44:47.927456   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 00:44:47.949565   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 00:44:47.971731   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 00:44:47.993759   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0717 00:44:48.011244   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0717 00:44:48.028584   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0717 00:44:48.046431   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0717 00:44:48.063901   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0717 00:44:48.081546   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0717 00:44:48.098796   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0717 00:44:48.116223   23443 ssh_runner.go:195] Run: openssl version
	I0717 00:44:48.121710   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 00:44:48.133256   23443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 00:44:48.137547   23443 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 00:44:48.137590   23443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 00:44:48.143006   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 00:44:48.153361   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 00:44:48.163661   23443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:44:48.167728   23443 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:44:48.167771   23443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:44:48.172985   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 00:44:48.183502   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 00:44:48.193639   23443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 00:44:48.198007   23443 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 00:44:48.198051   23443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 00:44:48.203472   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
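The openssl/ln steps above install the copied CA material into the system trust store: each PEM under /usr/share/ca-certificates is hashed with openssl x509 -hash -noout and a symlink named <hash>.0 is created under /etc/ssl/certs (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 for 112592.pem, 51391683.0 for 11259.pem). A minimal sketch of verifying one of those links by hand on the node, using the same commands that appear in the log:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0   # symlink back to /usr/share/ca-certificates/minikubeCA.pem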
	I0717 00:44:48.214228   23443 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 00:44:48.218011   23443 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 00:44:48.218057   23443 kubeadm.go:934] updating node {m02 192.168.39.166 8443 v1.30.2 crio true true} ...
	I0717 00:44:48.218124   23443 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-029113-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.166
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-029113 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
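The kubelet unit drop-in shown above is what later gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes, see the scp further down); its ExecStart line pins --hostname-override to ha-029113-m02 and --node-ip to 192.168.39.166. A minimal sketch, assuming shell access to the node, of confirming the rendered unit once it is in place:

    systemctl cat kubelet          # shows /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in
    systemctl is-active kubelet    # reports "active" after the "sudo systemctl start kubelet" step below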
	I0717 00:44:48.218145   23443 kube-vip.go:115] generating kube-vip config ...
	I0717 00:44:48.218170   23443 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 00:44:48.235840   23443 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 00:44:48.235918   23443 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
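The kube-vip config generated above is later written as a static pod manifest to /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes, see the scp below); it runs ghcr.io/kube-vip/kube-vip:v0.8.0 with control-plane load-balancing enabled and advertises the HA VIP 192.168.39.254 on port 8443. A minimal sketch, assuming shell access to the node, of checking the manifest after it lands:

    ls -l /etc/kubernetes/manifests/kube-vip.yaml
    grep -A1 'name: address' /etc/kubernetes/manifests/kube-vip.yaml   # value: 192.168.39.254 (the HA VIP)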
	I0717 00:44:48.235966   23443 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 00:44:48.245971   23443 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0717 00:44:48.246012   23443 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0717 00:44:48.256116   23443 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0717 00:44:48.256148   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 00:44:48.256183   23443 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.30.2/kubeadm
	I0717 00:44:48.256205   23443 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.30.2/kubelet
	I0717 00:44:48.256217   23443 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 00:44:48.260498   23443 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0717 00:44:48.260520   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0717 00:45:30.425423   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 00:45:30.425504   23443 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 00:45:30.432852   23443 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0717 00:45:30.432882   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0717 00:46:16.676403   23443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:46:16.692621   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 00:46:16.692752   23443 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 00:46:16.697375   23443 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0717 00:46:16.697402   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
	I0717 00:46:17.071538   23443 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0717 00:46:17.081503   23443 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0717 00:46:17.099148   23443 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 00:46:17.116667   23443 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0717 00:46:17.133894   23443 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 00:46:17.138280   23443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:46:17.151248   23443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:46:17.272941   23443 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:46:17.290512   23443 host.go:66] Checking if "ha-029113" exists ...
	I0717 00:46:17.290911   23443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:46:17.290948   23443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:46:17.306307   23443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35611
	I0717 00:46:17.306772   23443 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:46:17.307306   23443 main.go:141] libmachine: Using API Version  1
	I0717 00:46:17.307333   23443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:46:17.307632   23443 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:46:17.307815   23443 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:46:17.307973   23443 start.go:317] joinCluster: &{Name:ha-029113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-029113 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:46:17.308077   23443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0717 00:46:17.308091   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:46:17.311008   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:46:17.311389   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:46:17.311411   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:46:17.311609   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:46:17.311866   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:46:17.312017   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:46:17.312170   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:46:17.473835   23443 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:46:17.473894   23443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qxsifa.szb8lo03p23cph9a --discovery-token-ca-cert-hash sha256:c106fa53fe39228066a4d6d0a3d1523262a277fcc4b4de2aef480ed92843f134 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-029113-m02 --control-plane --apiserver-advertise-address=192.168.39.166 --apiserver-bind-port=8443"
	I0717 00:46:39.394930   23443 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qxsifa.szb8lo03p23cph9a --discovery-token-ca-cert-hash sha256:c106fa53fe39228066a4d6d0a3d1523262a277fcc4b4de2aef480ed92843f134 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-029113-m02 --control-plane --apiserver-advertise-address=192.168.39.166 --apiserver-bind-port=8443": (21.920994841s)
	I0717 00:46:39.394975   23443 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0717 00:46:39.825420   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-029113-m02 minikube.k8s.io/updated_at=2024_07_17T00_46_39_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185 minikube.k8s.io/name=ha-029113 minikube.k8s.io/primary=false
	I0717 00:46:39.944995   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-029113-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0717 00:46:40.064513   23443 start.go:319] duration metric: took 22.75653534s to joinCluster
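At this point m02 has joined as a second control-plane member (the kubeadm join itself took ~21.9s, the whole joinCluster step ~22.8s), been labeled with the minikube metadata, and had the control-plane NoSchedule taint removed. As a sketch of what the cluster looks like right after the join (hypothetical invocation; the kubeconfig context name ha-029113 is an assumption, following minikube's usual profile-named contexts):

    kubectl --context ha-029113 get nodes -o wide
    # ha-029113       Ready      control-plane   ...   192.168.39.95
    # ha-029113-m02   NotReady   control-plane   ...   192.168.39.166   (becomes Ready once its pods come up, as polled below)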
	I0717 00:46:40.064615   23443 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:46:40.064937   23443 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:46:40.066305   23443 out.go:177] * Verifying Kubernetes components...
	I0717 00:46:40.067294   23443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:46:40.254167   23443 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:46:40.283487   23443 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 00:46:40.283835   23443 kapi.go:59] client config for ha-029113: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.crt", KeyFile:"/home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.key", CAFile:"/home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d01f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0717 00:46:40.283927   23443 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.95:8443
	I0717 00:46:40.284215   23443 node_ready.go:35] waiting up to 6m0s for node "ha-029113-m02" to be "Ready" ...
	I0717 00:46:40.284345   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:40.284358   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:40.284371   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:40.284375   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:40.298345   23443 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
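The round_trippers lines that follow are minikube polling GET /api/v1/nodes/ha-029113-m02 roughly every 500ms, for up to the 6m0s stated above, until the node reports Ready; the repeated "Ready":"False" entries below show the condition while the new control plane is still starting. Roughly the same check, as a sketch (hypothetical jsonpath query, not part of the test run):

    kubectl --context ha-029113 get node ha-029113-m02 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints False until the node is Ready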
	I0717 00:46:40.785405   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:40.785428   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:40.785437   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:40.785441   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:40.789173   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:41.285260   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:41.285283   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:41.285293   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:41.285298   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:41.289165   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:41.784508   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:41.784533   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:41.784540   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:41.784546   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:41.787864   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:42.285159   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:42.285186   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:42.285196   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:42.285201   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:42.288243   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:42.288953   23443 node_ready.go:53] node "ha-029113-m02" has status "Ready":"False"
	I0717 00:46:42.784846   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:42.784883   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:42.784897   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:42.784902   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:42.789853   23443 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:46:43.284594   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:43.284618   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:43.284628   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:43.284633   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:43.288162   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:43.785076   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:43.785096   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:43.785105   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:43.785108   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:43.789071   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:44.284682   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:44.284702   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:44.284709   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:44.284714   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:44.288040   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:44.784655   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:44.784675   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:44.784683   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:44.784686   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:44.787807   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:44.788647   23443 node_ready.go:53] node "ha-029113-m02" has status "Ready":"False"
	I0717 00:46:45.285188   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:45.285214   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:45.285222   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:45.285226   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:45.288258   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:45.785228   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:45.785250   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:45.785258   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:45.785262   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:45.788864   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:46.285064   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:46.285086   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:46.285096   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:46.285104   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:46.288877   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:46.785321   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:46.785345   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:46.785356   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:46.785365   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:46.788427   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:46.789072   23443 node_ready.go:53] node "ha-029113-m02" has status "Ready":"False"
	I0717 00:46:47.284430   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:47.284456   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:47.284466   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:47.284471   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:47.287994   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:47.785131   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:47.785152   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:47.785159   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:47.785163   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:47.788266   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:48.285203   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:48.285222   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:48.285229   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:48.285234   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:48.288790   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:48.784460   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:48.784482   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:48.784490   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:48.784495   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:48.787573   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:49.284601   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:49.284622   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:49.284634   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:49.284643   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:49.288480   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:49.289249   23443 node_ready.go:53] node "ha-029113-m02" has status "Ready":"False"
	I0717 00:46:49.785350   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:49.785373   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:49.785384   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:49.785392   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:49.788492   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:50.285416   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:50.285437   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:50.285445   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:50.285450   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:50.288808   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:50.785052   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:50.785072   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:50.785080   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:50.785086   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:50.788606   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:51.285137   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:51.285159   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:51.285167   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:51.285171   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:51.288279   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:51.784648   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:51.784668   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:51.784677   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:51.784682   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:51.787854   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:51.788548   23443 node_ready.go:53] node "ha-029113-m02" has status "Ready":"False"
	I0717 00:46:52.284844   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:52.284865   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:52.284873   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:52.284877   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:52.288326   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:52.784372   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:52.784393   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:52.784404   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:52.784407   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:52.787594   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:53.284770   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:53.284788   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:53.284797   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:53.284800   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:53.287700   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:46:53.784806   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:53.784831   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:53.784843   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:53.784850   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:53.788358   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:53.788974   23443 node_ready.go:53] node "ha-029113-m02" has status "Ready":"False"
	I0717 00:46:54.284992   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:54.285014   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:54.285023   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:54.285028   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:54.288147   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:54.784702   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:54.784724   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:54.784731   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:54.784737   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:54.788084   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:55.285150   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:55.285180   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:55.285190   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:55.285195   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:55.288527   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:55.785452   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:55.785473   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:55.785481   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:55.785486   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:55.788704   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:55.789401   23443 node_ready.go:53] node "ha-029113-m02" has status "Ready":"False"
	I0717 00:46:56.284802   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:56.284821   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:56.284830   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:56.284835   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:56.288441   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:56.784811   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:56.784837   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:56.784848   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:56.784854   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:56.788360   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:57.284771   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:57.284793   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:57.284801   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:57.284805   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:57.288469   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:57.784918   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:57.784943   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:57.784955   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:57.784963   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:57.787851   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:46:58.284624   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:58.284648   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:58.284658   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:58.284664   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:58.287842   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:58.288449   23443 node_ready.go:53] node "ha-029113-m02" has status "Ready":"False"
	I0717 00:46:58.784857   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:58.784876   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:58.784883   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:58.784887   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:58.787484   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:46:59.284489   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:59.284509   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:59.284516   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:59.284520   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:59.287792   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:46:59.784365   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:46:59.784395   23443 round_trippers.go:469] Request Headers:
	I0717 00:46:59.784403   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:46:59.784408   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:46:59.787927   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:00.285086   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:47:00.285110   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:00.285117   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:00.285121   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:00.288385   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:00.288926   23443 node_ready.go:49] node "ha-029113-m02" has status "Ready":"True"
	I0717 00:47:00.288943   23443 node_ready.go:38] duration metric: took 20.004703741s for node "ha-029113-m02" to be "Ready" ...
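The block above is minikube polling GET /api/v1/nodes/ha-029113-m02 roughly every 500ms until the node's Ready condition flips to True. As a rough illustration of the same check (this is not minikube's node_ready.go; the kubeconfig path and timeout are placeholders), a client-go sketch could look like:

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the named node until its Ready condition is True or ctx expires.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
    for {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err == nil {
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    return nil
                }
            }
        }
        select {
        case <-ctx.Done():
            return fmt.Errorf("node %q not Ready: %w", name, ctx.Err())
        case <-time.After(500 * time.Millisecond): // roughly the polling cadence seen in the log
        }
    }
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    defer cancel()
    if err := waitNodeReady(ctx, cs, "ha-029113-m02"); err != nil {
        panic(err)
    }
    fmt.Println("node ha-029113-m02 is Ready")
}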
	I0717 00:47:00.288950   23443 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 00:47:00.289029   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods
	I0717 00:47:00.289037   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:00.289045   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:00.289050   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:00.296020   23443 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:47:00.302225   23443 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-62m67" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:00.302297   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-62m67
	I0717 00:47:00.302309   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:00.302319   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:00.302327   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:00.305104   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:47:00.305672   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:47:00.305685   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:00.305692   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:00.305696   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:00.308163   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:47:00.308719   23443 pod_ready.go:92] pod "coredns-7db6d8ff4d-62m67" in "kube-system" namespace has status "Ready":"True"
	I0717 00:47:00.308733   23443 pod_ready.go:81] duration metric: took 6.486043ms for pod "coredns-7db6d8ff4d-62m67" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:00.308741   23443 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xdlls" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:00.308788   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xdlls
	I0717 00:47:00.308795   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:00.308802   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:00.308805   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:00.311143   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:47:00.311613   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:47:00.311626   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:00.311632   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:00.311636   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:00.313674   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:47:00.314129   23443 pod_ready.go:92] pod "coredns-7db6d8ff4d-xdlls" in "kube-system" namespace has status "Ready":"True"
	I0717 00:47:00.314143   23443 pod_ready.go:81] duration metric: took 5.396922ms for pod "coredns-7db6d8ff4d-xdlls" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:00.314150   23443 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:00.314186   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/etcd-ha-029113
	I0717 00:47:00.314193   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:00.314199   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:00.314204   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:00.316320   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:47:00.316917   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:47:00.316928   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:00.316934   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:00.316937   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:00.319330   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:47:00.319788   23443 pod_ready.go:92] pod "etcd-ha-029113" in "kube-system" namespace has status "Ready":"True"
	I0717 00:47:00.319802   23443 pod_ready.go:81] duration metric: took 5.646782ms for pod "etcd-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:00.319808   23443 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:00.319852   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/etcd-ha-029113-m02
	I0717 00:47:00.319862   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:00.319871   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:00.319878   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:00.322504   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:47:00.323427   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:47:00.323439   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:00.323446   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:00.323450   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:00.325614   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:47:00.326018   23443 pod_ready.go:92] pod "etcd-ha-029113-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:47:00.326035   23443 pod_ready.go:81] duration metric: took 6.219819ms for pod "etcd-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:00.326048   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:00.485438   23443 request.go:629] Waited for 159.341918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-029113
	I0717 00:47:00.485524   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-029113
	I0717 00:47:00.485534   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:00.485542   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:00.485549   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:00.489065   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:00.685968   23443 request.go:629] Waited for 196.009264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:47:00.686028   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:47:00.686046   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:00.686055   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:00.686060   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:00.689388   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:00.689984   23443 pod_ready.go:92] pod "kube-apiserver-ha-029113" in "kube-system" namespace has status "Ready":"True"
	I0717 00:47:00.689999   23443 pod_ready.go:81] duration metric: took 363.94506ms for pod "kube-apiserver-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:00.690009   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:00.885313   23443 request.go:629] Waited for 195.246505ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-029113-m02
	I0717 00:47:00.885373   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-029113-m02
	I0717 00:47:00.885378   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:00.885383   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:00.885386   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:00.888552   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:01.085428   23443 request.go:629] Waited for 196.22971ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:47:01.085503   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:47:01.085508   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:01.085516   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:01.085519   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:01.089022   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:01.089673   23443 pod_ready.go:92] pod "kube-apiserver-ha-029113-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:47:01.089690   23443 pod_ready.go:81] duration metric: took 399.675191ms for pod "kube-apiserver-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:01.089699   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:01.285788   23443 request.go:629] Waited for 196.037905ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-029113
	I0717 00:47:01.285850   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-029113
	I0717 00:47:01.285858   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:01.285868   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:01.285875   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:01.288963   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:01.485823   23443 request.go:629] Waited for 196.363674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:47:01.485905   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:47:01.485913   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:01.485923   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:01.485932   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:01.489211   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:01.489725   23443 pod_ready.go:92] pod "kube-controller-manager-ha-029113" in "kube-system" namespace has status "Ready":"True"
	I0717 00:47:01.489750   23443 pod_ready.go:81] duration metric: took 400.046262ms for pod "kube-controller-manager-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:01.489760   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:01.685086   23443 request.go:629] Waited for 195.254717ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-029113-m02
	I0717 00:47:01.685161   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-029113-m02
	I0717 00:47:01.685170   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:01.685178   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:01.685183   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:01.688673   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:01.885694   23443 request.go:629] Waited for 196.329757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:47:01.885755   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:47:01.885760   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:01.885767   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:01.885772   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:01.888957   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:01.889401   23443 pod_ready.go:92] pod "kube-controller-manager-ha-029113-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:47:01.889418   23443 pod_ready.go:81] duration metric: took 399.652066ms for pod "kube-controller-manager-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:01.889427   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2wz5p" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:02.085632   23443 request.go:629] Waited for 196.139901ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2wz5p
	I0717 00:47:02.085691   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2wz5p
	I0717 00:47:02.085698   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:02.085707   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:02.085714   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:02.089129   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:02.286065   23443 request.go:629] Waited for 196.382564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:47:02.286129   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:47:02.286137   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:02.286146   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:02.286153   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:02.289793   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:02.290283   23443 pod_ready.go:92] pod "kube-proxy-2wz5p" in "kube-system" namespace has status "Ready":"True"
	I0717 00:47:02.290308   23443 pod_ready.go:81] duration metric: took 400.873927ms for pod "kube-proxy-2wz5p" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:02.290322   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hg2kp" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:02.485968   23443 request.go:629] Waited for 195.585298ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg2kp
	I0717 00:47:02.486038   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg2kp
	I0717 00:47:02.486044   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:02.486051   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:02.486054   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:02.489411   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:02.685828   23443 request.go:629] Waited for 195.861626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:47:02.685879   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:47:02.685884   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:02.685892   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:02.685895   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:02.689465   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:02.689972   23443 pod_ready.go:92] pod "kube-proxy-hg2kp" in "kube-system" namespace has status "Ready":"True"
	I0717 00:47:02.689993   23443 pod_ready.go:81] duration metric: took 399.664283ms for pod "kube-proxy-hg2kp" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:02.690002   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:02.885138   23443 request.go:629] Waited for 195.073995ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-029113
	I0717 00:47:02.885208   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-029113
	I0717 00:47:02.885215   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:02.885230   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:02.885239   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:02.888801   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:03.085815   23443 request.go:629] Waited for 196.390923ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:47:03.085861   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:47:03.085866   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:03.085875   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:03.085881   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:03.089147   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:03.089749   23443 pod_ready.go:92] pod "kube-scheduler-ha-029113" in "kube-system" namespace has status "Ready":"True"
	I0717 00:47:03.089775   23443 pod_ready.go:81] duration metric: took 399.765556ms for pod "kube-scheduler-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:03.089789   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:03.285832   23443 request.go:629] Waited for 195.977772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-029113-m02
	I0717 00:47:03.285902   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-029113-m02
	I0717 00:47:03.285909   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:03.285918   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:03.285935   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:03.289075   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:03.485171   23443 request.go:629] Waited for 195.292447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:47:03.485219   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:47:03.485224   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:03.485231   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:03.485235   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:03.488367   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:03.488968   23443 pod_ready.go:92] pod "kube-scheduler-ha-029113-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:47:03.488991   23443 pod_ready.go:81] duration metric: took 399.189538ms for pod "kube-scheduler-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:47:03.489003   23443 pod_ready.go:38] duration metric: took 3.200018447s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
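The "Waited for ... due to client-side throttling, not priority and fairness" lines interleaved above are emitted by client-go's default client-side rate limiter (QPS 5, burst 10), not by the API server. A hedged sketch of raising those limits on a rest.Config (the kubeconfig path and the new values are illustrative only):

package main

import (
    "fmt"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    if err != nil {
        panic(err)
    }
    // The defaults (QPS 5, Burst 10) are what produce the throttling waits in the log.
    cfg.QPS = 50
    cfg.Burst = 100
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    fmt.Printf("client ready: %T\n", cs)
}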
	I0717 00:47:03.489020   23443 api_server.go:52] waiting for apiserver process to appear ...
	I0717 00:47:03.489081   23443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:47:03.508331   23443 api_server.go:72] duration metric: took 23.443679601s to wait for apiserver process to appear ...
	I0717 00:47:03.508351   23443 api_server.go:88] waiting for apiserver healthz status ...
	I0717 00:47:03.508367   23443 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8443/healthz ...
	I0717 00:47:03.512924   23443 api_server.go:279] https://192.168.39.95:8443/healthz returned 200:
	ok
	I0717 00:47:03.512977   23443 round_trippers.go:463] GET https://192.168.39.95:8443/version
	I0717 00:47:03.512984   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:03.512998   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:03.513006   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:03.513923   23443 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 00:47:03.514022   23443 api_server.go:141] control plane version: v1.30.2
	I0717 00:47:03.514040   23443 api_server.go:131] duration metric: took 5.683875ms to wait for apiserver health ...
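The health and version probes above are plain GETs against /healthz and /version on the control-plane endpoint; /healthz returns the literal body "ok" and /version is where the "control plane version: v1.30.2" line comes from. A minimal client-go sketch of the same two calls (kubeconfig path is a placeholder):

package main

import (
    "context"
    "fmt"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    // GET /healthz, expecting "ok".
    body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(context.Background()).Raw()
    if err != nil {
        panic(err)
    }
    fmt.Printf("healthz: %s\n", body)

    // GET /version.
    v, err := cs.Discovery().ServerVersion()
    if err != nil {
        panic(err)
    }
    fmt.Println("control plane version:", v.GitVersion)
}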
	I0717 00:47:03.514049   23443 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 00:47:03.685451   23443 request.go:629] Waited for 171.349564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods
	I0717 00:47:03.685523   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods
	I0717 00:47:03.685532   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:03.685540   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:03.685547   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:03.692926   23443 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 00:47:03.697988   23443 system_pods.go:59] 17 kube-system pods found
	I0717 00:47:03.698021   23443 system_pods.go:61] "coredns-7db6d8ff4d-62m67" [5029f9dc-6792-44d9-9296-ec5ab6d72274] Running
	I0717 00:47:03.698028   23443 system_pods.go:61] "coredns-7db6d8ff4d-xdlls" [4344b971-b979-42f8-8fa8-01f2d64bb51a] Running
	I0717 00:47:03.698031   23443 system_pods.go:61] "etcd-ha-029113" [10122569-9dc1-4680-8d11-aa7d4c719cec] Running
	I0717 00:47:03.698035   23443 system_pods.go:61] "etcd-ha-029113-m02" [a0f65752-ddcf-493d-bc0b-e4cb2ac8d635] Running
	I0717 00:47:03.698038   23443 system_pods.go:61] "kindnet-8xg7d" [a612c634-49ef-4357-9b36-f5cc6604bdd7] Running
	I0717 00:47:03.698041   23443 system_pods.go:61] "kindnet-k7vzq" [8198e4a4-080e-482a-a0b3-58e796bdd230] Running
	I0717 00:47:03.698044   23443 system_pods.go:61] "kube-apiserver-ha-029113" [167d337c-6406-4f80-8a60-aebdca26066b] Running
	I0717 00:47:03.698047   23443 system_pods.go:61] "kube-apiserver-ha-029113-m02" [d64aa0f0-e41f-4a5e-b4fe-48665061673e] Running
	I0717 00:47:03.698050   23443 system_pods.go:61] "kube-controller-manager-ha-029113" [8f1ee225-f6a3-4943-976a-9cc14607a654] Running
	I0717 00:47:03.698057   23443 system_pods.go:61] "kube-controller-manager-ha-029113-m02" [d180826c-b18e-49a7-8a1a-576c1a64fd51] Running
	I0717 00:47:03.698060   23443 system_pods.go:61] "kube-proxy-2wz5p" [285b947d-fa11-40fb-befa-1fa4451787d4] Running
	I0717 00:47:03.698063   23443 system_pods.go:61] "kube-proxy-hg2kp" [db9243f4-bcc0-406a-a8f2-ccdbc00f6341] Running
	I0717 00:47:03.698066   23443 system_pods.go:61] "kube-scheduler-ha-029113" [e3b5629d-5647-437e-a87c-0c91f2cd26d7] Running
	I0717 00:47:03.698068   23443 system_pods.go:61] "kube-scheduler-ha-029113-m02" [0f986464-8d17-4727-906b-4d8c58afbe5d] Running
	I0717 00:47:03.698071   23443 system_pods.go:61] "kube-vip-ha-029113" [985763eb-2a45-4820-a3db-e2af6d9291e0] Running
	I0717 00:47:03.698074   23443 system_pods.go:61] "kube-vip-ha-029113-m02" [0d64dace-cdb3-4abb-8d92-b205dc611777] Running
	I0717 00:47:03.698077   23443 system_pods.go:61] "storage-provisioner" [b9f04e5d-469e-4432-bd31-dbe772194f84] Running
	I0717 00:47:03.698082   23443 system_pods.go:74] duration metric: took 184.028654ms to wait for pod list to return data ...
	I0717 00:47:03.698092   23443 default_sa.go:34] waiting for default service account to be created ...
	I0717 00:47:03.885527   23443 request.go:629] Waited for 187.360853ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/default/serviceaccounts
	I0717 00:47:03.885587   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/default/serviceaccounts
	I0717 00:47:03.885592   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:03.885600   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:03.885604   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:03.888683   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:03.888912   23443 default_sa.go:45] found service account: "default"
	I0717 00:47:03.888931   23443 default_sa.go:55] duration metric: took 190.833114ms for default service account to be created ...
	I0717 00:47:03.888939   23443 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 00:47:04.085295   23443 request.go:629] Waited for 196.304645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods
	I0717 00:47:04.085342   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods
	I0717 00:47:04.085348   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:04.085355   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:04.085359   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:04.090365   23443 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:47:04.094886   23443 system_pods.go:86] 17 kube-system pods found
	I0717 00:47:04.094910   23443 system_pods.go:89] "coredns-7db6d8ff4d-62m67" [5029f9dc-6792-44d9-9296-ec5ab6d72274] Running
	I0717 00:47:04.094917   23443 system_pods.go:89] "coredns-7db6d8ff4d-xdlls" [4344b971-b979-42f8-8fa8-01f2d64bb51a] Running
	I0717 00:47:04.094921   23443 system_pods.go:89] "etcd-ha-029113" [10122569-9dc1-4680-8d11-aa7d4c719cec] Running
	I0717 00:47:04.094926   23443 system_pods.go:89] "etcd-ha-029113-m02" [a0f65752-ddcf-493d-bc0b-e4cb2ac8d635] Running
	I0717 00:47:04.094932   23443 system_pods.go:89] "kindnet-8xg7d" [a612c634-49ef-4357-9b36-f5cc6604bdd7] Running
	I0717 00:47:04.094936   23443 system_pods.go:89] "kindnet-k7vzq" [8198e4a4-080e-482a-a0b3-58e796bdd230] Running
	I0717 00:47:04.094939   23443 system_pods.go:89] "kube-apiserver-ha-029113" [167d337c-6406-4f80-8a60-aebdca26066b] Running
	I0717 00:47:04.094944   23443 system_pods.go:89] "kube-apiserver-ha-029113-m02" [d64aa0f0-e41f-4a5e-b4fe-48665061673e] Running
	I0717 00:47:04.094950   23443 system_pods.go:89] "kube-controller-manager-ha-029113" [8f1ee225-f6a3-4943-976a-9cc14607a654] Running
	I0717 00:47:04.094954   23443 system_pods.go:89] "kube-controller-manager-ha-029113-m02" [d180826c-b18e-49a7-8a1a-576c1a64fd51] Running
	I0717 00:47:04.094960   23443 system_pods.go:89] "kube-proxy-2wz5p" [285b947d-fa11-40fb-befa-1fa4451787d4] Running
	I0717 00:47:04.094965   23443 system_pods.go:89] "kube-proxy-hg2kp" [db9243f4-bcc0-406a-a8f2-ccdbc00f6341] Running
	I0717 00:47:04.094971   23443 system_pods.go:89] "kube-scheduler-ha-029113" [e3b5629d-5647-437e-a87c-0c91f2cd26d7] Running
	I0717 00:47:04.094975   23443 system_pods.go:89] "kube-scheduler-ha-029113-m02" [0f986464-8d17-4727-906b-4d8c58afbe5d] Running
	I0717 00:47:04.094982   23443 system_pods.go:89] "kube-vip-ha-029113" [985763eb-2a45-4820-a3db-e2af6d9291e0] Running
	I0717 00:47:04.094985   23443 system_pods.go:89] "kube-vip-ha-029113-m02" [0d64dace-cdb3-4abb-8d92-b205dc611777] Running
	I0717 00:47:04.094989   23443 system_pods.go:89] "storage-provisioner" [b9f04e5d-469e-4432-bd31-dbe772194f84] Running
	I0717 00:47:04.094994   23443 system_pods.go:126] duration metric: took 206.051848ms to wait for k8s-apps to be running ...
	I0717 00:47:04.095003   23443 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 00:47:04.095042   23443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:47:04.110570   23443 system_svc.go:56] duration metric: took 15.558256ms WaitForService to wait for kubelet
	I0717 00:47:04.110597   23443 kubeadm.go:582] duration metric: took 24.045945789s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
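The kubelet check above runs "sudo systemctl is-active --quiet service kubelet" inside the VM over SSH and only inspects the exit status. A local stand-in in Go (illustrative only; it does not SSH into the guest the way minikube's ssh_runner does):

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    // systemctl is-active --quiet exits 0 when the unit is active, non-zero otherwise.
    if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
        fmt.Println("kubelet is not active:", err)
        return
    }
    fmt.Println("kubelet is active")
}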
	I0717 00:47:04.110617   23443 node_conditions.go:102] verifying NodePressure condition ...
	I0717 00:47:04.286015   23443 request.go:629] Waited for 175.332019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes
	I0717 00:47:04.286074   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes
	I0717 00:47:04.286091   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:04.286098   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:04.286105   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:04.289782   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:04.290663   23443 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 00:47:04.290685   23443 node_conditions.go:123] node cpu capacity is 2
	I0717 00:47:04.290705   23443 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 00:47:04.290709   23443 node_conditions.go:123] node cpu capacity is 2
	I0717 00:47:04.290713   23443 node_conditions.go:105] duration metric: took 180.091395ms to run NodePressure ...
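The NodePressure step above reads each node's reported capacity (ephemeral storage and CPU) from the API. A hedged client-go sketch that prints the same two figures for every node (kubeconfig path is a placeholder):

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, n := range nodes.Items {
        storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
        cpu := n.Status.Capacity[corev1.ResourceCPU]
        fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
    }
}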
	I0717 00:47:04.290725   23443 start.go:241] waiting for startup goroutines ...
	I0717 00:47:04.290767   23443 start.go:255] writing updated cluster config ...
	I0717 00:47:04.292762   23443 out.go:177] 
	I0717 00:47:04.294297   23443 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:47:04.294405   23443 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/config.json ...
	I0717 00:47:04.296163   23443 out.go:177] * Starting "ha-029113-m03" control-plane node in "ha-029113" cluster
	I0717 00:47:04.297425   23443 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:47:04.297446   23443 cache.go:56] Caching tarball of preloaded images
	I0717 00:47:04.297538   23443 preload.go:172] Found /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 00:47:04.297550   23443 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:47:04.297634   23443 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/config.json ...
	I0717 00:47:04.297809   23443 start.go:360] acquireMachinesLock for ha-029113-m03: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 00:47:04.297851   23443 start.go:364] duration metric: took 25.027µs to acquireMachinesLock for "ha-029113-m03"
	I0717 00:47:04.297867   23443 start.go:93] Provisioning new machine with config: &{Name:ha-029113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-029113 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:47:04.297953   23443 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0717 00:47:04.299345   23443 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 00:47:04.299455   23443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:47:04.299497   23443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:47:04.314205   23443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46685
	I0717 00:47:04.314783   23443 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:47:04.315268   23443 main.go:141] libmachine: Using API Version  1
	I0717 00:47:04.315290   23443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:47:04.315618   23443 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:47:04.315823   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetMachineName
	I0717 00:47:04.315982   23443 main.go:141] libmachine: (ha-029113-m03) Calling .DriverName
	I0717 00:47:04.316142   23443 start.go:159] libmachine.API.Create for "ha-029113" (driver="kvm2")
	I0717 00:47:04.316175   23443 client.go:168] LocalClient.Create starting
	I0717 00:47:04.316220   23443 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem
	I0717 00:47:04.316260   23443 main.go:141] libmachine: Decoding PEM data...
	I0717 00:47:04.316282   23443 main.go:141] libmachine: Parsing certificate...
	I0717 00:47:04.316342   23443 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem
	I0717 00:47:04.316367   23443 main.go:141] libmachine: Decoding PEM data...
	I0717 00:47:04.316384   23443 main.go:141] libmachine: Parsing certificate...
	I0717 00:47:04.316409   23443 main.go:141] libmachine: Running pre-create checks...
	I0717 00:47:04.316420   23443 main.go:141] libmachine: (ha-029113-m03) Calling .PreCreateCheck
	I0717 00:47:04.316582   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetConfigRaw
	I0717 00:47:04.316987   23443 main.go:141] libmachine: Creating machine...
	I0717 00:47:04.317003   23443 main.go:141] libmachine: (ha-029113-m03) Calling .Create
	I0717 00:47:04.317147   23443 main.go:141] libmachine: (ha-029113-m03) Creating KVM machine...
	I0717 00:47:04.318346   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found existing default KVM network
	I0717 00:47:04.318500   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found existing private KVM network mk-ha-029113
	I0717 00:47:04.318661   23443 main.go:141] libmachine: (ha-029113-m03) Setting up store path in /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03 ...
	I0717 00:47:04.318684   23443 main.go:141] libmachine: (ha-029113-m03) Building disk image from file:///home/jenkins/minikube-integration/19264-3908/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 00:47:04.318744   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:04.318656   24534 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 00:47:04.318858   23443 main.go:141] libmachine: (ha-029113-m03) Downloading /home/jenkins/minikube-integration/19264-3908/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19264-3908/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 00:47:04.534160   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:04.534009   24534 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/id_rsa...
	I0717 00:47:04.597323   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:04.597226   24534 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/ha-029113-m03.rawdisk...
	I0717 00:47:04.597353   23443 main.go:141] libmachine: (ha-029113-m03) DBG | Writing magic tar header
	I0717 00:47:04.597367   23443 main.go:141] libmachine: (ha-029113-m03) DBG | Writing SSH key tar header
	I0717 00:47:04.597378   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:04.597333   24534 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03 ...
	I0717 00:47:04.597451   23443 main.go:141] libmachine: (ha-029113-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03
	I0717 00:47:04.597482   23443 main.go:141] libmachine: (ha-029113-m03) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03 (perms=drwx------)
	I0717 00:47:04.597494   23443 main.go:141] libmachine: (ha-029113-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube/machines
	I0717 00:47:04.597527   23443 main.go:141] libmachine: (ha-029113-m03) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube/machines (perms=drwxr-xr-x)
	I0717 00:47:04.597552   23443 main.go:141] libmachine: (ha-029113-m03) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube (perms=drwxr-xr-x)
	I0717 00:47:04.597566   23443 main.go:141] libmachine: (ha-029113-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 00:47:04.597589   23443 main.go:141] libmachine: (ha-029113-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908
	I0717 00:47:04.597602   23443 main.go:141] libmachine: (ha-029113-m03) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908 (perms=drwxrwxr-x)
	I0717 00:47:04.597613   23443 main.go:141] libmachine: (ha-029113-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 00:47:04.597630   23443 main.go:141] libmachine: (ha-029113-m03) DBG | Checking permissions on dir: /home/jenkins
	I0717 00:47:04.597643   23443 main.go:141] libmachine: (ha-029113-m03) DBG | Checking permissions on dir: /home
	I0717 00:47:04.597664   23443 main.go:141] libmachine: (ha-029113-m03) DBG | Skipping /home - not owner
	I0717 00:47:04.597677   23443 main.go:141] libmachine: (ha-029113-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 00:47:04.597688   23443 main.go:141] libmachine: (ha-029113-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 00:47:04.597706   23443 main.go:141] libmachine: (ha-029113-m03) Creating domain...
	I0717 00:47:04.598438   23443 main.go:141] libmachine: (ha-029113-m03) define libvirt domain using xml: 
	I0717 00:47:04.598457   23443 main.go:141] libmachine: (ha-029113-m03) <domain type='kvm'>
	I0717 00:47:04.598494   23443 main.go:141] libmachine: (ha-029113-m03)   <name>ha-029113-m03</name>
	I0717 00:47:04.598522   23443 main.go:141] libmachine: (ha-029113-m03)   <memory unit='MiB'>2200</memory>
	I0717 00:47:04.598531   23443 main.go:141] libmachine: (ha-029113-m03)   <vcpu>2</vcpu>
	I0717 00:47:04.598537   23443 main.go:141] libmachine: (ha-029113-m03)   <features>
	I0717 00:47:04.598545   23443 main.go:141] libmachine: (ha-029113-m03)     <acpi/>
	I0717 00:47:04.598570   23443 main.go:141] libmachine: (ha-029113-m03)     <apic/>
	I0717 00:47:04.598595   23443 main.go:141] libmachine: (ha-029113-m03)     <pae/>
	I0717 00:47:04.598617   23443 main.go:141] libmachine: (ha-029113-m03)     
	I0717 00:47:04.598626   23443 main.go:141] libmachine: (ha-029113-m03)   </features>
	I0717 00:47:04.598638   23443 main.go:141] libmachine: (ha-029113-m03)   <cpu mode='host-passthrough'>
	I0717 00:47:04.598648   23443 main.go:141] libmachine: (ha-029113-m03)   
	I0717 00:47:04.598657   23443 main.go:141] libmachine: (ha-029113-m03)   </cpu>
	I0717 00:47:04.598668   23443 main.go:141] libmachine: (ha-029113-m03)   <os>
	I0717 00:47:04.598677   23443 main.go:141] libmachine: (ha-029113-m03)     <type>hvm</type>
	I0717 00:47:04.598695   23443 main.go:141] libmachine: (ha-029113-m03)     <boot dev='cdrom'/>
	I0717 00:47:04.598712   23443 main.go:141] libmachine: (ha-029113-m03)     <boot dev='hd'/>
	I0717 00:47:04.598726   23443 main.go:141] libmachine: (ha-029113-m03)     <bootmenu enable='no'/>
	I0717 00:47:04.598735   23443 main.go:141] libmachine: (ha-029113-m03)   </os>
	I0717 00:47:04.598744   23443 main.go:141] libmachine: (ha-029113-m03)   <devices>
	I0717 00:47:04.598752   23443 main.go:141] libmachine: (ha-029113-m03)     <disk type='file' device='cdrom'>
	I0717 00:47:04.598763   23443 main.go:141] libmachine: (ha-029113-m03)       <source file='/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/boot2docker.iso'/>
	I0717 00:47:04.598772   23443 main.go:141] libmachine: (ha-029113-m03)       <target dev='hdc' bus='scsi'/>
	I0717 00:47:04.598780   23443 main.go:141] libmachine: (ha-029113-m03)       <readonly/>
	I0717 00:47:04.598792   23443 main.go:141] libmachine: (ha-029113-m03)     </disk>
	I0717 00:47:04.598805   23443 main.go:141] libmachine: (ha-029113-m03)     <disk type='file' device='disk'>
	I0717 00:47:04.598817   23443 main.go:141] libmachine: (ha-029113-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 00:47:04.598834   23443 main.go:141] libmachine: (ha-029113-m03)       <source file='/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/ha-029113-m03.rawdisk'/>
	I0717 00:47:04.598844   23443 main.go:141] libmachine: (ha-029113-m03)       <target dev='hda' bus='virtio'/>
	I0717 00:47:04.598852   23443 main.go:141] libmachine: (ha-029113-m03)     </disk>
	I0717 00:47:04.598864   23443 main.go:141] libmachine: (ha-029113-m03)     <interface type='network'>
	I0717 00:47:04.598875   23443 main.go:141] libmachine: (ha-029113-m03)       <source network='mk-ha-029113'/>
	I0717 00:47:04.598885   23443 main.go:141] libmachine: (ha-029113-m03)       <model type='virtio'/>
	I0717 00:47:04.598898   23443 main.go:141] libmachine: (ha-029113-m03)     </interface>
	I0717 00:47:04.598914   23443 main.go:141] libmachine: (ha-029113-m03)     <interface type='network'>
	I0717 00:47:04.598925   23443 main.go:141] libmachine: (ha-029113-m03)       <source network='default'/>
	I0717 00:47:04.598932   23443 main.go:141] libmachine: (ha-029113-m03)       <model type='virtio'/>
	I0717 00:47:04.598939   23443 main.go:141] libmachine: (ha-029113-m03)     </interface>
	I0717 00:47:04.598945   23443 main.go:141] libmachine: (ha-029113-m03)     <serial type='pty'>
	I0717 00:47:04.598952   23443 main.go:141] libmachine: (ha-029113-m03)       <target port='0'/>
	I0717 00:47:04.598957   23443 main.go:141] libmachine: (ha-029113-m03)     </serial>
	I0717 00:47:04.598966   23443 main.go:141] libmachine: (ha-029113-m03)     <console type='pty'>
	I0717 00:47:04.598972   23443 main.go:141] libmachine: (ha-029113-m03)       <target type='serial' port='0'/>
	I0717 00:47:04.598977   23443 main.go:141] libmachine: (ha-029113-m03)     </console>
	I0717 00:47:04.598984   23443 main.go:141] libmachine: (ha-029113-m03)     <rng model='virtio'>
	I0717 00:47:04.598992   23443 main.go:141] libmachine: (ha-029113-m03)       <backend model='random'>/dev/random</backend>
	I0717 00:47:04.598997   23443 main.go:141] libmachine: (ha-029113-m03)     </rng>
	I0717 00:47:04.599003   23443 main.go:141] libmachine: (ha-029113-m03)     
	I0717 00:47:04.599008   23443 main.go:141] libmachine: (ha-029113-m03)     
	I0717 00:47:04.599012   23443 main.go:141] libmachine: (ha-029113-m03)   </devices>
	I0717 00:47:04.599018   23443 main.go:141] libmachine: (ha-029113-m03) </domain>
	I0717 00:47:04.599024   23443 main.go:141] libmachine: (ha-029113-m03) 
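The preceding lines are the libvirt <domain> XML for ha-029113-m03, logged one line at a time before the driver defines and boots the guest. A rough sketch of defining and starting such a domain with the libvirt-go bindings follows (the connection URI matches the KVMQemuURI recorded in the config above; the XML file name is hypothetical, and this is not the kvm2 driver's actual code; building it requires the libvirt development headers):

package main

import (
    "log"
    "os"

    "libvirt.org/go/libvirt"
)

func main() {
    // Connect to the same system URI recorded as KVMQemuURI above.
    conn, err := libvirt.NewConnect("qemu:///system")
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    // ha-029113-m03.xml is a hypothetical file holding the <domain> XML logged above.
    xml, err := os.ReadFile("ha-029113-m03.xml")
    if err != nil {
        log.Fatal(err)
    }

    // Define the persistent domain, then start it (roughly "virsh define" followed by "virsh start").
    dom, err := conn.DomainDefineXML(string(xml))
    if err != nil {
        log.Fatal(err)
    }
    defer dom.Free()

    if err := dom.Create(); err != nil {
        log.Fatal(err)
    }
    log.Println("domain ha-029113-m03 defined and started")
}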
	I0717 00:47:04.605647   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:d3:7c:43 in network default
	I0717 00:47:04.606213   23443 main.go:141] libmachine: (ha-029113-m03) Ensuring networks are active...
	I0717 00:47:04.606235   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:04.606899   23443 main.go:141] libmachine: (ha-029113-m03) Ensuring network default is active
	I0717 00:47:04.607158   23443 main.go:141] libmachine: (ha-029113-m03) Ensuring network mk-ha-029113 is active
	I0717 00:47:04.607510   23443 main.go:141] libmachine: (ha-029113-m03) Getting domain xml...
	I0717 00:47:04.608189   23443 main.go:141] libmachine: (ha-029113-m03) Creating domain...
	I0717 00:47:05.845798   23443 main.go:141] libmachine: (ha-029113-m03) Waiting to get IP...
	I0717 00:47:05.846661   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:05.847143   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find current IP address of domain ha-029113-m03 in network mk-ha-029113
	I0717 00:47:05.847176   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:05.847123   24534 retry.go:31] will retry after 298.775965ms: waiting for machine to come up
	I0717 00:47:06.147576   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:06.148074   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find current IP address of domain ha-029113-m03 in network mk-ha-029113
	I0717 00:47:06.148100   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:06.148030   24534 retry.go:31] will retry after 321.272545ms: waiting for machine to come up
	I0717 00:47:06.470416   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:06.470932   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find current IP address of domain ha-029113-m03 in network mk-ha-029113
	I0717 00:47:06.470967   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:06.470880   24534 retry.go:31] will retry after 313.273746ms: waiting for machine to come up
	I0717 00:47:06.785183   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:06.785593   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find current IP address of domain ha-029113-m03 in network mk-ha-029113
	I0717 00:47:06.785618   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:06.785553   24534 retry.go:31] will retry after 599.715441ms: waiting for machine to come up
	I0717 00:47:07.387438   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:07.387895   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find current IP address of domain ha-029113-m03 in network mk-ha-029113
	I0717 00:47:07.387922   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:07.387828   24534 retry.go:31] will retry after 617.925829ms: waiting for machine to come up
	I0717 00:47:08.007558   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:08.008055   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find current IP address of domain ha-029113-m03 in network mk-ha-029113
	I0717 00:47:08.008085   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:08.008016   24534 retry.go:31] will retry after 732.559545ms: waiting for machine to come up
	I0717 00:47:08.742239   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:08.742735   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find current IP address of domain ha-029113-m03 in network mk-ha-029113
	I0717 00:47:08.742763   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:08.742690   24534 retry.go:31] will retry after 953.977069ms: waiting for machine to come up
	I0717 00:47:09.697917   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:09.698323   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find current IP address of domain ha-029113-m03 in network mk-ha-029113
	I0717 00:47:09.698349   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:09.698272   24534 retry.go:31] will retry after 956.736439ms: waiting for machine to come up
	I0717 00:47:10.656643   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:10.657148   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find current IP address of domain ha-029113-m03 in network mk-ha-029113
	I0717 00:47:10.657182   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:10.657088   24534 retry.go:31] will retry after 1.749286774s: waiting for machine to come up
	I0717 00:47:12.407663   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:12.408103   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find current IP address of domain ha-029113-m03 in network mk-ha-029113
	I0717 00:47:12.408128   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:12.408055   24534 retry.go:31] will retry after 1.683433342s: waiting for machine to come up
	I0717 00:47:14.094008   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:14.094391   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find current IP address of domain ha-029113-m03 in network mk-ha-029113
	I0717 00:47:14.094412   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:14.094367   24534 retry.go:31] will retry after 2.783450641s: waiting for machine to come up
	I0717 00:47:16.879558   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:16.879975   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find current IP address of domain ha-029113-m03 in network mk-ha-029113
	I0717 00:47:16.879998   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:16.879938   24534 retry.go:31] will retry after 2.670963884s: waiting for machine to come up
	I0717 00:47:19.552112   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:19.552483   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find current IP address of domain ha-029113-m03 in network mk-ha-029113
	I0717 00:47:19.552508   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:19.552448   24534 retry.go:31] will retry after 3.996912103s: waiting for machine to come up
	I0717 00:47:23.551675   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:23.552163   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find current IP address of domain ha-029113-m03 in network mk-ha-029113
	I0717 00:47:23.552190   23443 main.go:141] libmachine: (ha-029113-m03) DBG | I0717 00:47:23.552121   24534 retry.go:31] will retry after 4.733416289s: waiting for machine to come up
	I0717 00:47:28.290235   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.290702   23443 main.go:141] libmachine: (ha-029113-m03) Found IP for machine: 192.168.39.100
	I0717 00:47:28.290720   23443 main.go:141] libmachine: (ha-029113-m03) Reserving static IP address...
	I0717 00:47:28.290734   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has current primary IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.291086   23443 main.go:141] libmachine: (ha-029113-m03) DBG | unable to find host DHCP lease matching {name: "ha-029113-m03", mac: "52:54:00:30:b5:1d", ip: "192.168.39.100"} in network mk-ha-029113
	I0717 00:47:28.361256   23443 main.go:141] libmachine: (ha-029113-m03) DBG | Getting to WaitForSSH function...
	I0717 00:47:28.361291   23443 main.go:141] libmachine: (ha-029113-m03) Reserved static IP address: 192.168.39.100
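The retry loop above is the usual "wait for DHCP" pattern: poll the mk-ha-029113 network's leases for the guest's MAC address with growing backoff until an address such as 192.168.39.100 appears. A hedged sketch of the same poll with the libvirt Go bindings (the 30-attempt cap and the backoff step are placeholders, not the driver's real values):

    package sketch

    import (
    	"fmt"
    	"time"

    	libvirt "libvirt.org/go/libvirt"
    )

    // waitForIP polls the network's DHCP leases until the domain's MAC
    // address shows up, as the retry.go loop in the log does.
    func waitForIP(conn *libvirt.Connect, netName, mac string) (string, error) {
    	nw, err := conn.LookupNetworkByName(netName)
    	if err != nil {
    		return "", err
    	}
    	defer nw.Free()

    	for attempt := 1; attempt <= 30; attempt++ {
    		leases, err := nw.GetDHCPLeases()
    		if err != nil {
    			return "", err
    		}
    		for _, l := range leases {
    			if l.Mac == mac && l.IPaddr != "" {
    				return l.IPaddr, nil
    			}
    		}
    		time.Sleep(time.Duration(attempt) * 300 * time.Millisecond)
    	}
    	return "", fmt.Errorf("no DHCP lease for %s in network %s", mac, netName)
    }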
	I0717 00:47:28.361309   23443 main.go:141] libmachine: (ha-029113-m03) Waiting for SSH to be available...
	I0717 00:47:28.363907   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.364272   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:minikube Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:28.364291   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.364496   23443 main.go:141] libmachine: (ha-029113-m03) DBG | Using SSH client type: external
	I0717 00:47:28.364543   23443 main.go:141] libmachine: (ha-029113-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/id_rsa (-rw-------)
	I0717 00:47:28.364574   23443 main.go:141] libmachine: (ha-029113-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 00:47:28.364591   23443 main.go:141] libmachine: (ha-029113-m03) DBG | About to run SSH command:
	I0717 00:47:28.364607   23443 main.go:141] libmachine: (ha-029113-m03) DBG | exit 0
	I0717 00:47:28.490532   23443 main.go:141] libmachine: (ha-029113-m03) DBG | SSH cmd err, output: <nil>: 
	I0717 00:47:28.490841   23443 main.go:141] libmachine: (ha-029113-m03) KVM machine creation complete!
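Machine creation is only declared complete once a throwaway "exit 0" succeeds over SSH with the non-interactive options shown a few lines up. A small sketch of that readiness probe via os/exec (only a subset of the options from the DBG line above, purely for illustration):

    package sketch

    import "os/exec"

    // sshReady runs a no-op command over ssh; a nil error means the guest's
    // sshd is up and accepting the key.
    func sshReady(ip, keyPath string) bool {
    	cmd := exec.Command("ssh",
    		"-F", "/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"docker@"+ip,
    		"exit", "0")
    	return cmd.Run() == nil
    }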
	I0717 00:47:28.491108   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetConfigRaw
	I0717 00:47:28.491707   23443 main.go:141] libmachine: (ha-029113-m03) Calling .DriverName
	I0717 00:47:28.491898   23443 main.go:141] libmachine: (ha-029113-m03) Calling .DriverName
	I0717 00:47:28.492104   23443 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 00:47:28.492120   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetState
	I0717 00:47:28.493332   23443 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 00:47:28.493349   23443 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 00:47:28.493363   23443 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 00:47:28.493372   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	I0717 00:47:28.495810   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.496236   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:28.496288   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.496385   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHPort
	I0717 00:47:28.496571   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:28.496733   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:28.496868   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHUsername
	I0717 00:47:28.497033   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:47:28.497243   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 00:47:28.497258   23443 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 00:47:28.601770   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:47:28.601794   23443 main.go:141] libmachine: Detecting the provisioner...
	I0717 00:47:28.601803   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	I0717 00:47:28.604492   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.604842   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:28.604870   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.605008   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHPort
	I0717 00:47:28.605205   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:28.605349   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:28.605465   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHUsername
	I0717 00:47:28.605623   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:47:28.605786   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 00:47:28.605798   23443 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 00:47:28.711191   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 00:47:28.711241   23443 main.go:141] libmachine: found compatible host: buildroot
	I0717 00:47:28.711248   23443 main.go:141] libmachine: Provisioning with buildroot...
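The provisioner is chosen by running cat /etc/os-release on the guest and matching the result (Buildroot here). A sketch of the parsing half of that step, assuming the raw file contents have already been fetched over SSH:

    package sketch

    import (
    	"bufio"
    	"strings"
    )

    // parseOSRelease turns the KEY=value lines shown in the log
    // (NAME=Buildroot, VERSION_ID=2023.02.9, ...) into a map.
    func parseOSRelease(text string) map[string]string {
    	out := map[string]string{}
    	sc := bufio.NewScanner(strings.NewReader(text))
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if line == "" || !strings.Contains(line, "=") {
    			continue
    		}
    		kv := strings.SplitN(line, "=", 2)
    		out[kv[0]] = strings.Trim(kv[1], `"`)
    	}
    	return out
    }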
	I0717 00:47:28.711255   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetMachineName
	I0717 00:47:28.711535   23443 buildroot.go:166] provisioning hostname "ha-029113-m03"
	I0717 00:47:28.711564   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetMachineName
	I0717 00:47:28.711760   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	I0717 00:47:28.714290   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.714691   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:28.714727   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.714899   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHPort
	I0717 00:47:28.715064   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:28.715231   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:28.715397   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHUsername
	I0717 00:47:28.715566   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:47:28.715763   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 00:47:28.715781   23443 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-029113-m03 && echo "ha-029113-m03" | sudo tee /etc/hostname
	I0717 00:47:28.834032   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-029113-m03
	
	I0717 00:47:28.834059   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	I0717 00:47:28.836653   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.837041   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:28.837073   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.837227   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHPort
	I0717 00:47:28.837410   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:28.837571   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:28.837717   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHUsername
	I0717 00:47:28.837862   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:47:28.838032   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 00:47:28.838048   23443 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-029113-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-029113-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-029113-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 00:47:28.947084   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:47:28.947117   23443 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 00:47:28.947131   23443 buildroot.go:174] setting up certificates
	I0717 00:47:28.947140   23443 provision.go:84] configureAuth start
	I0717 00:47:28.947149   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetMachineName
	I0717 00:47:28.947410   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetIP
	I0717 00:47:28.949894   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.950247   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:28.950271   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.950391   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	I0717 00:47:28.952445   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.952785   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:28.952811   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:28.952913   23443 provision.go:143] copyHostCerts
	I0717 00:47:28.952943   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 00:47:28.952982   23443 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 00:47:28.952994   23443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 00:47:28.953074   23443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 00:47:28.953163   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 00:47:28.953187   23443 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 00:47:28.953194   23443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 00:47:28.953233   23443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 00:47:28.953293   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 00:47:28.953315   23443 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 00:47:28.953324   23443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 00:47:28.953356   23443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 00:47:28.953426   23443 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.ha-029113-m03 san=[127.0.0.1 192.168.39.100 ha-029113-m03 localhost minikube]
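The server certificate for this node is issued by the local minikube CA with exactly the SANs listed above (loopback, the node IP, and the hostname aliases). A hedged sketch of issuing such a cert with crypto/x509; loading the CA pair from ca.pem/ca-key.pem is elided, and the three-year validity is an assumption rather than the value minikube uses:

    package sketch

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // newServerCert signs a server certificate with the CA, carrying the IP
    // and DNS SANs from the provision.go line above.
    func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-029113-m03"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.100")},
    		DNSNames:     []string{"ha-029113-m03", "localhost", "minikube"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	return der, key, err
    }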
	I0717 00:47:29.050507   23443 provision.go:177] copyRemoteCerts
	I0717 00:47:29.050585   23443 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:47:29.050613   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	I0717 00:47:29.053185   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.053533   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:29.053557   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.053726   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHPort
	I0717 00:47:29.053901   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:29.054057   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHUsername
	I0717 00:47:29.054204   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/id_rsa Username:docker}
	I0717 00:47:29.138459   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 00:47:29.138522   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 00:47:29.162967   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 00:47:29.163027   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 00:47:29.186653   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 00:47:29.186730   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 00:47:29.209623   23443 provision.go:87] duration metric: took 262.471359ms to configureAuth
	I0717 00:47:29.209654   23443 buildroot.go:189] setting minikube options for container-runtime
	I0717 00:47:29.209857   23443 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:47:29.209928   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	I0717 00:47:29.212618   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.212936   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:29.212963   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.213136   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHPort
	I0717 00:47:29.213327   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:29.213487   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:29.213633   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHUsername
	I0717 00:47:29.213780   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:47:29.213971   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 00:47:29.213993   23443 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 00:47:29.481929   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 00:47:29.481956   23443 main.go:141] libmachine: Checking connection to Docker...
	I0717 00:47:29.481968   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetURL
	I0717 00:47:29.483185   23443 main.go:141] libmachine: (ha-029113-m03) DBG | Using libvirt version 6000000
	I0717 00:47:29.486435   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.486892   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:29.486923   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.487078   23443 main.go:141] libmachine: Docker is up and running!
	I0717 00:47:29.487088   23443 main.go:141] libmachine: Reticulating splines...
	I0717 00:47:29.487094   23443 client.go:171] duration metric: took 25.170910202s to LocalClient.Create
	I0717 00:47:29.487115   23443 start.go:167] duration metric: took 25.170975292s to libmachine.API.Create "ha-029113"
	I0717 00:47:29.487126   23443 start.go:293] postStartSetup for "ha-029113-m03" (driver="kvm2")
	I0717 00:47:29.487139   23443 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 00:47:29.487161   23443 main.go:141] libmachine: (ha-029113-m03) Calling .DriverName
	I0717 00:47:29.487395   23443 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 00:47:29.487431   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	I0717 00:47:29.489957   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.490360   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:29.490385   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.490534   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHPort
	I0717 00:47:29.490730   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:29.490865   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHUsername
	I0717 00:47:29.490995   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/id_rsa Username:docker}
	I0717 00:47:29.577160   23443 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 00:47:29.581443   23443 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 00:47:29.581469   23443 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 00:47:29.581544   23443 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 00:47:29.581652   23443 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 00:47:29.581666   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> /etc/ssl/certs/112592.pem
	I0717 00:47:29.581789   23443 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 00:47:29.591763   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 00:47:29.615917   23443 start.go:296] duration metric: took 128.779151ms for postStartSetup
	I0717 00:47:29.615972   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetConfigRaw
	I0717 00:47:29.616577   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetIP
	I0717 00:47:29.619288   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.619666   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:29.619691   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.619973   23443 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/config.json ...
	I0717 00:47:29.620190   23443 start.go:128] duration metric: took 25.32222776s to createHost
	I0717 00:47:29.620213   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	I0717 00:47:29.622028   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.622319   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:29.622342   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.622518   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHPort
	I0717 00:47:29.622708   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:29.622870   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:29.622999   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHUsername
	I0717 00:47:29.623167   23443 main.go:141] libmachine: Using SSH client type: native
	I0717 00:47:29.623330   23443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 00:47:29.623341   23443 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 00:47:29.727385   23443 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721177249.704991971
	
	I0717 00:47:29.727403   23443 fix.go:216] guest clock: 1721177249.704991971
	I0717 00:47:29.727411   23443 fix.go:229] Guest: 2024-07-17 00:47:29.704991971 +0000 UTC Remote: 2024-07-17 00:47:29.620202081 +0000 UTC m=+240.022477234 (delta=84.78989ms)
	I0717 00:47:29.727429   23443 fix.go:200] guest clock delta is within tolerance: 84.78989ms
	I0717 00:47:29.727436   23443 start.go:83] releasing machines lock for "ha-029113-m03", held for 25.429576063s
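The fix.go lines above compare the guest's clock (read with date +%s.%N over SSH) against the host clock and proceed because the roughly 85ms delta is within tolerance. A small sketch of that comparison; the tolerance constant minikube actually uses is not shown in the log, so it is left as a parameter here:

    package sketch

    import (
    	"math"
    	"strconv"
    	"time"
    )

    // clockDelta parses the guest's "date +%s.%N" output (e.g.
    // "1721177249.704991971") and reports whether the drift from the host
    // clock stays within tol.
    func clockDelta(guestOut string, tol time.Duration) (time.Duration, bool, error) {
    	secs, err := strconv.ParseFloat(guestOut, 64)
    	if err != nil {
    		return 0, false, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := time.Since(guest)
    	within := math.Abs(float64(delta)) <= float64(tol)
    	return delta, within, nil
    }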
	I0717 00:47:29.727468   23443 main.go:141] libmachine: (ha-029113-m03) Calling .DriverName
	I0717 00:47:29.727789   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetIP
	I0717 00:47:29.730318   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.730741   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:29.730768   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.732706   23443 out.go:177] * Found network options:
	I0717 00:47:29.734087   23443 out.go:177]   - NO_PROXY=192.168.39.95,192.168.39.166
	W0717 00:47:29.735301   23443 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 00:47:29.735332   23443 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 00:47:29.735348   23443 main.go:141] libmachine: (ha-029113-m03) Calling .DriverName
	I0717 00:47:29.735851   23443 main.go:141] libmachine: (ha-029113-m03) Calling .DriverName
	I0717 00:47:29.736040   23443 main.go:141] libmachine: (ha-029113-m03) Calling .DriverName
	I0717 00:47:29.736114   23443 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 00:47:29.736153   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	W0717 00:47:29.736251   23443 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 00:47:29.736274   23443 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 00:47:29.736336   23443 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 00:47:29.736352   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	I0717 00:47:29.738604   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.738817   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.739046   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:29.739070   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.739188   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHPort
	I0717 00:47:29.739311   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:29.739333   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:29.739376   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:29.739498   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHPort
	I0717 00:47:29.739580   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHUsername
	I0717 00:47:29.739647   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:47:29.739726   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/id_rsa Username:docker}
	I0717 00:47:29.739770   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHUsername
	I0717 00:47:29.739875   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/id_rsa Username:docker}
	I0717 00:47:29.970998   23443 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 00:47:29.977841   23443 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 00:47:29.977909   23443 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:47:29.994601   23443 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 00:47:29.994622   23443 start.go:495] detecting cgroup driver to use...
	I0717 00:47:29.994700   23443 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 00:47:30.011004   23443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 00:47:30.024819   23443 docker.go:217] disabling cri-docker service (if available) ...
	I0717 00:47:30.024876   23443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 00:47:30.038454   23443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 00:47:30.052342   23443 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 00:47:30.168997   23443 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 00:47:30.336485   23443 docker.go:233] disabling docker service ...
	I0717 00:47:30.336553   23443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 00:47:30.351582   23443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 00:47:30.364131   23443 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 00:47:30.484186   23443 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 00:47:30.608256   23443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 00:47:30.622449   23443 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 00:47:30.641842   23443 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 00:47:30.641903   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:47:30.652041   23443 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 00:47:30.652098   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:47:30.661887   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:47:30.671785   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:47:30.681613   23443 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 00:47:30.692189   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:47:30.702117   23443 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:47:30.718565   23443 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
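The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, move conmon into the pod cgroup, and open unprivileged low ports. A minimal in-memory sketch of the first two edits; the real provisioner shells out to sed over SSH exactly as logged, so this only illustrates the intended transformation:

    package sketch

    import "regexp"

    // applyCRIOOverrides mirrors the pause_image and cgroup_manager sed
    // edits from the log on the text of 02-crio.conf.
    func applyCRIOOverrides(conf string) string {
    	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
    	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	return conf
    }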
	I0717 00:47:30.728992   23443 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 00:47:30.740257   23443 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 00:47:30.740319   23443 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 00:47:30.754046   23443 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 00:47:30.766384   23443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:47:30.887467   23443 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 00:47:31.028626   23443 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 00:47:31.028709   23443 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 00:47:31.034326   23443 start.go:563] Will wait 60s for crictl version
	I0717 00:47:31.034380   23443 ssh_runner.go:195] Run: which crictl
	I0717 00:47:31.038352   23443 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 00:47:31.081500   23443 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 00:47:31.081582   23443 ssh_runner.go:195] Run: crio --version
	I0717 00:47:31.112415   23443 ssh_runner.go:195] Run: crio --version
	I0717 00:47:31.143120   23443 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 00:47:31.144618   23443 out.go:177]   - env NO_PROXY=192.168.39.95
	I0717 00:47:31.146006   23443 out.go:177]   - env NO_PROXY=192.168.39.95,192.168.39.166
	I0717 00:47:31.147439   23443 main.go:141] libmachine: (ha-029113-m03) Calling .GetIP
	I0717 00:47:31.149878   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:31.150222   23443 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:47:31.150242   23443 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:47:31.150430   23443 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 00:47:31.155114   23443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:47:31.167558   23443 mustload.go:65] Loading cluster: ha-029113
	I0717 00:47:31.167744   23443 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:47:31.167996   23443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:47:31.168025   23443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:47:31.183282   23443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44999
	I0717 00:47:31.183707   23443 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:47:31.184126   23443 main.go:141] libmachine: Using API Version  1
	I0717 00:47:31.184140   23443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:47:31.184450   23443 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:47:31.184627   23443 main.go:141] libmachine: (ha-029113) Calling .GetState
	I0717 00:47:31.186188   23443 host.go:66] Checking if "ha-029113" exists ...
	I0717 00:47:31.186503   23443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:47:31.186534   23443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:47:31.200721   23443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40861
	I0717 00:47:31.201125   23443 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:47:31.201501   23443 main.go:141] libmachine: Using API Version  1
	I0717 00:47:31.201522   23443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:47:31.201800   23443 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:47:31.201960   23443 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:47:31.202110   23443 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113 for IP: 192.168.39.100
	I0717 00:47:31.202122   23443 certs.go:194] generating shared ca certs ...
	I0717 00:47:31.202137   23443 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:47:31.202283   23443 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 00:47:31.202327   23443 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 00:47:31.202339   23443 certs.go:256] generating profile certs ...
	I0717 00:47:31.202432   23443 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.key
	I0717 00:47:31.202464   23443 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.365995e4
	I0717 00:47:31.202483   23443 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.365995e4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.95 192.168.39.166 192.168.39.100 192.168.39.254]
	I0717 00:47:31.392167   23443 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.365995e4 ...
	I0717 00:47:31.392197   23443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.365995e4: {Name:mk26a48a79f686a9e1a613e3ea8d71075ef49720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:47:31.392355   23443 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.365995e4 ...
	I0717 00:47:31.392368   23443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.365995e4: {Name:mk416a12e41b00c2f47831d1494d44e481bc26ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:47:31.392446   23443 certs.go:381] copying /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.365995e4 -> /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt
	I0717 00:47:31.392577   23443 certs.go:385] copying /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.365995e4 -> /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key
	I0717 00:47:31.392696   23443 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.key
	I0717 00:47:31.392710   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 00:47:31.392722   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 00:47:31.392740   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 00:47:31.392753   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 00:47:31.392764   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 00:47:31.392776   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 00:47:31.392789   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 00:47:31.392800   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 00:47:31.392843   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 00:47:31.392868   23443 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 00:47:31.392877   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 00:47:31.392898   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 00:47:31.392918   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 00:47:31.392938   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 00:47:31.392972   23443 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 00:47:31.392995   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem -> /usr/share/ca-certificates/11259.pem
	I0717 00:47:31.393009   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> /usr/share/ca-certificates/112592.pem
	I0717 00:47:31.393021   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:47:31.393047   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:47:31.395968   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:47:31.396353   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:47:31.396374   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:47:31.396544   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:47:31.396748   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:47:31.396892   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:47:31.397011   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:47:31.467015   23443 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0717 00:47:31.472607   23443 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0717 00:47:31.484769   23443 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0717 00:47:31.488879   23443 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0717 00:47:31.500546   23443 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0717 00:47:31.504755   23443 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0717 00:47:31.521100   23443 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0717 00:47:31.529522   23443 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0717 00:47:31.544067   23443 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0717 00:47:31.548468   23443 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0717 00:47:31.559844   23443 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0717 00:47:31.564482   23443 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0717 00:47:31.575658   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 00:47:31.603663   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 00:47:31.629107   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 00:47:31.652484   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 00:47:31.677959   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0717 00:47:31.700927   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 00:47:31.728471   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 00:47:31.752347   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 00:47:31.783749   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 00:47:31.809217   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 00:47:31.833961   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 00:47:31.856783   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0717 00:47:31.872709   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0717 00:47:31.888653   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0717 00:47:31.904254   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0717 00:47:31.920382   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0717 00:47:31.937130   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0717 00:47:31.956467   23443 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0717 00:47:31.975090   23443 ssh_runner.go:195] Run: openssl version
	I0717 00:47:31.981626   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 00:47:31.993439   23443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 00:47:31.997968   23443 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 00:47:31.998015   23443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 00:47:32.003758   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 00:47:32.014696   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 00:47:32.026310   23443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 00:47:32.030909   23443 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 00:47:32.030964   23443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 00:47:32.036404   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 00:47:32.047633   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 00:47:32.059069   23443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:47:32.063777   23443 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:47:32.063824   23443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:47:32.069764   23443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
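
The steps above copy each CA into /usr/share/ca-certificates and then link it from /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem), which is how the node's trust store resolves issuers. Below is a minimal Go sketch of that hash-and-symlink step; it shells out to openssl exactly as the log does, but the helper name and paths are illustrative and not minikube's own code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of certPath and creates
// <certsDir>/<hash>.0 pointing at it, mirroring the
// "openssl x509 -hash -noout" + "ln -fs" pair seen in the log.
// (Writing into /etc/ssl/certs requires root.)
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f behaviour: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}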
	I0717 00:47:32.080802   23443 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 00:47:32.084841   23443 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 00:47:32.084892   23443 kubeadm.go:934] updating node {m03 192.168.39.100 8443 v1.30.2 crio true true} ...
	I0717 00:47:32.084991   23443 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-029113-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-029113 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 00:47:32.085021   23443 kube-vip.go:115] generating kube-vip config ...
	I0717 00:47:32.085058   23443 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 00:47:32.102888   23443 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 00:47:32.102959   23443 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
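
The manifest above is the static pod dropped into /etc/kubernetes/manifests so kube-vip can hold the control-plane VIP (192.168.39.254) and load-balance port 8443 across control-plane nodes. A small Go sketch of rendering such a manifest from a template with the VIP and port as parameters follows; the template text is a simplified stand-in, not minikube's actual kube-vip template.

package main

import (
	"os"
	"text/template"
)

// vipParams carries the values that differ between clusters in this
// simplified manifest: the virtual IP and the API server port.
type vipParams struct {
	Address string
	Port    string
}

const podTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - name: address
      value: "{{ .Address }}"
    - name: port
      value: "{{ .Port }}"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(podTmpl))
	// Render to stdout; in the log the rendered bytes are instead copied to
	// /etc/kubernetes/manifests/kube-vip.yaml on the node.
	_ = t.Execute(os.Stdout, vipParams{Address: "192.168.39.254", Port: "8443"})
}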
	I0717 00:47:32.103026   23443 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 00:47:32.113632   23443 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0717 00:47:32.113689   23443 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0717 00:47:32.123871   23443 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256
	I0717 00:47:32.123871   23443 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256
	I0717 00:47:32.123897   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 00:47:32.123886   23443 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0717 00:47:32.123931   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 00:47:32.123940   23443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:47:32.124004   23443 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 00:47:32.124006   23443 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 00:47:32.144573   23443 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 00:47:32.144592   23443 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0717 00:47:32.144615   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0717 00:47:32.144670   23443 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 00:47:32.144668   23443 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0717 00:47:32.144728   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0717 00:47:32.176688   23443 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0717 00:47:32.176741   23443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
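
The binaries above are fetched against the published .sha256 files named in the log ("checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256") before being copied into /var/lib/minikube/binaries/v1.30.2/. A sketch of that verification step in Go, comparing a local file's SHA-256 against the upstream checksum file; the URL and paths are taken from the log, while the helper itself is illustrative.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// verifySHA256 downloads the expected digest (a plain hex string, optionally
// followed by a filename) and compares it with the digest of localPath.
func verifySHA256(localPath, checksumURL string) error {
	resp, err := http.Get(checksumURL)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	fields := strings.Fields(string(body))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file at %s", checksumURL)
	}
	want := fields[0]

	f, err := os.Open(localPath)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != want {
		return fmt.Errorf("checksum mismatch for %s: got %s, want %s", localPath, got, want)
	}
	return nil
}

func main() {
	err := verifySHA256(
		"/var/lib/minikube/binaries/v1.30.2/kubeadm",
		"https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256",
	)
	fmt.Println(err)
}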
	I0717 00:47:33.056190   23443 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0717 00:47:33.065912   23443 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0717 00:47:33.082651   23443 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 00:47:33.102312   23443 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0717 00:47:33.121683   23443 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 00:47:33.125753   23443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:47:33.138778   23443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:47:33.274974   23443 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:47:33.293487   23443 host.go:66] Checking if "ha-029113" exists ...
	I0717 00:47:33.293852   23443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:47:33.293891   23443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:47:33.311122   23443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38121
	I0717 00:47:33.311526   23443 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:47:33.311975   23443 main.go:141] libmachine: Using API Version  1
	I0717 00:47:33.312002   23443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:47:33.312300   23443 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:47:33.312467   23443 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:47:33.312581   23443 start.go:317] joinCluster: &{Name:ha-029113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-029113 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:47:33.312738   23443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0717 00:47:33.312757   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:47:33.315444   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:47:33.315846   23443 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:47:33.315876   23443 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:47:33.316004   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:47:33.316178   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:47:33.316334   23443 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:47:33.316464   23443 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:47:33.479247   23443 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:47:33.479311   23443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mff843.ivzjp3mgt4opug4n --discovery-token-ca-cert-hash sha256:c106fa53fe39228066a4d6d0a3d1523262a277fcc4b4de2aef480ed92843f134 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-029113-m03 --control-plane --apiserver-advertise-address=192.168.39.100 --apiserver-bind-port=8443"
	I0717 00:47:56.957281   23443 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mff843.ivzjp3mgt4opug4n --discovery-token-ca-cert-hash sha256:c106fa53fe39228066a4d6d0a3d1523262a277fcc4b4de2aef480ed92843f134 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-029113-m03 --control-plane --apiserver-advertise-address=192.168.39.100 --apiserver-bind-port=8443": (23.477950581s)
	I0717 00:47:56.957310   23443 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0717 00:47:57.410535   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-029113-m03 minikube.k8s.io/updated_at=2024_07_17T00_47_57_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185 minikube.k8s.io/name=ha-029113 minikube.k8s.io/primary=false
	I0717 00:47:57.567951   23443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-029113-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0717 00:47:57.677442   23443 start.go:319] duration metric: took 24.364856951s to joinCluster
	I0717 00:47:57.677512   23443 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:47:57.677937   23443 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:47:57.679198   23443 out.go:177] * Verifying Kubernetes components...
	I0717 00:47:57.680680   23443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:47:57.902672   23443 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:47:57.932057   23443 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 00:47:57.932409   23443 kapi.go:59] client config for ha-029113: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.crt", KeyFile:"/home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.key", CAFile:"/home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d01f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0717 00:47:57.932505   23443 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.95:8443
	I0717 00:47:57.932785   23443 node_ready.go:35] waiting up to 6m0s for node "ha-029113-m03" to be "Ready" ...
	I0717 00:47:57.932873   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:47:57.932884   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:57.932894   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:57.932904   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:57.936371   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:58.433543   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:47:58.433563   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:58.433572   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:58.433577   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:58.437308   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:58.933289   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:47:58.933308   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:58.933316   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:58.933320   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:58.936300   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:47:59.433871   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:47:59.433895   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:59.433906   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:59.433912   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:59.437298   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:59.933021   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:47:59.933049   23443 round_trippers.go:469] Request Headers:
	I0717 00:47:59.933060   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:47:59.933066   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:47:59.936551   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:47:59.937335   23443 node_ready.go:53] node "ha-029113-m03" has status "Ready":"False"
	I0717 00:48:00.433793   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:00.433814   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:00.433822   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:00.433827   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:00.436719   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:00.932935   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:00.932955   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:00.932962   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:00.932968   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:00.936258   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:01.433143   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:01.433162   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:01.433170   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:01.433176   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:01.436031   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:01.933567   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:01.933591   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:01.933603   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:01.933609   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:01.936349   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:02.433663   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:02.433684   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:02.433691   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:02.433697   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:02.437496   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:02.438053   23443 node_ready.go:53] node "ha-029113-m03" has status "Ready":"False"
	I0717 00:48:02.933029   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:02.933049   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:02.933057   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:02.933062   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:02.936712   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:03.433955   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:03.433976   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:03.433991   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:03.433995   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:03.496498   23443 round_trippers.go:574] Response Status: 200 OK in 62 milliseconds
	I0717 00:48:03.933003   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:03.933019   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:03.933026   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:03.933030   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:03.941564   23443 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0717 00:48:04.433937   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:04.433965   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:04.433977   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:04.433989   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:04.436860   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:04.933678   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:04.933698   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:04.933705   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:04.933710   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:04.936447   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:04.936962   23443 node_ready.go:53] node "ha-029113-m03" has status "Ready":"False"
	I0717 00:48:05.433337   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:05.433357   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:05.433365   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:05.433369   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:05.436408   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:05.933306   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:05.933326   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:05.933334   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:05.933337   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:05.936312   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:06.433048   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:06.433073   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:06.433084   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:06.433088   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:06.436215   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:06.933678   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:06.933701   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:06.933709   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:06.933715   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:06.936895   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:06.937759   23443 node_ready.go:53] node "ha-029113-m03" has status "Ready":"False"
	I0717 00:48:07.433553   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:07.433588   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:07.433598   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:07.433603   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:07.436915   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:07.934007   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:07.934032   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:07.934043   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:07.934048   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:07.936894   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:08.433276   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:08.433306   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:08.433317   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:08.433322   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:08.436327   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:08.932987   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:08.933013   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:08.933025   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:08.933030   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:08.936170   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:09.433451   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:09.433471   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:09.433479   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:09.433482   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:09.436367   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:09.436924   23443 node_ready.go:53] node "ha-029113-m03" has status "Ready":"False"
	I0717 00:48:09.933014   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:09.933035   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:09.933042   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:09.933046   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:09.936948   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:10.433011   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:10.433044   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:10.433052   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:10.433057   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:10.435847   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:10.933063   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:10.933083   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:10.933090   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:10.933095   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:10.936799   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:11.433940   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:11.433965   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:11.433974   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:11.433984   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:11.437030   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:11.437574   23443 node_ready.go:53] node "ha-029113-m03" has status "Ready":"False"
	I0717 00:48:11.933479   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:11.933498   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:11.933507   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:11.933511   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:11.936963   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:12.433677   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:12.433698   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:12.433706   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:12.433708   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:12.436924   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:12.933778   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:12.933800   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:12.933806   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:12.933811   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:12.936870   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:13.433423   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:13.433448   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:13.433458   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:13.433463   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:13.436764   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:13.437713   23443 node_ready.go:53] node "ha-029113-m03" has status "Ready":"False"
	I0717 00:48:13.932967   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:13.932994   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:13.933002   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:13.933005   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:13.935973   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:14.433679   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:14.433706   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:14.433718   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:14.433724   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:14.436962   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:14.933360   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:14.933382   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:14.933393   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:14.933400   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:14.936409   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:15.433574   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:15.433595   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:15.433602   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:15.433607   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:15.436779   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:15.933878   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:15.933903   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:15.933913   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:15.933927   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:15.937273   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:15.938065   23443 node_ready.go:49] node "ha-029113-m03" has status "Ready":"True"
	I0717 00:48:15.938081   23443 node_ready.go:38] duration metric: took 18.00527454s for node "ha-029113-m03" to be "Ready" ...
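
The loop above polls GET /api/v1/nodes/ha-029113-m03 roughly twice a second until the node reports the Ready condition. A stripped-down Go version of that check using only the standard library is sketched below; it assumes an already-authenticated HTTP client (the real one presents the client.crt/client.key shown earlier), so TLS setup is omitted.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// nodeStatus mirrors only the part of the Node object the readiness check
// needs: status.conditions[].type and .status.
type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// waitNodeReady polls the API server until the named node has Ready=True
// or the timeout expires.
func waitNodeReady(client *http.Client, apiServer, node string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(apiServer + "/api/v1/nodes/" + node)
		if err == nil {
			var ns nodeStatus
			if json.NewDecoder(resp.Body).Decode(&ns) == nil {
				for _, c := range ns.Status.Conditions {
					if c.Type == "Ready" && c.Status == "True" {
						resp.Body.Close()
						return nil
					}
				}
			}
			resp.Body.Close()
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready within %s", node, timeout)
}

func main() {
	// An authenticated client built from the profile's client certificate
	// would be used here; http.DefaultClient merely stands in for the sketch.
	err := waitNodeReady(http.DefaultClient, "https://192.168.39.95:8443", "ha-029113-m03", 6*time.Minute)
	fmt.Println(err)
}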
	I0717 00:48:15.938088   23443 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 00:48:15.938152   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods
	I0717 00:48:15.938163   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:15.938170   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:15.938174   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:15.946231   23443 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0717 00:48:15.953641   23443 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-62m67" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:15.953724   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-62m67
	I0717 00:48:15.953740   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:15.953749   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:15.953756   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:15.956529   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:15.957165   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:48:15.957180   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:15.957188   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:15.957192   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:15.959884   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:15.960571   23443 pod_ready.go:92] pod "coredns-7db6d8ff4d-62m67" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:15.960588   23443 pod_ready.go:81] duration metric: took 6.922784ms for pod "coredns-7db6d8ff4d-62m67" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:15.960597   23443 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xdlls" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:15.960646   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xdlls
	I0717 00:48:15.960652   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:15.960660   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:15.960667   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:15.963898   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:15.964669   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:48:15.964687   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:15.964696   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:15.964700   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:15.967035   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:15.967677   23443 pod_ready.go:92] pod "coredns-7db6d8ff4d-xdlls" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:15.967697   23443 pod_ready.go:81] duration metric: took 7.091028ms for pod "coredns-7db6d8ff4d-xdlls" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:15.967709   23443 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:15.967769   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/etcd-ha-029113
	I0717 00:48:15.967779   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:15.967786   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:15.967790   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:15.970615   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:15.971077   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:48:15.971090   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:15.971095   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:15.971099   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:15.973869   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:15.974732   23443 pod_ready.go:92] pod "etcd-ha-029113" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:15.974748   23443 pod_ready.go:81] duration metric: took 7.032362ms for pod "etcd-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:15.974757   23443 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:15.974806   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/etcd-ha-029113-m02
	I0717 00:48:15.974813   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:15.974820   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:15.974824   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:15.978355   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:15.979508   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:48:15.979523   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:15.979533   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:15.979539   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:15.983040   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:15.983610   23443 pod_ready.go:92] pod "etcd-ha-029113-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:15.983628   23443 pod_ready.go:81] duration metric: took 8.864021ms for pod "etcd-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:15.983641   23443 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-029113-m03" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:16.134856   23443 request.go:629] Waited for 151.156525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/etcd-ha-029113-m03
	I0717 00:48:16.134906   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/etcd-ha-029113-m03
	I0717 00:48:16.134910   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:16.134918   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:16.134922   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:16.138241   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:16.334752   23443 request.go:629] Waited for 195.779503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:16.334831   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:16.334841   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:16.334852   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:16.334861   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:16.338029   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:16.338660   23443 pod_ready.go:92] pod "etcd-ha-029113-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:16.338700   23443 pod_ready.go:81] duration metric: took 355.052268ms for pod "etcd-ha-029113-m03" in "kube-system" namespace to be "Ready" ...
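
The "Waited ... due to client-side throttling, not priority and fairness" lines mean the Kubernetes client's own QPS limiter, not the server, delayed the request. A tiny Go sketch of the same idea with a token-bucket limiter from golang.org/x/time/rate; the 5 QPS / burst 10 figures are illustrative defaults and are not read from the log.

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Allow roughly 5 requests per second with a burst of 10, the general
	// shape of the client-side throttling the log reports.
	limiter := rate.NewLimiter(rate.Limit(5), 10)
	ctx := context.Background()

	for i := 0; i < 20; i++ {
		start := time.Now()
		if err := limiter.Wait(ctx); err != nil { // blocks while the bucket is empty
			fmt.Println("limiter:", err)
			return
		}
		if waited := time.Since(start); waited > 50*time.Millisecond {
			fmt.Printf("request %d waited %s due to client-side throttling\n", i, waited)
		}
		// the actual API request would be issued here
	}
}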
	I0717 00:48:16.338727   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:16.534754   23443 request.go:629] Waited for 195.96079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-029113
	I0717 00:48:16.534812   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-029113
	I0717 00:48:16.534827   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:16.534837   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:16.534841   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:16.538043   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:16.734454   23443 request.go:629] Waited for 195.445196ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:48:16.734510   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:48:16.734515   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:16.734524   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:16.734527   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:16.737294   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:16.737771   23443 pod_ready.go:92] pod "kube-apiserver-ha-029113" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:16.737788   23443 pod_ready.go:81] duration metric: took 399.053607ms for pod "kube-apiserver-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:16.737799   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:16.934908   23443 request.go:629] Waited for 197.024735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-029113-m02
	I0717 00:48:16.934979   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-029113-m02
	I0717 00:48:16.934990   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:16.935001   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:16.935013   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:16.938584   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:17.134573   23443 request.go:629] Waited for 195.314524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:48:17.134637   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:48:17.134643   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:17.134650   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:17.134653   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:17.137787   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:17.138238   23443 pod_ready.go:92] pod "kube-apiserver-ha-029113-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:17.138256   23443 pod_ready.go:81] duration metric: took 400.449501ms for pod "kube-apiserver-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:17.138264   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-029113-m03" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:17.334784   23443 request.go:629] Waited for 196.459661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-029113-m03
	I0717 00:48:17.334846   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-029113-m03
	I0717 00:48:17.334853   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:17.334865   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:17.334873   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:17.338260   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:17.534348   23443 request.go:629] Waited for 195.283689ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:17.534394   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:17.534399   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:17.534406   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:17.534410   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:17.538851   23443 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:48:17.540642   23443 pod_ready.go:92] pod "kube-apiserver-ha-029113-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:17.540666   23443 pod_ready.go:81] duration metric: took 402.39493ms for pod "kube-apiserver-ha-029113-m03" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:17.540680   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:17.733930   23443 request.go:629] Waited for 193.162359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-029113
	I0717 00:48:17.733981   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-029113
	I0717 00:48:17.733986   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:17.733995   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:17.734000   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:17.737429   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:17.934571   23443 request.go:629] Waited for 196.349148ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:48:17.934634   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:48:17.934642   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:17.934653   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:17.934660   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:17.937910   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:17.938631   23443 pod_ready.go:92] pod "kube-controller-manager-ha-029113" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:17.938651   23443 pod_ready.go:81] duration metric: took 397.960924ms for pod "kube-controller-manager-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:17.938663   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:18.134744   23443 request.go:629] Waited for 196.013809ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-029113-m02
	I0717 00:48:18.134819   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-029113-m02
	I0717 00:48:18.134828   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:18.134836   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:18.134843   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:18.137845   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:18.334771   23443 request.go:629] Waited for 196.387557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:48:18.334820   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:48:18.334825   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:18.334833   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:18.334836   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:18.337818   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:18.338412   23443 pod_ready.go:92] pod "kube-controller-manager-ha-029113-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:18.338427   23443 pod_ready.go:81] duration metric: took 399.756138ms for pod "kube-controller-manager-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:18.338436   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-029113-m03" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:18.534522   23443 request.go:629] Waited for 196.008108ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-029113-m03
	I0717 00:48:18.534608   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-029113-m03
	I0717 00:48:18.534619   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:18.534630   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:18.534641   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:18.538673   23443 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:48:18.734948   23443 request.go:629] Waited for 195.373322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:18.735011   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:18.735016   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:18.735023   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:18.735028   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:18.738034   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:18.738747   23443 pod_ready.go:92] pod "kube-controller-manager-ha-029113-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:18.738767   23443 pod_ready.go:81] duration metric: took 400.324386ms for pod "kube-controller-manager-ha-029113-m03" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:18.738781   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2wz5p" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:18.934691   23443 request.go:629] Waited for 195.853895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2wz5p
	I0717 00:48:18.934774   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2wz5p
	I0717 00:48:18.934785   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:18.934795   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:18.934801   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:18.940740   23443 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 00:48:19.134703   23443 request.go:629] Waited for 193.293844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:48:19.134771   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:48:19.134777   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:19.134789   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:19.134797   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:19.137684   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:19.138254   23443 pod_ready.go:92] pod "kube-proxy-2wz5p" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:19.138271   23443 pod_ready.go:81] duration metric: took 399.483256ms for pod "kube-proxy-2wz5p" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:19.138285   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hg2kp" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:19.334778   23443 request.go:629] Waited for 196.413518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg2kp
	I0717 00:48:19.334827   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg2kp
	I0717 00:48:19.334834   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:19.334845   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:19.334852   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:19.337998   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:19.534911   23443 request.go:629] Waited for 196.20071ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:48:19.534980   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:48:19.534993   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:19.535001   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:19.535006   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:19.538570   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:19.539013   23443 pod_ready.go:92] pod "kube-proxy-hg2kp" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:19.539030   23443 pod_ready.go:81] duration metric: took 400.733974ms for pod "kube-proxy-hg2kp" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:19.539042   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pfdt9" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:19.734427   23443 request.go:629] Waited for 195.31365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pfdt9
	I0717 00:48:19.734520   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pfdt9
	I0717 00:48:19.734530   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:19.734541   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:19.734565   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:19.737680   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:19.934615   23443 request.go:629] Waited for 196.257151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:19.934694   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:19.934703   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:19.934710   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:19.934717   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:19.937593   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:19.938204   23443 pod_ready.go:92] pod "kube-proxy-pfdt9" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:19.938223   23443 pod_ready.go:81] duration metric: took 399.17404ms for pod "kube-proxy-pfdt9" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:19.938234   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:20.134299   23443 request.go:629] Waited for 196.005753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-029113
	I0717 00:48:20.134363   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-029113
	I0717 00:48:20.134370   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:20.134379   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:20.134390   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:20.137348   23443 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:48:20.334243   23443 request.go:629] Waited for 196.346653ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:48:20.334302   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113
	I0717 00:48:20.334306   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:20.334313   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:20.334319   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:20.339195   23443 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:48:20.339879   23443 pod_ready.go:92] pod "kube-scheduler-ha-029113" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:20.339896   23443 pod_ready.go:81] duration metric: took 401.652936ms for pod "kube-scheduler-ha-029113" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:20.339909   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:20.533935   23443 request.go:629] Waited for 193.946219ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-029113-m02
	I0717 00:48:20.533986   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-029113-m02
	I0717 00:48:20.533993   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:20.534003   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:20.534008   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:20.537862   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:20.734575   23443 request.go:629] Waited for 196.172224ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:48:20.734623   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m02
	I0717 00:48:20.734628   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:20.734635   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:20.734640   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:20.737654   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:20.738134   23443 pod_ready.go:92] pod "kube-scheduler-ha-029113-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:20.738149   23443 pod_ready.go:81] duration metric: took 398.233343ms for pod "kube-scheduler-ha-029113-m02" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:20.738158   23443 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-029113-m03" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:20.934255   23443 request.go:629] Waited for 196.021247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-029113-m03
	I0717 00:48:20.934308   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-029113-m03
	I0717 00:48:20.934313   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:20.934321   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:20.934325   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:20.937565   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:21.134455   23443 request.go:629] Waited for 196.219116ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:21.134502   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes/ha-029113-m03
	I0717 00:48:21.134507   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:21.134514   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:21.134517   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:21.137844   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:21.138357   23443 pod_ready.go:92] pod "kube-scheduler-ha-029113-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 00:48:21.138372   23443 pod_ready.go:81] duration metric: took 400.207669ms for pod "kube-scheduler-ha-029113-m03" in "kube-system" namespace to be "Ready" ...
	I0717 00:48:21.138383   23443 pod_ready.go:38] duration metric: took 5.200283607s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 00:48:21.138400   23443 api_server.go:52] waiting for apiserver process to appear ...
	I0717 00:48:21.138452   23443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:48:21.154541   23443 api_server.go:72] duration metric: took 23.476994283s to wait for apiserver process to appear ...
	I0717 00:48:21.154580   23443 api_server.go:88] waiting for apiserver healthz status ...
	I0717 00:48:21.154600   23443 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8443/healthz ...
	I0717 00:48:21.160502   23443 api_server.go:279] https://192.168.39.95:8443/healthz returned 200:
	ok
	I0717 00:48:21.160577   23443 round_trippers.go:463] GET https://192.168.39.95:8443/version
	I0717 00:48:21.160589   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:21.160599   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:21.160608   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:21.161473   23443 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 00:48:21.161539   23443 api_server.go:141] control plane version: v1.30.2
	I0717 00:48:21.161556   23443 api_server.go:131] duration metric: took 6.970001ms to wait for apiserver health ...
	I0717 00:48:21.161563   23443 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 00:48:21.334734   23443 request.go:629] Waited for 173.026013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods
	I0717 00:48:21.334795   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods
	I0717 00:48:21.334803   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:21.334813   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:21.334823   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:21.341100   23443 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:48:21.348589   23443 system_pods.go:59] 24 kube-system pods found
	I0717 00:48:21.348614   23443 system_pods.go:61] "coredns-7db6d8ff4d-62m67" [5029f9dc-6792-44d9-9296-ec5ab6d72274] Running
	I0717 00:48:21.348620   23443 system_pods.go:61] "coredns-7db6d8ff4d-xdlls" [4344b971-b979-42f8-8fa8-01f2d64bb51a] Running
	I0717 00:48:21.348625   23443 system_pods.go:61] "etcd-ha-029113" [10122569-9dc1-4680-8d11-aa7d4c719cec] Running
	I0717 00:48:21.348632   23443 system_pods.go:61] "etcd-ha-029113-m02" [a0f65752-ddcf-493d-bc0b-e4cb2ac8d635] Running
	I0717 00:48:21.348637   23443 system_pods.go:61] "etcd-ha-029113-m03" [9afc47a1-ab83-4976-bd8b-d40aa6360f2d] Running
	I0717 00:48:21.348643   23443 system_pods.go:61] "kindnet-8xg7d" [a612c634-49ef-4357-9b36-f5cc6604bdd7] Running
	I0717 00:48:21.348648   23443 system_pods.go:61] "kindnet-k2jgh" [8a8e5ffe-9541-4736-9584-b49727b4753e] Running
	I0717 00:48:21.348654   23443 system_pods.go:61] "kindnet-k7vzq" [8198e4a4-080e-482a-a0b3-58e796bdd230] Running
	I0717 00:48:21.348659   23443 system_pods.go:61] "kube-apiserver-ha-029113" [167d337c-6406-4f80-8a60-aebdca26066b] Running
	I0717 00:48:21.348668   23443 system_pods.go:61] "kube-apiserver-ha-029113-m02" [d64aa0f0-e41f-4a5e-b4fe-48665061673e] Running
	I0717 00:48:21.348673   23443 system_pods.go:61] "kube-apiserver-ha-029113-m03" [0b4ea48e-60dc-44ed-8d5d-1159f866bc24] Running
	I0717 00:48:21.348684   23443 system_pods.go:61] "kube-controller-manager-ha-029113" [8f1ee225-f6a3-4943-976a-9cc14607a654] Running
	I0717 00:48:21.348692   23443 system_pods.go:61] "kube-controller-manager-ha-029113-m02" [d180826c-b18e-49a7-8a1a-576c1a64fd51] Running
	I0717 00:48:21.348698   23443 system_pods.go:61] "kube-controller-manager-ha-029113-m03" [993c477b-441b-46a1-85b8-c8ba74df2f80] Running
	I0717 00:48:21.348706   23443 system_pods.go:61] "kube-proxy-2wz5p" [285b947d-fa11-40fb-befa-1fa4451787d4] Running
	I0717 00:48:21.348712   23443 system_pods.go:61] "kube-proxy-hg2kp" [db9243f4-bcc0-406a-a8f2-ccdbc00f6341] Running
	I0717 00:48:21.348719   23443 system_pods.go:61] "kube-proxy-pfdt9" [d5f82192-14de-46c6-b3f4-38d34b9e828a] Running
	I0717 00:48:21.348724   23443 system_pods.go:61] "kube-scheduler-ha-029113" [e3b5629d-5647-437e-a87c-0c91f2cd26d7] Running
	I0717 00:48:21.348729   23443 system_pods.go:61] "kube-scheduler-ha-029113-m02" [0f986464-8d17-4727-906b-4d8c58afbe5d] Running
	I0717 00:48:21.348734   23443 system_pods.go:61] "kube-scheduler-ha-029113-m03" [8a322ad0-c9fa-4586-9051-5b18efa5a9c0] Running
	I0717 00:48:21.348741   23443 system_pods.go:61] "kube-vip-ha-029113" [985763eb-2a45-4820-a3db-e2af6d9291e0] Running
	I0717 00:48:21.348746   23443 system_pods.go:61] "kube-vip-ha-029113-m02" [0d64dace-cdb3-4abb-8d92-b205dc611777] Running
	I0717 00:48:21.348750   23443 system_pods.go:61] "kube-vip-ha-029113-m03" [ca077479-311a-4e1a-b143-55678a21f744] Running
	I0717 00:48:21.348757   23443 system_pods.go:61] "storage-provisioner" [b9f04e5d-469e-4432-bd31-dbe772194f84] Running
	I0717 00:48:21.348765   23443 system_pods.go:74] duration metric: took 187.193375ms to wait for pod list to return data ...
	I0717 00:48:21.348778   23443 default_sa.go:34] waiting for default service account to be created ...
	I0717 00:48:21.534176   23443 request.go:629] Waited for 185.334842ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/default/serviceaccounts
	I0717 00:48:21.534269   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/default/serviceaccounts
	I0717 00:48:21.534278   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:21.534285   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:21.534289   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:21.538916   23443 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:48:21.539036   23443 default_sa.go:45] found service account: "default"
	I0717 00:48:21.539052   23443 default_sa.go:55] duration metric: took 190.266774ms for default service account to be created ...
	I0717 00:48:21.539063   23443 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 00:48:21.734616   23443 request.go:629] Waited for 195.483278ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods
	I0717 00:48:21.734687   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/namespaces/kube-system/pods
	I0717 00:48:21.734695   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:21.734702   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:21.734707   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:21.743367   23443 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0717 00:48:21.749735   23443 system_pods.go:86] 24 kube-system pods found
	I0717 00:48:21.749760   23443 system_pods.go:89] "coredns-7db6d8ff4d-62m67" [5029f9dc-6792-44d9-9296-ec5ab6d72274] Running
	I0717 00:48:21.749767   23443 system_pods.go:89] "coredns-7db6d8ff4d-xdlls" [4344b971-b979-42f8-8fa8-01f2d64bb51a] Running
	I0717 00:48:21.749771   23443 system_pods.go:89] "etcd-ha-029113" [10122569-9dc1-4680-8d11-aa7d4c719cec] Running
	I0717 00:48:21.749777   23443 system_pods.go:89] "etcd-ha-029113-m02" [a0f65752-ddcf-493d-bc0b-e4cb2ac8d635] Running
	I0717 00:48:21.749782   23443 system_pods.go:89] "etcd-ha-029113-m03" [9afc47a1-ab83-4976-bd8b-d40aa6360f2d] Running
	I0717 00:48:21.749788   23443 system_pods.go:89] "kindnet-8xg7d" [a612c634-49ef-4357-9b36-f5cc6604bdd7] Running
	I0717 00:48:21.749794   23443 system_pods.go:89] "kindnet-k2jgh" [8a8e5ffe-9541-4736-9584-b49727b4753e] Running
	I0717 00:48:21.749800   23443 system_pods.go:89] "kindnet-k7vzq" [8198e4a4-080e-482a-a0b3-58e796bdd230] Running
	I0717 00:48:21.749809   23443 system_pods.go:89] "kube-apiserver-ha-029113" [167d337c-6406-4f80-8a60-aebdca26066b] Running
	I0717 00:48:21.749815   23443 system_pods.go:89] "kube-apiserver-ha-029113-m02" [d64aa0f0-e41f-4a5e-b4fe-48665061673e] Running
	I0717 00:48:21.749825   23443 system_pods.go:89] "kube-apiserver-ha-029113-m03" [0b4ea48e-60dc-44ed-8d5d-1159f866bc24] Running
	I0717 00:48:21.749830   23443 system_pods.go:89] "kube-controller-manager-ha-029113" [8f1ee225-f6a3-4943-976a-9cc14607a654] Running
	I0717 00:48:21.749835   23443 system_pods.go:89] "kube-controller-manager-ha-029113-m02" [d180826c-b18e-49a7-8a1a-576c1a64fd51] Running
	I0717 00:48:21.749841   23443 system_pods.go:89] "kube-controller-manager-ha-029113-m03" [993c477b-441b-46a1-85b8-c8ba74df2f80] Running
	I0717 00:48:21.749845   23443 system_pods.go:89] "kube-proxy-2wz5p" [285b947d-fa11-40fb-befa-1fa4451787d4] Running
	I0717 00:48:21.749852   23443 system_pods.go:89] "kube-proxy-hg2kp" [db9243f4-bcc0-406a-a8f2-ccdbc00f6341] Running
	I0717 00:48:21.749856   23443 system_pods.go:89] "kube-proxy-pfdt9" [d5f82192-14de-46c6-b3f4-38d34b9e828a] Running
	I0717 00:48:21.749861   23443 system_pods.go:89] "kube-scheduler-ha-029113" [e3b5629d-5647-437e-a87c-0c91f2cd26d7] Running
	I0717 00:48:21.749866   23443 system_pods.go:89] "kube-scheduler-ha-029113-m02" [0f986464-8d17-4727-906b-4d8c58afbe5d] Running
	I0717 00:48:21.749871   23443 system_pods.go:89] "kube-scheduler-ha-029113-m03" [8a322ad0-c9fa-4586-9051-5b18efa5a9c0] Running
	I0717 00:48:21.749876   23443 system_pods.go:89] "kube-vip-ha-029113" [985763eb-2a45-4820-a3db-e2af6d9291e0] Running
	I0717 00:48:21.749881   23443 system_pods.go:89] "kube-vip-ha-029113-m02" [0d64dace-cdb3-4abb-8d92-b205dc611777] Running
	I0717 00:48:21.749886   23443 system_pods.go:89] "kube-vip-ha-029113-m03" [ca077479-311a-4e1a-b143-55678a21f744] Running
	I0717 00:48:21.749894   23443 system_pods.go:89] "storage-provisioner" [b9f04e5d-469e-4432-bd31-dbe772194f84] Running
	I0717 00:48:21.749905   23443 system_pods.go:126] duration metric: took 210.833721ms to wait for k8s-apps to be running ...
	I0717 00:48:21.749918   23443 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 00:48:21.749962   23443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:48:21.764295   23443 system_svc.go:56] duration metric: took 14.372456ms WaitForService to wait for kubelet
	I0717 00:48:21.764316   23443 kubeadm.go:582] duration metric: took 24.086772769s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:48:21.764331   23443 node_conditions.go:102] verifying NodePressure condition ...
	I0717 00:48:21.934745   23443 request.go:629] Waited for 170.341169ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.95:8443/api/v1/nodes
	I0717 00:48:21.934808   23443 round_trippers.go:463] GET https://192.168.39.95:8443/api/v1/nodes
	I0717 00:48:21.934815   23443 round_trippers.go:469] Request Headers:
	I0717 00:48:21.934826   23443 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:48:21.934834   23443 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 00:48:21.938182   23443 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:48:21.939223   23443 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 00:48:21.939242   23443 node_conditions.go:123] node cpu capacity is 2
	I0717 00:48:21.939252   23443 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 00:48:21.939256   23443 node_conditions.go:123] node cpu capacity is 2
	I0717 00:48:21.939262   23443 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 00:48:21.939265   23443 node_conditions.go:123] node cpu capacity is 2
	I0717 00:48:21.939269   23443 node_conditions.go:105] duration metric: took 174.93377ms to run NodePressure ...
	I0717 00:48:21.939279   23443 start.go:241] waiting for startup goroutines ...
	I0717 00:48:21.939298   23443 start.go:255] writing updated cluster config ...
	I0717 00:48:21.939565   23443 ssh_runner.go:195] Run: rm -f paused
	I0717 00:48:21.989260   23443 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 00:48:21.991460   23443 out.go:177] * Done! kubectl is now configured to use "ha-029113" cluster and "default" namespace by default
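	The pod_ready.go polling recorded above GETs each control-plane pod and its node, waits out client-side throttling, and checks the pod's Ready condition before moving to the next one. As a rough, hedged sketch only (this is not minikube's actual implementation; the kubeconfig path, namespace, pod name, and timings below are illustrative), the same readiness check can be expressed with client-go:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True,
	// mirroring the `has status "Ready":"True"` check seen in the log.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Illustrative assumption: load the default kubeconfig from $HOME/.kube/config.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// Poll for up to 6 minutes, roughly matching the "waiting up to 6m0s" lines above.
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(
				context.TODO(), "kube-scheduler-ha-029113", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(400 * time.Millisecond) // crude backoff for illustration only
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}

	The ~400ms gaps between requests in the log come from client-side throttling in the Kubernetes client, not from an explicit sleep; the sketch above uses a fixed sleep purely to keep the example self-contained.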
	
	
	==> CRI-O <==
	Jul 17 00:53:01 ha-029113 crio[675]: time="2024-07-17 00:53:01.608524722Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721177581608500234,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15668e37-d9da-4f73-b6be-cdda3d5e0856 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:53:01 ha-029113 crio[675]: time="2024-07-17 00:53:01.609039800Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c4235f3-626c-44d1-9413-68c3f1c46d85 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:53:01 ha-029113 crio[675]: time="2024-07-17 00:53:01.609225752Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c4235f3-626c-44d1-9413-68c3f1c46d85 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:53:01 ha-029113 crio[675]: time="2024-07-17 00:53:01.609471751Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf4870ffc6ba7cf7b50ae09cfb1b393746f7a5152c53089614e9b07b30aee219,PodSandboxId:a45c7f17109af295fd8afd8d3d7ac1b2d54517db5ca206ff393a5b9cc8c7cadb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721177307303856239,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pf5xn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c25795f2-3205-495b-83b1-e3afd79b87b5,},Annotations:map[string]string{io.kubernetes.container.hash: 78a43c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ba7b13f793c3a06d8fbfe335c9983449c36750149da746239b5d30f43f9e80d,PodSandboxId:50f310bc4d109b91ab1bdfc5a369ea5b936bcdfab1c3c3e494b33bc91202cdf9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721177077814345806,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f04e5d-469e-4432-bd31-dbe772194f84,},Annotations:map[string]string{io.kubernetes.container.hash: af7d5ead,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3b600dde6603420f9edd85561f798c02ecea709368a2a6b922a3e812198caa,PodSandboxId:f8a5889bb1d2bc6fc103eb2de48515a8de335a3970107dc7e37e0c22d7a122a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177077742709925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdlls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4344b971-b979-42f8-8fa8-01f2d64bb51a,},Annotations:map[string]string{io.kubernetes.container.hash: 92aed468,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:708012203a1a0c288ad05e4f57829807ab3a232d3085326c1ba8459d2a3964fc,PodSandboxId:9323719ef65477afea1f0946bd6a2c1e18bb115e22dd9402ce719955b37f0450,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177077777210066,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62m67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5029f9dc-67
92-44d9-9296-ec5ab6d72274,},Annotations:map[string]string{io.kubernetes.container.hash: b4292bc8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ce89e605287179bb1c9551273ee5355577c744e43f9e9eee59d6939e47680e,PodSandboxId:a30304f1d93beec21b943e8d0bdda82fd14ec8ef078fe74dc48282c421a8da13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CO
NTAINER_RUNNING,CreatedAt:1721177065689728330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8xg7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a612c634-49ef-4357-9b36-f5cc6604bdd7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf5f516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b3cbbc53732e70a6bb66be9909790395a4901c4730eea9ec8b9349fed80909,PodSandboxId:9fc93d7901e92fce9ee6a04a1927e3489dc770d57d9fd38dac899f3ab057cf7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:172117706
0624674491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg2kp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9243f4-bcc0-406a-a8f2-ccdbc00f6341,},Annotations:map[string]string{io.kubernetes.container.hash: bf5cb09d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b360fa2cf3fbbfbc9242efc6915192666c7dd6e4307f909d862b874fbaab69,PodSandboxId:bc3706b14039859f793eac4e8624e7818234de71ca60cf085454b03586bf9d2a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17211770460
87746328,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7182fcebcafa632e8046b9a13a66b9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535a2b743f28f9760f7341aeb36dd3506cf26bdec15966f0498cbd52b73c8d11,PodSandboxId:9dca109899a3fdfc61bdea7b81459635bde9c71670c2636ee3f9b16cec6a2bcb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721177041034207282,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b30623d3b177a2ad33eea05d973c32ae,},Annotations:map[string]string{io.kubernetes.container.hash: 237bdd65,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af1a2d97ac6f8ec07b35a9b5d767af997cb8270ce3b4e681cba5557ed7119c85,PodSandboxId:5eb5a4397caa30b268143518f9f2a1880ae38ebd81aa2e5c40e7883a1c9c49b3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721177040986161344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfaf2cf7acfa42b80c634ab40c7656b2,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:425a9fc13cce841865da7956f1f32a375623313826ea8da126557d78f754b28c,PodSandboxId:42a4c594e59973a4c2533efc093434e5573531710ce6a4ecdc3cc9ed647d8158,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721177041005671448,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7b6e57bdff8504832bd80612dfbf7e,},Annotations:map[string]string{io.kubernetes.container.hash: c985e752,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad50613626477643d3e0c2f0a01a20d0cc987aa6e58083bbf3993d41f97acd0,PodSandboxId:ab2a446417d15b6e99d85160ca9be5ee4aa3f76cd5af1ddc919812f3b8e304ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721177040980708332,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72ab88d686ab6cc9d0420aba5d01a15,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1c4235f3-626c-44d1-9413-68c3f1c46d85 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:53:01 ha-029113 crio[675]: time="2024-07-17 00:53:01.646536496Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=298ba576-fcdc-4bdc-be3f-bc2fcd342489 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:53:01 ha-029113 crio[675]: time="2024-07-17 00:53:01.646625181Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=298ba576-fcdc-4bdc-be3f-bc2fcd342489 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:53:01 ha-029113 crio[675]: time="2024-07-17 00:53:01.647774097Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5fc944ca-eb1c-4922-971d-3bd8ba267533 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:53:01 ha-029113 crio[675]: time="2024-07-17 00:53:01.648339923Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721177581648317033,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5fc944ca-eb1c-4922-971d-3bd8ba267533 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:53:01 ha-029113 crio[675]: time="2024-07-17 00:53:01.649275935Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bba2ec79-099d-4f57-a792-2fea0a836a93 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:53:01 ha-029113 crio[675]: time="2024-07-17 00:53:01.649347697Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bba2ec79-099d-4f57-a792-2fea0a836a93 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:53:01 ha-029113 crio[675]: time="2024-07-17 00:53:01.649603302Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf4870ffc6ba7cf7b50ae09cfb1b393746f7a5152c53089614e9b07b30aee219,PodSandboxId:a45c7f17109af295fd8afd8d3d7ac1b2d54517db5ca206ff393a5b9cc8c7cadb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721177307303856239,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pf5xn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c25795f2-3205-495b-83b1-e3afd79b87b5,},Annotations:map[string]string{io.kubernetes.container.hash: 78a43c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ba7b13f793c3a06d8fbfe335c9983449c36750149da746239b5d30f43f9e80d,PodSandboxId:50f310bc4d109b91ab1bdfc5a369ea5b936bcdfab1c3c3e494b33bc91202cdf9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721177077814345806,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f04e5d-469e-4432-bd31-dbe772194f84,},Annotations:map[string]string{io.kubernetes.container.hash: af7d5ead,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3b600dde6603420f9edd85561f798c02ecea709368a2a6b922a3e812198caa,PodSandboxId:f8a5889bb1d2bc6fc103eb2de48515a8de335a3970107dc7e37e0c22d7a122a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177077742709925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdlls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4344b971-b979-42f8-8fa8-01f2d64bb51a,},Annotations:map[string]string{io.kubernetes.container.hash: 92aed468,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:708012203a1a0c288ad05e4f57829807ab3a232d3085326c1ba8459d2a3964fc,PodSandboxId:9323719ef65477afea1f0946bd6a2c1e18bb115e22dd9402ce719955b37f0450,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177077777210066,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62m67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5029f9dc-67
92-44d9-9296-ec5ab6d72274,},Annotations:map[string]string{io.kubernetes.container.hash: b4292bc8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ce89e605287179bb1c9551273ee5355577c744e43f9e9eee59d6939e47680e,PodSandboxId:a30304f1d93beec21b943e8d0bdda82fd14ec8ef078fe74dc48282c421a8da13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CO
NTAINER_RUNNING,CreatedAt:1721177065689728330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8xg7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a612c634-49ef-4357-9b36-f5cc6604bdd7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf5f516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b3cbbc53732e70a6bb66be9909790395a4901c4730eea9ec8b9349fed80909,PodSandboxId:9fc93d7901e92fce9ee6a04a1927e3489dc770d57d9fd38dac899f3ab057cf7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:172117706
0624674491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg2kp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9243f4-bcc0-406a-a8f2-ccdbc00f6341,},Annotations:map[string]string{io.kubernetes.container.hash: bf5cb09d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b360fa2cf3fbbfbc9242efc6915192666c7dd6e4307f909d862b874fbaab69,PodSandboxId:bc3706b14039859f793eac4e8624e7818234de71ca60cf085454b03586bf9d2a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17211770460
87746328,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7182fcebcafa632e8046b9a13a66b9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535a2b743f28f9760f7341aeb36dd3506cf26bdec15966f0498cbd52b73c8d11,PodSandboxId:9dca109899a3fdfc61bdea7b81459635bde9c71670c2636ee3f9b16cec6a2bcb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721177041034207282,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b30623d3b177a2ad33eea05d973c32ae,},Annotations:map[string]string{io.kubernetes.container.hash: 237bdd65,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af1a2d97ac6f8ec07b35a9b5d767af997cb8270ce3b4e681cba5557ed7119c85,PodSandboxId:5eb5a4397caa30b268143518f9f2a1880ae38ebd81aa2e5c40e7883a1c9c49b3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721177040986161344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfaf2cf7acfa42b80c634ab40c7656b2,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:425a9fc13cce841865da7956f1f32a375623313826ea8da126557d78f754b28c,PodSandboxId:42a4c594e59973a4c2533efc093434e5573531710ce6a4ecdc3cc9ed647d8158,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721177041005671448,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7b6e57bdff8504832bd80612dfbf7e,},Annotations:map[string]string{io.kubernetes.container.hash: c985e752,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad50613626477643d3e0c2f0a01a20d0cc987aa6e58083bbf3993d41f97acd0,PodSandboxId:ab2a446417d15b6e99d85160ca9be5ee4aa3f76cd5af1ddc919812f3b8e304ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721177040980708332,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72ab88d686ab6cc9d0420aba5d01a15,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bba2ec79-099d-4f57-a792-2fea0a836a93 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:53:01 ha-029113 crio[675]: time="2024-07-17 00:53:01.687687490Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aa88c1e1-bc5c-44f1-bb4f-efbf9ce67693 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:53:01 ha-029113 crio[675]: time="2024-07-17 00:53:01.687774413Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aa88c1e1-bc5c-44f1-bb4f-efbf9ce67693 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:53:01 ha-029113 crio[675]: time="2024-07-17 00:53:01.688879432Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0d5bcb85-62d9-4992-931c-839d7e25ff77 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:53:01 ha-029113 crio[675]: time="2024-07-17 00:53:01.689325118Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721177581689303909,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0d5bcb85-62d9-4992-931c-839d7e25ff77 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:53:01 ha-029113 crio[675]: time="2024-07-17 00:53:01.690055265Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f38e86c3-5d00-43a2-9e93-48e1a3d7caf0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:53:01 ha-029113 crio[675]: time="2024-07-17 00:53:01.690134570Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f38e86c3-5d00-43a2-9e93-48e1a3d7caf0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:53:01 ha-029113 crio[675]: time="2024-07-17 00:53:01.690419242Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf4870ffc6ba7cf7b50ae09cfb1b393746f7a5152c53089614e9b07b30aee219,PodSandboxId:a45c7f17109af295fd8afd8d3d7ac1b2d54517db5ca206ff393a5b9cc8c7cadb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721177307303856239,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pf5xn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c25795f2-3205-495b-83b1-e3afd79b87b5,},Annotations:map[string]string{io.kubernetes.container.hash: 78a43c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ba7b13f793c3a06d8fbfe335c9983449c36750149da746239b5d30f43f9e80d,PodSandboxId:50f310bc4d109b91ab1bdfc5a369ea5b936bcdfab1c3c3e494b33bc91202cdf9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721177077814345806,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f04e5d-469e-4432-bd31-dbe772194f84,},Annotations:map[string]string{io.kubernetes.container.hash: af7d5ead,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3b600dde6603420f9edd85561f798c02ecea709368a2a6b922a3e812198caa,PodSandboxId:f8a5889bb1d2bc6fc103eb2de48515a8de335a3970107dc7e37e0c22d7a122a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177077742709925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdlls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4344b971-b979-42f8-8fa8-01f2d64bb51a,},Annotations:map[string]string{io.kubernetes.container.hash: 92aed468,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:708012203a1a0c288ad05e4f57829807ab3a232d3085326c1ba8459d2a3964fc,PodSandboxId:9323719ef65477afea1f0946bd6a2c1e18bb115e22dd9402ce719955b37f0450,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177077777210066,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62m67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5029f9dc-67
92-44d9-9296-ec5ab6d72274,},Annotations:map[string]string{io.kubernetes.container.hash: b4292bc8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ce89e605287179bb1c9551273ee5355577c744e43f9e9eee59d6939e47680e,PodSandboxId:a30304f1d93beec21b943e8d0bdda82fd14ec8ef078fe74dc48282c421a8da13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CO
NTAINER_RUNNING,CreatedAt:1721177065689728330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8xg7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a612c634-49ef-4357-9b36-f5cc6604bdd7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf5f516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b3cbbc53732e70a6bb66be9909790395a4901c4730eea9ec8b9349fed80909,PodSandboxId:9fc93d7901e92fce9ee6a04a1927e3489dc770d57d9fd38dac899f3ab057cf7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:172117706
0624674491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg2kp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9243f4-bcc0-406a-a8f2-ccdbc00f6341,},Annotations:map[string]string{io.kubernetes.container.hash: bf5cb09d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b360fa2cf3fbbfbc9242efc6915192666c7dd6e4307f909d862b874fbaab69,PodSandboxId:bc3706b14039859f793eac4e8624e7818234de71ca60cf085454b03586bf9d2a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17211770460
87746328,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7182fcebcafa632e8046b9a13a66b9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535a2b743f28f9760f7341aeb36dd3506cf26bdec15966f0498cbd52b73c8d11,PodSandboxId:9dca109899a3fdfc61bdea7b81459635bde9c71670c2636ee3f9b16cec6a2bcb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721177041034207282,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b30623d3b177a2ad33eea05d973c32ae,},Annotations:map[string]string{io.kubernetes.container.hash: 237bdd65,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af1a2d97ac6f8ec07b35a9b5d767af997cb8270ce3b4e681cba5557ed7119c85,PodSandboxId:5eb5a4397caa30b268143518f9f2a1880ae38ebd81aa2e5c40e7883a1c9c49b3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721177040986161344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfaf2cf7acfa42b80c634ab40c7656b2,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:425a9fc13cce841865da7956f1f32a375623313826ea8da126557d78f754b28c,PodSandboxId:42a4c594e59973a4c2533efc093434e5573531710ce6a4ecdc3cc9ed647d8158,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721177041005671448,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7b6e57bdff8504832bd80612dfbf7e,},Annotations:map[string]string{io.kubernetes.container.hash: c985e752,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad50613626477643d3e0c2f0a01a20d0cc987aa6e58083bbf3993d41f97acd0,PodSandboxId:ab2a446417d15b6e99d85160ca9be5ee4aa3f76cd5af1ddc919812f3b8e304ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721177040980708332,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72ab88d686ab6cc9d0420aba5d01a15,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f38e86c3-5d00-43a2-9e93-48e1a3d7caf0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:53:01 ha-029113 crio[675]: time="2024-07-17 00:53:01.726629719Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=96a2362f-8771-44cf-bd20-4c296da39b43 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:53:01 ha-029113 crio[675]: time="2024-07-17 00:53:01.726723077Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=96a2362f-8771-44cf-bd20-4c296da39b43 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:53:01 ha-029113 crio[675]: time="2024-07-17 00:53:01.728773092Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a1adec27-51dd-4b0a-9b01-b34c5cdfe9b8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:53:01 ha-029113 crio[675]: time="2024-07-17 00:53:01.729274953Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721177581729253876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1adec27-51dd-4b0a-9b01-b34c5cdfe9b8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:53:01 ha-029113 crio[675]: time="2024-07-17 00:53:01.729757876Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=55815e57-9788-4782-acb6-b6c6b3e2512d name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:53:01 ha-029113 crio[675]: time="2024-07-17 00:53:01.729873082Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=55815e57-9788-4782-acb6-b6c6b3e2512d name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:53:01 ha-029113 crio[675]: time="2024-07-17 00:53:01.730146221Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf4870ffc6ba7cf7b50ae09cfb1b393746f7a5152c53089614e9b07b30aee219,PodSandboxId:a45c7f17109af295fd8afd8d3d7ac1b2d54517db5ca206ff393a5b9cc8c7cadb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721177307303856239,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pf5xn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c25795f2-3205-495b-83b1-e3afd79b87b5,},Annotations:map[string]string{io.kubernetes.container.hash: 78a43c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ba7b13f793c3a06d8fbfe335c9983449c36750149da746239b5d30f43f9e80d,PodSandboxId:50f310bc4d109b91ab1bdfc5a369ea5b936bcdfab1c3c3e494b33bc91202cdf9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721177077814345806,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f04e5d-469e-4432-bd31-dbe772194f84,},Annotations:map[string]string{io.kubernetes.container.hash: af7d5ead,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3b600dde6603420f9edd85561f798c02ecea709368a2a6b922a3e812198caa,PodSandboxId:f8a5889bb1d2bc6fc103eb2de48515a8de335a3970107dc7e37e0c22d7a122a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177077742709925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdlls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4344b971-b979-42f8-8fa8-01f2d64bb51a,},Annotations:map[string]string{io.kubernetes.container.hash: 92aed468,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:708012203a1a0c288ad05e4f57829807ab3a232d3085326c1ba8459d2a3964fc,PodSandboxId:9323719ef65477afea1f0946bd6a2c1e18bb115e22dd9402ce719955b37f0450,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177077777210066,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62m67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5029f9dc-67
92-44d9-9296-ec5ab6d72274,},Annotations:map[string]string{io.kubernetes.container.hash: b4292bc8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ce89e605287179bb1c9551273ee5355577c744e43f9e9eee59d6939e47680e,PodSandboxId:a30304f1d93beec21b943e8d0bdda82fd14ec8ef078fe74dc48282c421a8da13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CO
NTAINER_RUNNING,CreatedAt:1721177065689728330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8xg7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a612c634-49ef-4357-9b36-f5cc6604bdd7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf5f516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b3cbbc53732e70a6bb66be9909790395a4901c4730eea9ec8b9349fed80909,PodSandboxId:9fc93d7901e92fce9ee6a04a1927e3489dc770d57d9fd38dac899f3ab057cf7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:172117706
0624674491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg2kp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9243f4-bcc0-406a-a8f2-ccdbc00f6341,},Annotations:map[string]string{io.kubernetes.container.hash: bf5cb09d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b360fa2cf3fbbfbc9242efc6915192666c7dd6e4307f909d862b874fbaab69,PodSandboxId:bc3706b14039859f793eac4e8624e7818234de71ca60cf085454b03586bf9d2a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17211770460
87746328,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7182fcebcafa632e8046b9a13a66b9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535a2b743f28f9760f7341aeb36dd3506cf26bdec15966f0498cbd52b73c8d11,PodSandboxId:9dca109899a3fdfc61bdea7b81459635bde9c71670c2636ee3f9b16cec6a2bcb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721177041034207282,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b30623d3b177a2ad33eea05d973c32ae,},Annotations:map[string]string{io.kubernetes.container.hash: 237bdd65,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af1a2d97ac6f8ec07b35a9b5d767af997cb8270ce3b4e681cba5557ed7119c85,PodSandboxId:5eb5a4397caa30b268143518f9f2a1880ae38ebd81aa2e5c40e7883a1c9c49b3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721177040986161344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfaf2cf7acfa42b80c634ab40c7656b2,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:425a9fc13cce841865da7956f1f32a375623313826ea8da126557d78f754b28c,PodSandboxId:42a4c594e59973a4c2533efc093434e5573531710ce6a4ecdc3cc9ed647d8158,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721177041005671448,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7b6e57bdff8504832bd80612dfbf7e,},Annotations:map[string]string{io.kubernetes.container.hash: c985e752,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ad50613626477643d3e0c2f0a01a20d0cc987aa6e58083bbf3993d41f97acd0,PodSandboxId:ab2a446417d15b6e99d85160ca9be5ee4aa3f76cd5af1ddc919812f3b8e304ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721177040980708332,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72ab88d686ab6cc9d0420aba5d01a15,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=55815e57-9788-4782-acb6-b6c6b3e2512d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cf4870ffc6ba7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   a45c7f17109af       busybox-fc5497c4f-pf5xn
	4ba7b13f793c3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Running             storage-provisioner       0                   50f310bc4d109       storage-provisioner
	708012203a1a0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago       Running             coredns                   0                   9323719ef6547       coredns-7db6d8ff4d-62m67
	0f3b600dde660       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago       Running             coredns                   0                   f8a5889bb1d2b       coredns-7db6d8ff4d-xdlls
	14ce89e605287       docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381    8 minutes ago       Running             kindnet-cni               0                   a30304f1d93be       kindnet-8xg7d
	21b3cbbc53732       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      8 minutes ago       Running             kube-proxy                0                   9fc93d7901e92       kube-proxy-hg2kp
	c8b360fa2cf3f       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     8 minutes ago       Running             kube-vip                  0                   bc3706b140398       kube-vip-ha-029113
	535a2b743f28f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      9 minutes ago       Running             etcd                      0                   9dca109899a3f       etcd-ha-029113
	425a9fc13cce8       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      9 minutes ago       Running             kube-apiserver            0                   42a4c594e5997       kube-apiserver-ha-029113
	af1a2d97ac6f8       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      9 minutes ago       Running             kube-scheduler            0                   5eb5a4397caa3       kube-scheduler-ha-029113
	8ad5061362647       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      9 minutes ago       Running             kube-controller-manager   0                   ab2a446417d15       kube-controller-manager-ha-029113
	
	
	==> coredns [0f3b600dde6603420f9edd85561f798c02ecea709368a2a6b922a3e812198caa] <==
	[INFO] 10.244.0.4:51874 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000567681s
	[INFO] 10.244.2.2:49111 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126235s
	[INFO] 10.244.2.2:36462 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000086647s
	[INFO] 10.244.2.2:55125 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000067538s
	[INFO] 10.244.1.2:39895 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159278s
	[INFO] 10.244.1.2:60685 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000209662s
	[INFO] 10.244.1.2:59157 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00911229s
	[INFO] 10.244.0.4:33726 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164542s
	[INFO] 10.244.0.4:35638 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107633s
	[INFO] 10.244.0.4:36083 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148234s
	[INFO] 10.244.0.4:49455 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000157722s
	[INFO] 10.244.2.2:43892 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122973s
	[INFO] 10.244.2.2:45729 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013946s
	[INFO] 10.244.0.4:55198 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100375s
	[INFO] 10.244.0.4:59468 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106412s
	[INFO] 10.244.0.4:37401 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124966s
	[INFO] 10.244.0.4:60799 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012109s
	[INFO] 10.244.2.2:34189 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127044s
	[INFO] 10.244.2.2:42164 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116232s
	[INFO] 10.244.2.2:45045 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090238s
	[INFO] 10.244.1.2:51035 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200282s
	[INFO] 10.244.1.2:55956 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000190607s
	[INFO] 10.244.1.2:54538 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000177763s
	[INFO] 10.244.0.4:33888 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013825s
	[INFO] 10.244.2.2:47245 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000251032s
	
	
	==> coredns [708012203a1a0c288ad05e4f57829807ab3a232d3085326c1ba8459d2a3964fc] <==
	[INFO] 10.244.1.2:35563 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000247156s
	[INFO] 10.244.1.2:58955 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000217672s
	[INFO] 10.244.1.2:58564 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129741s
	[INFO] 10.244.0.4:42072 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00197568s
	[INFO] 10.244.0.4:42572 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001184183s
	[INFO] 10.244.0.4:59867 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000056093s
	[INFO] 10.244.0.4:34082 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00003321s
	[INFO] 10.244.2.2:43902 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001968463s
	[INFO] 10.244.2.2:54035 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000207692s
	[INFO] 10.244.2.2:33997 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001225386s
	[INFO] 10.244.2.2:45029 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109563s
	[INFO] 10.244.2.2:39017 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092433s
	[INFO] 10.244.2.2:54230 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169232s
	[INFO] 10.244.1.2:47885 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195059s
	[INFO] 10.244.1.2:52609 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101063s
	[INFO] 10.244.1.2:45870 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090685s
	[INFO] 10.244.1.2:54516 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000081368s
	[INFO] 10.244.2.2:33988 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080469s
	[INFO] 10.244.1.2:34772 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000287318s
	[INFO] 10.244.0.4:35803 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000085391s
	[INFO] 10.244.0.4:50190 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000162301s
	[INFO] 10.244.0.4:40910 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000130903s
	[INFO] 10.244.2.2:33875 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129913s
	[INFO] 10.244.2.2:51223 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000090521s
	[INFO] 10.244.2.2:58679 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000073592s
	
	
	==> describe nodes <==
	Name:               ha-029113
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-029113
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=ha-029113
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T00_44_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:44:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-029113
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:52:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:48:43 +0000   Wed, 17 Jul 2024 00:44:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:48:43 +0000   Wed, 17 Jul 2024 00:44:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:48:43 +0000   Wed, 17 Jul 2024 00:44:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:48:43 +0000   Wed, 17 Jul 2024 00:44:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.95
	  Hostname:    ha-029113
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a51546f0529f4ddaa3a150daaabbe791
	  System UUID:                a51546f0-529f-4dda-a3a1-50daaabbe791
	  Boot ID:                    644e2f47-3b52-421d-bf4d-394d43757773
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pf5xn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 coredns-7db6d8ff4d-62m67             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m42s
	  kube-system                 coredns-7db6d8ff4d-xdlls             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m42s
	  kube-system                 etcd-ha-029113                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m55s
	  kube-system                 kindnet-8xg7d                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m42s
	  kube-system                 kube-apiserver-ha-029113             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m55s
	  kube-system                 kube-controller-manager-ha-029113    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m55s
	  kube-system                 kube-proxy-hg2kp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m42s
	  kube-system                 kube-scheduler-ha-029113             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m55s
	  kube-system                 kube-vip-ha-029113                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m55s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m40s  kube-proxy       
	  Normal  Starting                 8m55s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m55s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m55s  kubelet          Node ha-029113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m55s  kubelet          Node ha-029113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m55s  kubelet          Node ha-029113 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m43s  node-controller  Node ha-029113 event: Registered Node ha-029113 in Controller
	  Normal  NodeReady                8m25s  kubelet          Node ha-029113 status is now: NodeReady
	  Normal  RegisteredNode           6m9s   node-controller  Node ha-029113 event: Registered Node ha-029113 in Controller
	  Normal  RegisteredNode           4m51s  node-controller  Node ha-029113 event: Registered Node ha-029113 in Controller
	
	
	Name:               ha-029113-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-029113-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=ha-029113
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T00_46_39_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:46:37 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-029113-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:49:42 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Jul 2024 00:48:40 +0000   Wed, 17 Jul 2024 00:50:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Jul 2024 00:48:40 +0000   Wed, 17 Jul 2024 00:50:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Jul 2024 00:48:40 +0000   Wed, 17 Jul 2024 00:50:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Jul 2024 00:48:40 +0000   Wed, 17 Jul 2024 00:50:22 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.166
	  Hostname:    ha-029113-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 caba57241163431db23fb698d4481f00
	  System UUID:                caba5724-1163-431d-b23f-b698d4481f00
	  Boot ID:                    1849ca60-159d-4fda-b3e8-c6287316fa16
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-l4ctd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 etcd-ha-029113-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m23s
	  kube-system                 kindnet-k7vzq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m25s
	  kube-system                 kube-apiserver-ha-029113-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-controller-manager-ha-029113-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-proxy-2wz5p                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-scheduler-ha-029113-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-vip-ha-029113-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m21s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m25s (x8 over 6m25s)  kubelet          Node ha-029113-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m25s (x8 over 6m25s)  kubelet          Node ha-029113-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m25s (x7 over 6m25s)  kubelet          Node ha-029113-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m23s                  node-controller  Node ha-029113-m02 event: Registered Node ha-029113-m02 in Controller
	  Normal  RegisteredNode           6m9s                   node-controller  Node ha-029113-m02 event: Registered Node ha-029113-m02 in Controller
	  Normal  RegisteredNode           4m51s                  node-controller  Node ha-029113-m02 event: Registered Node ha-029113-m02 in Controller
	  Normal  NodeNotReady             2m40s                  node-controller  Node ha-029113-m02 status is now: NodeNotReady
	
	
	Name:               ha-029113-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-029113-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=ha-029113
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T00_47_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:47:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-029113-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:53:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:48:55 +0000   Wed, 17 Jul 2024 00:47:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:48:55 +0000   Wed, 17 Jul 2024 00:47:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:48:55 +0000   Wed, 17 Jul 2024 00:47:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:48:55 +0000   Wed, 17 Jul 2024 00:48:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    ha-029113-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2e1b2e5e3744938b38fb857e0123a96
	  System UUID:                d2e1b2e5-e374-4938-b38f-b857e0123a96
	  Boot ID:                    1470bc32-c0a6-4d87-8e4e-b7ae7580ad8b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-w8w7k                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 etcd-ha-029113-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m6s
	  kube-system                 kindnet-k2jgh                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m8s
	  kube-system                 kube-apiserver-ha-029113-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-controller-manager-ha-029113-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-proxy-pfdt9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-scheduler-ha-029113-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-vip-ha-029113-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m4s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  5m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m8s (x8 over 5m9s)  kubelet          Node ha-029113-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m8s (x8 over 5m9s)  kubelet          Node ha-029113-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m8s (x7 over 5m9s)  kubelet          Node ha-029113-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m7s                 node-controller  Node ha-029113-m03 event: Registered Node ha-029113-m03 in Controller
	  Normal  RegisteredNode           5m4s                 node-controller  Node ha-029113-m03 event: Registered Node ha-029113-m03 in Controller
	  Normal  RegisteredNode           4m51s                node-controller  Node ha-029113-m03 event: Registered Node ha-029113-m03 in Controller
	
	
	Name:               ha-029113-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-029113-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=ha-029113
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T00_49_04_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:49:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-029113-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:52:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:49:34 +0000   Wed, 17 Jul 2024 00:49:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:49:34 +0000   Wed, 17 Jul 2024 00:49:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:49:34 +0000   Wed, 17 Jul 2024 00:49:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:49:34 +0000   Wed, 17 Jul 2024 00:49:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.48
	  Hostname:    ha-029113-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6434efc175e64e719bbbb464b6a52834
	  System UUID:                6434efc1-75e6-4e71-9bbb-b464b6a52834
	  Boot ID:                    4d92357f-baa3-4da1-81cb-b140aac67591
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8d2dk       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m58s
	  kube-system                 kube-proxy-m559l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m58s (x2 over 3m58s)  kubelet          Node ha-029113-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m58s (x2 over 3m58s)  kubelet          Node ha-029113-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m58s (x2 over 3m58s)  kubelet          Node ha-029113-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m57s                  node-controller  Node ha-029113-m04 event: Registered Node ha-029113-m04 in Controller
	  Normal  RegisteredNode           3m55s                  node-controller  Node ha-029113-m04 event: Registered Node ha-029113-m04 in Controller
	  Normal  RegisteredNode           3m54s                  node-controller  Node ha-029113-m04 event: Registered Node ha-029113-m04 in Controller
	  Normal  NodeReady                3m37s                  kubelet          Node ha-029113-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul17 00:43] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049715] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039154] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.503743] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.101957] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.559408] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.079107] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.061657] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066531] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.165658] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.134006] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.290239] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.155347] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +4.001195] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +0.055206] kauditd_printk_skb: 158 callbacks suppressed
	[Jul17 00:44] kauditd_printk_skb: 74 callbacks suppressed
	[  +1.000074] systemd-fstab-generator[1348]: Ignoring "noauto" option for root device
	[  +6.568819] kauditd_printk_skb: 23 callbacks suppressed
	[ +12.108545] kauditd_printk_skb: 29 callbacks suppressed
	[Jul17 00:46] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [535a2b743f28f9760f7341aeb36dd3506cf26bdec15966f0498cbd52b73c8d11] <==
	{"level":"warn","ts":"2024-07-17T00:53:01.821332Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:53:01.921393Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:53:01.970464Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:53:01.979728Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:53:01.98524Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:53:01.997912Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:53:02.003415Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:53:02.010085Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:53:02.013163Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:53:02.016969Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:53:02.020927Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:53:02.026229Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:53:02.032062Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:53:02.040361Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:53:02.047542Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:53:02.051354Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:53:02.062404Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:53:02.068114Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:53:02.074406Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:53:02.078307Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:53:02.083174Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:53:02.089652Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:53:02.096343Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:53:02.104104Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:53:02.124901Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:53:02 up 9 min,  0 users,  load average: 0.01, 0.18, 0.13
	Linux ha-029113 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [14ce89e605287179bb1c9551273ee5355577c744e43f9e9eee59d6939e47680e] <==
	I0717 00:52:26.817732       1 main.go:326] Node ha-029113-m04 has CIDR [10.244.3.0/24] 
	I0717 00:52:36.816557       1 main.go:299] Handling node with IPs: map[192.168.39.95:{}]
	I0717 00:52:36.816660       1 main.go:303] handling current node
	I0717 00:52:36.816688       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0717 00:52:36.816706       1 main.go:326] Node ha-029113-m02 has CIDR [10.244.1.0/24] 
	I0717 00:52:36.816918       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 00:52:36.816948       1 main.go:326] Node ha-029113-m03 has CIDR [10.244.2.0/24] 
	I0717 00:52:36.817017       1 main.go:299] Handling node with IPs: map[192.168.39.48:{}]
	I0717 00:52:36.817036       1 main.go:326] Node ha-029113-m04 has CIDR [10.244.3.0/24] 
	I0717 00:52:46.824109       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0717 00:52:46.824170       1 main.go:326] Node ha-029113-m02 has CIDR [10.244.1.0/24] 
	I0717 00:52:46.824340       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 00:52:46.824349       1 main.go:326] Node ha-029113-m03 has CIDR [10.244.2.0/24] 
	I0717 00:52:46.824397       1 main.go:299] Handling node with IPs: map[192.168.39.48:{}]
	I0717 00:52:46.824403       1 main.go:326] Node ha-029113-m04 has CIDR [10.244.3.0/24] 
	I0717 00:52:46.824453       1 main.go:299] Handling node with IPs: map[192.168.39.95:{}]
	I0717 00:52:46.824476       1 main.go:303] handling current node
	I0717 00:52:56.816448       1 main.go:299] Handling node with IPs: map[192.168.39.48:{}]
	I0717 00:52:56.816490       1 main.go:326] Node ha-029113-m04 has CIDR [10.244.3.0/24] 
	I0717 00:52:56.816629       1 main.go:299] Handling node with IPs: map[192.168.39.95:{}]
	I0717 00:52:56.816636       1 main.go:303] handling current node
	I0717 00:52:56.816646       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0717 00:52:56.816650       1 main.go:326] Node ha-029113-m02 has CIDR [10.244.1.0/24] 
	I0717 00:52:56.816703       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 00:52:56.816708       1 main.go:326] Node ha-029113-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [425a9fc13cce841865da7956f1f32a375623313826ea8da126557d78f754b28c] <==
	I0717 00:44:05.783851       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0717 00:44:05.823239       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.95]
	I0717 00:44:05.825457       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 00:44:05.880453       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 00:44:05.889683       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 00:44:07.296728       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 00:44:07.315459       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0717 00:44:07.478497       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 00:44:19.981153       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0717 00:44:20.058042       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0717 00:48:28.655552       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58912: use of closed network connection
	E0717 00:48:28.836739       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58918: use of closed network connection
	E0717 00:48:29.024613       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58936: use of closed network connection
	E0717 00:48:29.231161       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58954: use of closed network connection
	E0717 00:48:29.417082       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58962: use of closed network connection
	E0717 00:48:29.597423       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58968: use of closed network connection
	E0717 00:48:29.770061       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53198: use of closed network connection
	E0717 00:48:29.966015       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53228: use of closed network connection
	E0717 00:48:30.140410       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53254: use of closed network connection
	E0717 00:48:30.431530       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53278: use of closed network connection
	E0717 00:48:30.598063       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53296: use of closed network connection
	E0717 00:48:30.787058       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53312: use of closed network connection
	E0717 00:48:30.954768       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53324: use of closed network connection
	E0717 00:48:31.133598       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53342: use of closed network connection
	E0717 00:48:31.324604       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53360: use of closed network connection
	
	
	==> kube-controller-manager [8ad50613626477643d3e0c2f0a01a20d0cc987aa6e58083bbf3993d41f97acd0] <==
	I0717 00:47:54.021564       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-029113-m03\" does not exist"
	I0717 00:47:54.040384       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-029113-m03" podCIDRs=["10.244.2.0/24"]
	I0717 00:47:55.009721       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-029113-m03"
	I0717 00:48:22.927156       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="110.365216ms"
	I0717 00:48:23.083235       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="156.007355ms"
	I0717 00:48:23.319394       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="235.658965ms"
	I0717 00:48:23.381480       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.96215ms"
	I0717 00:48:23.381728       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.361µs"
	I0717 00:48:23.985438       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="111.076µs"
	I0717 00:48:27.196292       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.183µs"
	I0717 00:48:27.498334       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.968656ms"
	I0717 00:48:27.498416       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.311µs"
	I0717 00:48:27.690412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.025701ms"
	I0717 00:48:27.690515       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.229µs"
	I0717 00:48:28.159492       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.357821ms"
	I0717 00:48:28.159609       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.001µs"
	E0717 00:49:04.181227       1 certificate_controller.go:146] Sync csr-ts9s6 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-ts9s6": the object has been modified; please apply your changes to the latest version and try again
	E0717 00:49:04.200175       1 certificate_controller.go:146] Sync csr-ts9s6 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-ts9s6": the object has been modified; please apply your changes to the latest version and try again
	I0717 00:49:04.289902       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-029113-m04\" does not exist"
	I0717 00:49:04.332777       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-029113-m04" podCIDRs=["10.244.3.0/24"]
	I0717 00:49:05.020575       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-029113-m04"
	I0717 00:49:25.719496       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-029113-m04"
	I0717 00:50:22.096635       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-029113-m04"
	I0717 00:50:22.216154       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.144368ms"
	I0717 00:50:22.216282       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.008µs"
	
	
	==> kube-proxy [21b3cbbc53732e70a6bb66be9909790395a4901c4730eea9ec8b9349fed80909] <==
	I0717 00:44:20.962872       1 server_linux.go:69] "Using iptables proxy"
	I0717 00:44:21.008068       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.95"]
	I0717 00:44:21.079783       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 00:44:21.079872       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 00:44:21.079899       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:44:21.090869       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:44:21.091523       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:44:21.091559       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:44:21.102362       1 config.go:192] "Starting service config controller"
	I0717 00:44:21.102889       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:44:21.104936       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:44:21.104947       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:44:21.118916       1 config.go:319] "Starting node config controller"
	I0717 00:44:21.118947       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 00:44:21.204843       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:44:21.205001       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 00:44:21.218983       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [af1a2d97ac6f8ec07b35a9b5d767af997cb8270ce3b4e681cba5557ed7119c85] <==
	W0717 00:44:05.319128       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 00:44:05.319155       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 00:44:05.327975       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 00:44:05.328915       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 00:44:05.337181       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 00:44:05.337203       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0717 00:44:06.724112       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0717 00:48:22.923258       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-w8w7k\": pod busybox-fc5497c4f-w8w7k is already assigned to node \"ha-029113-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-w8w7k" node="ha-029113-m03"
	E0717 00:48:22.923515       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 7490d1f3-1a14-41f1-a79b-451dd21902f7(default/busybox-fc5497c4f-w8w7k) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-w8w7k"
	E0717 00:48:22.923620       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-w8w7k\": pod busybox-fc5497c4f-w8w7k is already assigned to node \"ha-029113-m03\"" pod="default/busybox-fc5497c4f-w8w7k"
	I0717 00:48:22.923687       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-w8w7k" node="ha-029113-m03"
	E0717 00:48:22.931036       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-pf5xn\": pod busybox-fc5497c4f-pf5xn is already assigned to node \"ha-029113\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-pf5xn" node="ha-029113"
	E0717 00:48:22.931118       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod c25795f2-3205-495b-83b1-e3afd79b87b5(default/busybox-fc5497c4f-pf5xn) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-pf5xn"
	E0717 00:48:22.931139       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-pf5xn\": pod busybox-fc5497c4f-pf5xn is already assigned to node \"ha-029113\"" pod="default/busybox-fc5497c4f-pf5xn"
	I0717 00:48:22.931160       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-pf5xn" node="ha-029113"
	E0717 00:49:04.360621       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mxgns\": pod kindnet-mxgns is already assigned to node \"ha-029113-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-mxgns" node="ha-029113-m04"
	E0717 00:49:04.361827       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mxgns\": pod kindnet-mxgns is already assigned to node \"ha-029113-m04\"" pod="kube-system/kindnet-mxgns"
	E0717 00:49:04.377510       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-m559l\": pod kube-proxy-m559l is already assigned to node \"ha-029113-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-m559l" node="ha-029113-m04"
	E0717 00:49:04.378263       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 4bfab6d9-01f3-4918-9ea6-0dcd75f65a06(kube-system/kube-proxy-m559l) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-m559l"
	E0717 00:49:04.378516       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-m559l\": pod kube-proxy-m559l is already assigned to node \"ha-029113-m04\"" pod="kube-system/kube-proxy-m559l"
	I0717 00:49:04.378675       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-m559l" node="ha-029113-m04"
	E0717 00:49:04.417728       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rlrzj\": pod kindnet-rlrzj is already assigned to node \"ha-029113-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-rlrzj" node="ha-029113-m04"
	E0717 00:49:04.417898       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 015fb38f-0f76-4843-81fc-1eaa7fcd0c79(kube-system/kindnet-rlrzj) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-rlrzj"
	E0717 00:49:04.417990       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rlrzj\": pod kindnet-rlrzj is already assigned to node \"ha-029113-m04\"" pod="kube-system/kindnet-rlrzj"
	I0717 00:49:04.418024       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rlrzj" node="ha-029113-m04"
	
	
	==> kubelet <==
	Jul 17 00:48:07 ha-029113 kubelet[1354]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:48:07 ha-029113 kubelet[1354]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:48:07 ha-029113 kubelet[1354]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:48:22 ha-029113 kubelet[1354]: I0717 00:48:22.888423    1354 topology_manager.go:215] "Topology Admit Handler" podUID="c25795f2-3205-495b-83b1-e3afd79b87b5" podNamespace="default" podName="busybox-fc5497c4f-pf5xn"
	Jul 17 00:48:23 ha-029113 kubelet[1354]: I0717 00:48:23.024683    1354 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqlnr\" (UniqueName: \"kubernetes.io/projected/c25795f2-3205-495b-83b1-e3afd79b87b5-kube-api-access-pqlnr\") pod \"busybox-fc5497c4f-pf5xn\" (UID: \"c25795f2-3205-495b-83b1-e3afd79b87b5\") " pod="default/busybox-fc5497c4f-pf5xn"
	Jul 17 00:49:07 ha-029113 kubelet[1354]: E0717 00:49:07.511734    1354 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:49:07 ha-029113 kubelet[1354]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:49:07 ha-029113 kubelet[1354]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:49:07 ha-029113 kubelet[1354]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:49:07 ha-029113 kubelet[1354]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:50:07 ha-029113 kubelet[1354]: E0717 00:50:07.512125    1354 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:50:07 ha-029113 kubelet[1354]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:50:07 ha-029113 kubelet[1354]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:50:07 ha-029113 kubelet[1354]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:50:07 ha-029113 kubelet[1354]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:51:07 ha-029113 kubelet[1354]: E0717 00:51:07.511979    1354 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:51:07 ha-029113 kubelet[1354]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:51:07 ha-029113 kubelet[1354]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:51:07 ha-029113 kubelet[1354]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:51:07 ha-029113 kubelet[1354]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:52:07 ha-029113 kubelet[1354]: E0717 00:52:07.512581    1354 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:52:07 ha-029113 kubelet[1354]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:52:07 ha-029113 kubelet[1354]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:52:07 ha-029113 kubelet[1354]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:52:07 ha-029113 kubelet[1354]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-029113 -n ha-029113
helpers_test.go:261: (dbg) Run:  kubectl --context ha-029113 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (52.60s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (366.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-029113 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-029113 -v=7 --alsologtostderr
E0717 00:54:21.425917   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-029113 -v=7 --alsologtostderr: exit status 82 (2m1.844300537s)

                                                
                                                
-- stdout --
	* Stopping node "ha-029113-m04"  ...
	* Stopping node "ha-029113-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 00:53:03.543639   29544 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:53:03.544062   29544 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:53:03.544079   29544 out.go:304] Setting ErrFile to fd 2...
	I0717 00:53:03.544086   29544 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:53:03.544513   29544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 00:53:03.544917   29544 out.go:298] Setting JSON to false
	I0717 00:53:03.545037   29544 mustload.go:65] Loading cluster: ha-029113
	I0717 00:53:03.545657   29544 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:53:03.545750   29544 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/config.json ...
	I0717 00:53:03.545919   29544 mustload.go:65] Loading cluster: ha-029113
	I0717 00:53:03.546049   29544 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:53:03.546075   29544 stop.go:39] StopHost: ha-029113-m04
	I0717 00:53:03.546431   29544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:53:03.546481   29544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:53:03.561210   29544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38451
	I0717 00:53:03.561675   29544 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:53:03.562240   29544 main.go:141] libmachine: Using API Version  1
	I0717 00:53:03.562257   29544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:53:03.562601   29544 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:53:03.565151   29544 out.go:177] * Stopping node "ha-029113-m04"  ...
	I0717 00:53:03.566456   29544 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0717 00:53:03.566487   29544 main.go:141] libmachine: (ha-029113-m04) Calling .DriverName
	I0717 00:53:03.566752   29544 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0717 00:53:03.566775   29544 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHHostname
	I0717 00:53:03.569773   29544 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:53:03.570211   29544 main.go:141] libmachine: (ha-029113-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:a4:ba", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:48:45 +0000 UTC Type:0 Mac:52:54:00:be:a4:ba Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-029113-m04 Clientid:01:52:54:00:be:a4:ba}
	I0717 00:53:03.570238   29544 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:53:03.570352   29544 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHPort
	I0717 00:53:03.570534   29544 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHKeyPath
	I0717 00:53:03.570686   29544 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHUsername
	I0717 00:53:03.570840   29544 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m04/id_rsa Username:docker}
	I0717 00:53:03.658984   29544 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0717 00:53:03.712641   29544 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0717 00:53:03.766349   29544 main.go:141] libmachine: Stopping "ha-029113-m04"...
	I0717 00:53:03.766371   29544 main.go:141] libmachine: (ha-029113-m04) Calling .GetState
	I0717 00:53:03.767753   29544 main.go:141] libmachine: (ha-029113-m04) Calling .Stop
	I0717 00:53:03.771371   29544 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 0/120
	I0717 00:53:04.931340   29544 main.go:141] libmachine: (ha-029113-m04) Calling .GetState
	I0717 00:53:04.932523   29544 main.go:141] libmachine: Machine "ha-029113-m04" was stopped.
	I0717 00:53:04.932538   29544 stop.go:75] duration metric: took 1.366096738s to stop
	I0717 00:53:04.932574   29544 stop.go:39] StopHost: ha-029113-m03
	I0717 00:53:04.932927   29544 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:53:04.932971   29544 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:53:04.947688   29544 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43531
	I0717 00:53:04.948058   29544 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:53:04.948517   29544 main.go:141] libmachine: Using API Version  1
	I0717 00:53:04.948543   29544 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:53:04.948852   29544 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:53:04.950710   29544 out.go:177] * Stopping node "ha-029113-m03"  ...
	I0717 00:53:04.952058   29544 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0717 00:53:04.952075   29544 main.go:141] libmachine: (ha-029113-m03) Calling .DriverName
	I0717 00:53:04.952263   29544 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0717 00:53:04.952279   29544 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHHostname
	I0717 00:53:04.954819   29544 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:53:04.955210   29544 main.go:141] libmachine: (ha-029113-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:b5:1d", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:47:18 +0000 UTC Type:0 Mac:52:54:00:30:b5:1d Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-029113-m03 Clientid:01:52:54:00:30:b5:1d}
	I0717 00:53:04.955237   29544 main.go:141] libmachine: (ha-029113-m03) DBG | domain ha-029113-m03 has defined IP address 192.168.39.100 and MAC address 52:54:00:30:b5:1d in network mk-ha-029113
	I0717 00:53:04.955409   29544 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHPort
	I0717 00:53:04.955572   29544 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHKeyPath
	I0717 00:53:04.955690   29544 main.go:141] libmachine: (ha-029113-m03) Calling .GetSSHUsername
	I0717 00:53:04.955811   29544 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m03/id_rsa Username:docker}
	I0717 00:53:05.039031   29544 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0717 00:53:05.094110   29544 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0717 00:53:05.148360   29544 main.go:141] libmachine: Stopping "ha-029113-m03"...
	I0717 00:53:05.148382   29544 main.go:141] libmachine: (ha-029113-m03) Calling .GetState
	I0717 00:53:05.149882   29544 main.go:141] libmachine: (ha-029113-m03) Calling .Stop
	I0717 00:53:05.153341   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 0/120
	I0717 00:53:06.154733   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 1/120
	I0717 00:53:07.156064   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 2/120
	I0717 00:53:08.157268   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 3/120
	I0717 00:53:09.158641   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 4/120
	I0717 00:53:10.160646   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 5/120
	I0717 00:53:11.162013   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 6/120
	I0717 00:53:12.163636   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 7/120
	I0717 00:53:13.165036   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 8/120
	I0717 00:53:14.166519   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 9/120
	I0717 00:53:15.168170   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 10/120
	I0717 00:53:16.169664   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 11/120
	I0717 00:53:17.171067   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 12/120
	I0717 00:53:18.172697   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 13/120
	I0717 00:53:19.174189   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 14/120
	I0717 00:53:20.175681   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 15/120
	I0717 00:53:21.177135   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 16/120
	I0717 00:53:22.178539   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 17/120
	I0717 00:53:23.179930   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 18/120
	I0717 00:53:24.181398   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 19/120
	I0717 00:53:25.183263   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 20/120
	I0717 00:53:26.184838   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 21/120
	I0717 00:53:27.186090   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 22/120
	I0717 00:53:28.187508   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 23/120
	I0717 00:53:29.189275   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 24/120
	I0717 00:53:30.191269   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 25/120
	I0717 00:53:31.193143   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 26/120
	I0717 00:53:32.194711   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 27/120
	I0717 00:53:33.196060   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 28/120
	I0717 00:53:34.197474   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 29/120
	I0717 00:53:35.199472   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 30/120
	I0717 00:53:36.200865   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 31/120
	I0717 00:53:37.202898   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 32/120
	I0717 00:53:38.204476   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 33/120
	I0717 00:53:39.205999   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 34/120
	I0717 00:53:40.207732   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 35/120
	I0717 00:53:41.209006   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 36/120
	I0717 00:53:42.210362   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 37/120
	I0717 00:53:43.211803   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 38/120
	I0717 00:53:44.213143   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 39/120
	I0717 00:53:45.214890   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 40/120
	I0717 00:53:46.216018   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 41/120
	I0717 00:53:47.217332   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 42/120
	I0717 00:53:48.218735   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 43/120
	I0717 00:53:49.219941   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 44/120
	I0717 00:53:50.221570   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 45/120
	I0717 00:53:51.223096   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 46/120
	I0717 00:53:52.225073   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 47/120
	I0717 00:53:53.226352   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 48/120
	I0717 00:53:54.227769   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 49/120
	I0717 00:53:55.229295   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 50/120
	I0717 00:53:56.230420   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 51/120
	I0717 00:53:57.231591   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 52/120
	I0717 00:53:58.233109   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 53/120
	I0717 00:53:59.234470   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 54/120
	I0717 00:54:00.236016   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 55/120
	I0717 00:54:01.237189   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 56/120
	I0717 00:54:02.238487   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 57/120
	I0717 00:54:03.239970   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 58/120
	I0717 00:54:04.241270   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 59/120
	I0717 00:54:05.242950   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 60/120
	I0717 00:54:06.245142   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 61/120
	I0717 00:54:07.246353   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 62/120
	I0717 00:54:08.247685   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 63/120
	I0717 00:54:09.248894   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 64/120
	I0717 00:54:10.250442   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 65/120
	I0717 00:54:11.251704   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 66/120
	I0717 00:54:12.252824   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 67/120
	I0717 00:54:13.254270   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 68/120
	I0717 00:54:14.255934   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 69/120
	I0717 00:54:15.257839   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 70/120
	I0717 00:54:16.259063   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 71/120
	I0717 00:54:17.260685   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 72/120
	I0717 00:54:18.262016   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 73/120
	I0717 00:54:19.263372   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 74/120
	I0717 00:54:20.265395   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 75/120
	I0717 00:54:21.267474   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 76/120
	I0717 00:54:22.268750   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 77/120
	I0717 00:54:23.270674   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 78/120
	I0717 00:54:24.271994   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 79/120
	I0717 00:54:25.273986   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 80/120
	I0717 00:54:26.275308   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 81/120
	I0717 00:54:27.276866   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 82/120
	I0717 00:54:28.278587   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 83/120
	I0717 00:54:29.280038   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 84/120
	I0717 00:54:30.281736   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 85/120
	I0717 00:54:31.282905   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 86/120
	I0717 00:54:32.284955   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 87/120
	I0717 00:54:33.286207   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 88/120
	I0717 00:54:34.287546   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 89/120
	I0717 00:54:35.289201   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 90/120
	I0717 00:54:36.291068   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 91/120
	I0717 00:54:37.292878   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 92/120
	I0717 00:54:38.294327   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 93/120
	I0717 00:54:39.295624   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 94/120
	I0717 00:54:40.297798   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 95/120
	I0717 00:54:41.299154   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 96/120
	I0717 00:54:42.301249   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 97/120
	I0717 00:54:43.303000   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 98/120
	I0717 00:54:44.304443   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 99/120
	I0717 00:54:45.306214   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 100/120
	I0717 00:54:46.307590   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 101/120
	I0717 00:54:47.308810   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 102/120
	I0717 00:54:48.310083   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 103/120
	I0717 00:54:49.311397   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 104/120
	I0717 00:54:50.312736   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 105/120
	I0717 00:54:51.314258   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 106/120
	I0717 00:54:52.315419   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 107/120
	I0717 00:54:53.316648   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 108/120
	I0717 00:54:54.318148   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 109/120
	I0717 00:54:55.319560   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 110/120
	I0717 00:54:56.321070   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 111/120
	I0717 00:54:57.322133   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 112/120
	I0717 00:54:58.323685   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 113/120
	I0717 00:54:59.324835   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 114/120
	I0717 00:55:00.326857   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 115/120
	I0717 00:55:01.328232   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 116/120
	I0717 00:55:02.329530   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 117/120
	I0717 00:55:03.330805   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 118/120
	I0717 00:55:04.332035   29544 main.go:141] libmachine: (ha-029113-m03) Waiting for machine to stop 119/120
	I0717 00:55:05.332978   29544 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0717 00:55:05.333028   29544 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 00:55:05.334971   29544 out.go:177] 
	W0717 00:55:05.336388   29544 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0717 00:55:05.336403   29544 out.go:239] * 
	W0717 00:55:05.338750   29544 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 00:55:05.340892   29544 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-029113 -v=7 --alsologtostderr" : exit status 82
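For context on the failure above: the stderr log shows the kvm2 driver polling the VM state once per second for 120 attempts ("Waiting for machine to stop N/120"), then giving up with GUEST_STOP_TIMEOUT and a non-zero exit (status 82 here). A minimal sketch of that poll-with-budget pattern follows; it is illustrative only, not minikube's actual stop code, and every name in it is invented for the example.

package main

import (
	"fmt"
	"time"
)

// machineState stands in for whatever the real driver reports for the VM.
type machineState string

const (
	stateRunning machineState = "Running"
	stateStopped machineState = "Stopped"
)

// getState is a placeholder for the driver call that inspects the VM.
// Here it always reports Running, which reproduces the timeout path seen
// in the log above.
func getState() machineState { return stateRunning }

// waitForStop polls the VM state once per interval, up to maxAttempts times,
// mirroring the "Waiting for machine to stop N/120" lines in the log.
func waitForStop(maxAttempts int, interval time.Duration) error {
	for i := 0; i < maxAttempts; i++ {
		if getState() == stateStopped {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(interval)
	}
	return fmt.Errorf("unable to stop vm, current state %q", getState())
}

func main() {
	// The log above uses 120 one-second attempts (roughly a two-minute budget);
	// a short interval is used here so the sketch finishes quickly.
	if err := waitForStop(120, 10*time.Millisecond); err != nil {
		// The real CLI surfaces this as GUEST_STOP_TIMEOUT and exits non-zero.
		fmt.Println("stop host returned error: Temporary Error: stop:", err)
	}
}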
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-029113 --wait=true -v=7 --alsologtostderr
E0717 00:55:17.182916   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
E0717 00:57:58.379237   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-029113 --wait=true -v=7 --alsologtostderr: (4m2.352407052s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-029113
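Everything in these tests is driven by shelling out to the freshly built binary, as the "(dbg) Run:" lines show. Below is a hedged sketch of that pattern using only the Go standard library; the binary path and profile name are copied from the log, while the helper itself is invented for illustration and is not the harness's actual code.

package main

import (
	"fmt"
	"os/exec"
)

// runMinikube invokes the built binary the same way the harness's
// "(dbg) Run:" lines do, returning the combined output and the exit code
// so a caller can assert on either.
func runMinikube(args ...string) (string, int, error) {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	out, err := cmd.CombinedOutput()
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
		err = nil // a non-zero exit is reported via code, not as an error
	}
	return string(out), code, err
}

func main() {
	// Same invocation as ha_test.go:472 above: list the nodes of the
	// restarted "ha-029113" profile and complain on a non-zero exit.
	out, code, err := runMinikube("node", "list", "-p", "ha-029113")
	if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	if code != 0 {
		fmt.Printf("node list failed with exit status %d:\n%s", code, out)
		return
	}
	fmt.Print(out)
}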
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-029113 -n ha-029113
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-029113 logs -n 25: (1.977038844s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-029113 cp ha-029113-m03:/home/docker/cp-test.txt                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m02:/home/docker/cp-test_ha-029113-m03_ha-029113-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n ha-029113-m02 sudo cat                                         | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | /home/docker/cp-test_ha-029113-m03_ha-029113-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-029113 cp ha-029113-m03:/home/docker/cp-test.txt                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m04:/home/docker/cp-test_ha-029113-m03_ha-029113-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n ha-029113-m04 sudo cat                                         | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | /home/docker/cp-test_ha-029113-m03_ha-029113-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-029113 cp testdata/cp-test.txt                                               | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-029113 cp ha-029113-m04:/home/docker/cp-test.txt                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile695400083/001/cp-test_ha-029113-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-029113 cp ha-029113-m04:/home/docker/cp-test.txt                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113:/home/docker/cp-test_ha-029113-m04_ha-029113.txt                      |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n ha-029113 sudo cat                                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | /home/docker/cp-test_ha-029113-m04_ha-029113.txt                                |           |         |         |                     |                     |
	| cp      | ha-029113 cp ha-029113-m04:/home/docker/cp-test.txt                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m02:/home/docker/cp-test_ha-029113-m04_ha-029113-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n ha-029113-m02 sudo cat                                         | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | /home/docker/cp-test_ha-029113-m04_ha-029113-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-029113 cp ha-029113-m04:/home/docker/cp-test.txt                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m03:/home/docker/cp-test_ha-029113-m04_ha-029113-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n ha-029113-m03 sudo cat                                         | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | /home/docker/cp-test_ha-029113-m04_ha-029113-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-029113 node stop m02 -v=7                                                    | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-029113 node start m02 -v=7                                                   | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:52 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-029113 -v=7                                                          | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:53 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-029113 -v=7                                                               | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:53 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-029113 --wait=true -v=7                                                   | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:55 UTC | 17 Jul 24 00:59 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-029113                                                               | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:59 UTC |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:55:05
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:55:05.383491   29983 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:55:05.383937   29983 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:55:05.383950   29983 out.go:304] Setting ErrFile to fd 2...
	I0717 00:55:05.383957   29983 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:55:05.384474   29983 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 00:55:05.385384   29983 out.go:298] Setting JSON to false
	I0717 00:55:05.386429   29983 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2247,"bootTime":1721175458,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:55:05.386483   29983 start.go:139] virtualization: kvm guest
	I0717 00:55:05.388526   29983 out.go:177] * [ha-029113] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 00:55:05.389860   29983 notify.go:220] Checking for updates...
	I0717 00:55:05.389875   29983 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 00:55:05.391232   29983 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:55:05.392479   29983 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 00:55:05.394050   29983 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 00:55:05.395692   29983 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 00:55:05.397068   29983 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:55:05.398872   29983 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:55:05.398955   29983 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:55:05.399345   29983 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:55:05.399394   29983 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:55:05.417832   29983 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44409
	I0717 00:55:05.418300   29983 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:55:05.418850   29983 main.go:141] libmachine: Using API Version  1
	I0717 00:55:05.418869   29983 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:55:05.419197   29983 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:55:05.419361   29983 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:55:05.453309   29983 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 00:55:05.454537   29983 start.go:297] selected driver: kvm2
	I0717 00:55:05.454563   29983 start.go:901] validating driver "kvm2" against &{Name:ha-029113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.2 ClusterName:ha-029113 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.48 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:55:05.454726   29983 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:55:05.455073   29983 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:55:05.455140   29983 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19264-3908/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 00:55:05.469318   29983 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 00:55:05.469919   29983 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:55:05.469973   29983 cni.go:84] Creating CNI manager for ""
	I0717 00:55:05.469984   29983 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 00:55:05.470037   29983 start.go:340] cluster config:
	{Name:ha-029113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-029113 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.48 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:55:05.470149   29983 iso.go:125] acquiring lock: {Name:mk1e382ea32906eee794c9b90164432d2562b456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:55:05.472712   29983 out.go:177] * Starting "ha-029113" primary control-plane node in "ha-029113" cluster
	I0717 00:55:05.474260   29983 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:55:05.474290   29983 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 00:55:05.474298   29983 cache.go:56] Caching tarball of preloaded images
	I0717 00:55:05.474389   29983 preload.go:172] Found /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 00:55:05.474401   29983 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:55:05.474514   29983 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/config.json ...
	I0717 00:55:05.474724   29983 start.go:360] acquireMachinesLock for ha-029113: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 00:55:05.474772   29983 start.go:364] duration metric: took 30.592µs to acquireMachinesLock for "ha-029113"
	I0717 00:55:05.474799   29983 start.go:96] Skipping create...Using existing machine configuration
	I0717 00:55:05.474808   29983 fix.go:54] fixHost starting: 
	I0717 00:55:05.475043   29983 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:55:05.475075   29983 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:55:05.488771   29983 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36045
	I0717 00:55:05.489182   29983 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:55:05.489695   29983 main.go:141] libmachine: Using API Version  1
	I0717 00:55:05.489718   29983 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:55:05.490020   29983 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:55:05.490175   29983 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:55:05.490319   29983 main.go:141] libmachine: (ha-029113) Calling .GetState
	I0717 00:55:05.491722   29983 fix.go:112] recreateIfNeeded on ha-029113: state=Running err=<nil>
	W0717 00:55:05.491744   29983 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 00:55:05.493806   29983 out.go:177] * Updating the running kvm2 "ha-029113" VM ...
	I0717 00:55:05.495165   29983 machine.go:94] provisionDockerMachine start ...
	I0717 00:55:05.495182   29983 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:55:05.495368   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:55:05.497631   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:55:05.498052   29983 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:55:05.498074   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:55:05.498267   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:55:05.498417   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:55:05.498568   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:55:05.498741   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:55:05.498897   29983 main.go:141] libmachine: Using SSH client type: native
	I0717 00:55:05.499078   29983 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0717 00:55:05.499090   29983 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 00:55:05.603589   29983 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-029113
	
	I0717 00:55:05.603623   29983 main.go:141] libmachine: (ha-029113) Calling .GetMachineName
	I0717 00:55:05.603866   29983 buildroot.go:166] provisioning hostname "ha-029113"
	I0717 00:55:05.603890   29983 main.go:141] libmachine: (ha-029113) Calling .GetMachineName
	I0717 00:55:05.604063   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:55:05.606724   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:55:05.607141   29983 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:55:05.607160   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:55:05.607311   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:55:05.607473   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:55:05.607619   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:55:05.607749   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:55:05.607904   29983 main.go:141] libmachine: Using SSH client type: native
	I0717 00:55:05.608046   29983 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0717 00:55:05.608058   29983 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-029113 && echo "ha-029113" | sudo tee /etc/hostname
	I0717 00:55:05.724124   29983 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-029113
	
	I0717 00:55:05.724150   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:55:05.726891   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:55:05.727238   29983 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:55:05.727264   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:55:05.727421   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:55:05.727637   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:55:05.727785   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:55:05.727926   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:55:05.728084   29983 main.go:141] libmachine: Using SSH client type: native
	I0717 00:55:05.728246   29983 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0717 00:55:05.728274   29983 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-029113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-029113/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-029113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 00:55:05.827946   29983 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:55:05.827980   29983 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 00:55:05.828003   29983 buildroot.go:174] setting up certificates
	I0717 00:55:05.828014   29983 provision.go:84] configureAuth start
	I0717 00:55:05.828028   29983 main.go:141] libmachine: (ha-029113) Calling .GetMachineName
	I0717 00:55:05.828293   29983 main.go:141] libmachine: (ha-029113) Calling .GetIP
	I0717 00:55:05.831015   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:55:05.831537   29983 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:55:05.831567   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:55:05.831745   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:55:05.833696   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:55:05.834021   29983 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:55:05.834048   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:55:05.834146   29983 provision.go:143] copyHostCerts
	I0717 00:55:05.834174   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 00:55:05.834239   29983 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 00:55:05.834255   29983 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 00:55:05.834338   29983 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 00:55:05.834440   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 00:55:05.834464   29983 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 00:55:05.834470   29983 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 00:55:05.834515   29983 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 00:55:05.834606   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 00:55:05.834629   29983 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 00:55:05.834638   29983 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 00:55:05.834671   29983 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 00:55:05.834749   29983 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.ha-029113 san=[127.0.0.1 192.168.39.95 ha-029113 localhost minikube]
	I0717 00:55:05.974789   29983 provision.go:177] copyRemoteCerts
	I0717 00:55:05.974862   29983 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:55:05.974887   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:55:05.977324   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:55:05.977683   29983 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:55:05.977711   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:55:05.977898   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:55:05.978088   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:55:05.978255   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:55:05.978391   29983 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:55:06.057496   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 00:55:06.057564   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 00:55:06.088795   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 00:55:06.088853   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 00:55:06.114723   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 00:55:06.114780   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0717 00:55:06.146652   29983 provision.go:87] duration metric: took 318.61965ms to configureAuth
	I0717 00:55:06.146681   29983 buildroot.go:189] setting minikube options for container-runtime
	I0717 00:55:06.146923   29983 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:55:06.147010   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:55:06.149622   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:55:06.149996   29983 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:55:06.150020   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:55:06.150195   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:55:06.150397   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:55:06.150573   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:55:06.150709   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:55:06.150869   29983 main.go:141] libmachine: Using SSH client type: native
	I0717 00:55:06.151033   29983 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0717 00:55:06.151051   29983 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 00:56:36.922337   29983 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 00:56:36.922366   29983 machine.go:97] duration metric: took 1m31.427187344s to provisionDockerMachine
	I0717 00:56:36.922378   29983 start.go:293] postStartSetup for "ha-029113" (driver="kvm2")
	I0717 00:56:36.922388   29983 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 00:56:36.922401   29983 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:56:36.922709   29983 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 00:56:36.922731   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:56:36.925696   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:56:36.926069   29983 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:56:36.926098   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:56:36.926198   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:56:36.926367   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:56:36.926528   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:56:36.926646   29983 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:56:37.005605   29983 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 00:56:37.009755   29983 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 00:56:37.009775   29983 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 00:56:37.009823   29983 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 00:56:37.009894   29983 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 00:56:37.009903   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> /etc/ssl/certs/112592.pem
	I0717 00:56:37.010000   29983 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 00:56:37.019312   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 00:56:37.047579   29983 start.go:296] duration metric: took 125.18763ms for postStartSetup
	I0717 00:56:37.047619   29983 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:56:37.047920   29983 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 00:56:37.047943   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:56:37.050582   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:56:37.051006   29983 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:56:37.051029   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:56:37.051189   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:56:37.051377   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:56:37.051537   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:56:37.051697   29983 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	W0717 00:56:37.133080   29983 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0717 00:56:37.133102   29983 fix.go:56] duration metric: took 1m31.65829482s for fixHost
	I0717 00:56:37.133125   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:56:37.135706   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:56:37.136075   29983 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:56:37.136103   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:56:37.136215   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:56:37.136405   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:56:37.136575   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:56:37.136723   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:56:37.136882   29983 main.go:141] libmachine: Using SSH client type: native
	I0717 00:56:37.137108   29983 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0717 00:56:37.137125   29983 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 00:56:37.251355   29983 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721177797.223322601
	
	I0717 00:56:37.251392   29983 fix.go:216] guest clock: 1721177797.223322601
	I0717 00:56:37.251401   29983 fix.go:229] Guest: 2024-07-17 00:56:37.223322601 +0000 UTC Remote: 2024-07-17 00:56:37.133109222 +0000 UTC m=+91.782309028 (delta=90.213379ms)
	I0717 00:56:37.251434   29983 fix.go:200] guest clock delta is within tolerance: 90.213379ms
	I0717 00:56:37.251439   29983 start.go:83] releasing machines lock for "ha-029113", held for 1m31.776656084s
	I0717 00:56:37.251461   29983 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:56:37.251716   29983 main.go:141] libmachine: (ha-029113) Calling .GetIP
	I0717 00:56:37.254471   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:56:37.254864   29983 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:56:37.254881   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:56:37.255059   29983 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:56:37.255616   29983 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:56:37.255785   29983 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:56:37.255866   29983 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 00:56:37.255912   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:56:37.255986   29983 ssh_runner.go:195] Run: cat /version.json
	I0717 00:56:37.256003   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:56:37.258661   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:56:37.258913   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:56:37.259053   29983 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:56:37.259082   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:56:37.259183   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:56:37.259282   29983 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:56:37.259360   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:56:37.259576   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:56:37.259591   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:56:37.259746   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:56:37.259761   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:56:37.259936   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:56:37.259995   29983 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:56:37.260096   29983 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:56:37.332211   29983 ssh_runner.go:195] Run: systemctl --version
	I0717 00:56:37.358582   29983 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 00:56:37.519154   29983 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 00:56:37.524824   29983 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 00:56:37.524886   29983 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:56:37.533955   29983 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 00:56:37.533974   29983 start.go:495] detecting cgroup driver to use...
	I0717 00:56:37.534019   29983 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 00:56:37.550886   29983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 00:56:37.564477   29983 docker.go:217] disabling cri-docker service (if available) ...
	I0717 00:56:37.564535   29983 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 00:56:37.577933   29983 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 00:56:37.591423   29983 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 00:56:37.744811   29983 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 00:56:37.883175   29983 docker.go:233] disabling docker service ...
	I0717 00:56:37.883250   29983 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 00:56:37.899939   29983 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 00:56:37.912784   29983 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 00:56:38.053345   29983 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 00:56:38.201480   29983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 00:56:38.216109   29983 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 00:56:38.236452   29983 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 00:56:38.236521   29983 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:56:38.247620   29983 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 00:56:38.247679   29983 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:56:38.258380   29983 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:56:38.268968   29983 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:56:38.279830   29983 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 00:56:38.290311   29983 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:56:38.300507   29983 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:56:38.311975   29983 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
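Taken together, the sed edits above set the pause image, switch CRI-O to the cgroupfs cgroup manager, pin conmon to the "pod" cgroup, and allow unprivileged low ports via default_sysctls. A hypothetical spot-check of the values they should leave behind (expected lines are an approximation, not captured output):

    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected (roughly):
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",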
	I0717 00:56:38.322247   29983 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 00:56:38.331495   29983 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 00:56:38.340950   29983 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:56:38.481153   29983 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 00:56:38.757154   29983 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 00:56:38.757233   29983 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 00:56:38.762884   29983 start.go:563] Will wait 60s for crictl version
	I0717 00:56:38.762936   29983 ssh_runner.go:195] Run: which crictl
	I0717 00:56:38.766933   29983 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 00:56:38.802395   29983 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 00:56:38.802477   29983 ssh_runner.go:195] Run: crio --version
	I0717 00:56:38.835518   29983 ssh_runner.go:195] Run: crio --version
	I0717 00:56:38.866346   29983 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 00:56:38.867786   29983 main.go:141] libmachine: (ha-029113) Calling .GetIP
	I0717 00:56:38.870376   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:56:38.870822   29983 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:56:38.870848   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:56:38.871035   29983 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 00:56:38.875939   29983 kubeadm.go:883] updating cluster {Name:ha-029113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-029113 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.48 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 00:56:38.876067   29983 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:56:38.876101   29983 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:56:38.922327   29983 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 00:56:38.922349   29983 crio.go:433] Images already preloaded, skipping extraction
	I0717 00:56:38.922427   29983 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:56:38.959437   29983 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 00:56:38.959457   29983 cache_images.go:84] Images are preloaded, skipping loading
	I0717 00:56:38.959465   29983 kubeadm.go:934] updating node { 192.168.39.95 8443 v1.30.2 crio true true} ...
	I0717 00:56:38.959568   29983 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-029113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-029113 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
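The kubelet flags above are installed as a systemd drop-in (the 308-byte 10-kubeadm.conf copied a few lines below) alongside the kubelet.service unit. A quick way to confirm the unit picked them up after the daemon-reload (hypothetical verification commands, not part of the test run):

    systemctl cat kubelet                                        # shows kubelet.service plus its drop-ins
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf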
	I0717 00:56:38.959649   29983 ssh_runner.go:195] Run: crio config
	I0717 00:56:39.013116   29983 cni.go:84] Creating CNI manager for ""
	I0717 00:56:39.013136   29983 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 00:56:39.013146   29983 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 00:56:39.013175   29983 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.95 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-029113 NodeName:ha-029113 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 00:56:39.013307   29983 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.95
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-029113"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.95"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 00:56:39.013325   29983 kube-vip.go:115] generating kube-vip config ...
	I0717 00:56:39.013366   29983 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 00:56:39.025164   29983 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
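Control-plane load-balancing is auto-enabled here, apparently gated on the modprobe above succeeding. The same check can be reproduced by hand on the node (illustrative, not from this run):

    # Load the IPVS modules kube-vip's load balancer relies on, then confirm they are present.
    sudo modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack
    lsmod | grep -E '^(ip_vs|nf_conntrack)'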
	I0717 00:56:39.025279   29983 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
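This manifest is later copied to /etc/kubernetes/manifests/kube-vip.yaml (the 1441-byte scp below), so kubelet runs kube-vip as a static pod that advertises the HA VIP 192.168.39.254 on eth0. Once it is up, the VIP and the pod can be checked on the leader node with (illustrative commands, assumed environment):

    # The VIP should appear as a secondary address on eth0 of the current leader.
    ip addr show eth0 | grep 192.168.39.254
    # And the static pod should be visible to CRI-O:
    sudo crictl ps --name kube-vip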
	I0717 00:56:39.025330   29983 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 00:56:39.035063   29983 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 00:56:39.035135   29983 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0717 00:56:39.044312   29983 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0717 00:56:39.060961   29983 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 00:56:39.077560   29983 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0717 00:56:39.094429   29983 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0717 00:56:39.112781   29983 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 00:56:39.116810   29983 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:56:39.260438   29983 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:56:39.276419   29983 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113 for IP: 192.168.39.95
	I0717 00:56:39.276455   29983 certs.go:194] generating shared ca certs ...
	I0717 00:56:39.276469   29983 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:56:39.276640   29983 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 00:56:39.276688   29983 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 00:56:39.276696   29983 certs.go:256] generating profile certs ...
	I0717 00:56:39.276807   29983 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.key
	I0717 00:56:39.276842   29983 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.5f30ed13
	I0717 00:56:39.276862   29983 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.5f30ed13 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.95 192.168.39.166 192.168.39.100 192.168.39.254]
	I0717 00:56:39.417192   29983 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.5f30ed13 ...
	I0717 00:56:39.417223   29983 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.5f30ed13: {Name:mka5e562e601efbe0a1950f918014c0baf1c3196 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:56:39.417392   29983 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.5f30ed13 ...
	I0717 00:56:39.417404   29983 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.5f30ed13: {Name:mkaf79bf149acd16cf17ccae5a21d9e04c41a0d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:56:39.417472   29983 certs.go:381] copying /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.5f30ed13 -> /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt
	I0717 00:56:39.417602   29983 certs.go:385] copying /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.5f30ed13 -> /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key
	I0717 00:56:39.417718   29983 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.key
	I0717 00:56:39.417732   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 00:56:39.417744   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 00:56:39.417755   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 00:56:39.417767   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 00:56:39.417779   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 00:56:39.417789   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 00:56:39.417800   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 00:56:39.417812   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 00:56:39.417896   29983 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 00:56:39.417937   29983 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 00:56:39.417946   29983 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 00:56:39.417970   29983 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 00:56:39.417999   29983 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 00:56:39.418028   29983 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 00:56:39.418063   29983 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 00:56:39.418088   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:56:39.418101   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem -> /usr/share/ca-certificates/11259.pem
	I0717 00:56:39.418113   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> /usr/share/ca-certificates/112592.pem
	I0717 00:56:39.418636   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 00:56:39.444603   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 00:56:39.468830   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 00:56:39.492862   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 00:56:39.516126   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 00:56:39.539010   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 00:56:39.562601   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 00:56:39.587034   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 00:56:39.610364   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 00:56:39.633229   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 00:56:39.655960   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 00:56:39.679007   29983 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 00:56:39.695395   29983 ssh_runner.go:195] Run: openssl version
	I0717 00:56:39.701087   29983 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 00:56:39.712227   29983 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 00:56:39.716576   29983 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 00:56:39.716620   29983 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 00:56:39.722207   29983 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 00:56:39.731284   29983 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 00:56:39.741551   29983 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:56:39.745990   29983 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:56:39.746033   29983 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:56:39.752288   29983 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 00:56:39.761612   29983 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 00:56:39.772522   29983 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 00:56:39.798690   29983 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 00:56:39.798786   29983 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 00:56:39.838893   29983 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
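The symlink names used above (3ec20f2e.0, b5213941.0, 51391683.0) are the OpenSSL subject-hash values produced by the `openssl x509 -hash` calls; OpenSSL-based clients locate a CA in /etc/ssl/certs by that hash. For example (illustrative re-run of the hash computation):

    # Print the subject hash that determines the /etc/ssl/certs/<hash>.0 link name.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941   (expected to match the b5213941.0 symlink created above)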
	I0717 00:56:39.852506   29983 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 00:56:39.869058   29983 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 00:56:39.877232   29983 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 00:56:39.887825   29983 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 00:56:39.917174   29983 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 00:56:39.931385   29983 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 00:56:40.012309   29983 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
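The -checkend 86400 flag makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how the lines above verify each control-plane certificate is still valid for at least a day. A standalone example (illustrative, path taken from the run above):

    # Exit status 0 means the cert will still be valid 24h from now.
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "still valid for at least 24h" || echo "expires within 24h"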
	I0717 00:56:40.031972   29983 kubeadm.go:392] StartCluster: {Name:ha-029113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-029113 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.48 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:56:40.032091   29983 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 00:56:40.032163   29983 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 00:56:40.312233   29983 cri.go:89] found id: "68f837469c555571a915d20ae768d0ef5c7c7dbd1860e545e596fe6c20674da3"
	I0717 00:56:40.312257   29983 cri.go:89] found id: "f5a9880ef5b625bad2f5157bf22504ce6e66f5f00d6c08e82ff184c60e4597df"
	I0717 00:56:40.312263   29983 cri.go:89] found id: "b3e15314572524bc8ab46c72e1e61c148971453ca54384e37efa2a758b66e153"
	I0717 00:56:40.312268   29983 cri.go:89] found id: "708012203a1a0c288ad05e4f57829807ab3a232d3085326c1ba8459d2a3964fc"
	I0717 00:56:40.312272   29983 cri.go:89] found id: "0f3b600dde6603420f9edd85561f798c02ecea709368a2a6b922a3e812198caa"
	I0717 00:56:40.312276   29983 cri.go:89] found id: "14ce89e605287179bb1c9551273ee5355577c744e43f9e9eee59d6939e47680e"
	I0717 00:56:40.312280   29983 cri.go:89] found id: "21b3cbbc53732e70a6bb66be9909790395a4901c4730eea9ec8b9349fed80909"
	I0717 00:56:40.312283   29983 cri.go:89] found id: "535a2b743f28f9760f7341aeb36dd3506cf26bdec15966f0498cbd52b73c8d11"
	I0717 00:56:40.312287   29983 cri.go:89] found id: "af1a2d97ac6f8ec07b35a9b5d767af997cb8270ce3b4e681cba5557ed7119c85"
	I0717 00:56:40.312295   29983 cri.go:89] found id: "8ad50613626477643d3e0c2f0a01a20d0cc987aa6e58083bbf3993d41f97acd0"
	I0717 00:56:40.312300   29983 cri.go:89] found id: ""
	I0717 00:56:40.312349   29983 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 17 00:59:08 ha-029113 crio[3725]: time="2024-07-17 00:59:08.435103480Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d684652-ce00-400f-959b-09f88f122824 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:59:08 ha-029113 crio[3725]: time="2024-07-17 00:59:08.435554477Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7af51385090a4705d923df398d40d5eb591f0facb435053b22a9ac19fd2c5d77,PodSandboxId:aeeb4918aaf739a0e238dee07c51549c82da07956754993f90b9cdf1199390e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721177904445420635,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f04e5d-469e-4432-bd31-dbe772194f84,},Annotations:map[string]string{io.kubernetes.container.hash: af7d5ead,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbcd737b5deaa70a51d564310f7dcbeb03991c78f098fa674b8fd190f3bed835,PodSandboxId:aeeb4918aaf739a0e238dee07c51549c82da07956754993f90b9cdf1199390e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721177847467500435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f04e5d-469e-4432-bd31-dbe772194f84,},Annotations:map[string]string{io.kubernetes.container.hash: af7d5ead,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a549df20ed996b85dfd4a228aafa38755eb325c2b8eea9d194a2f39a43c6f997,PodSandboxId:d800735392b6631d97a7360f8d324cf0b91171e4b8b51d1647ffeebbbd651b09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721177843451437451,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72ab88d686ab6cc9d0420aba5d01a15,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d7f98a847746b8a00d932181e11b880825032ec553e6de48f6736a756f14df2,PodSandboxId:b05df37c6d755d6a253d08512055765f22d4ebfafe4fe6b0fa8d3590f7c384be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721177842445013762,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7b6e57bdff8504832bd80612dfbf7e,},Annotations:map[string]string{io.kubernetes.container.hash: c985e752,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0474fb323a70672e8f200cddcb9f7c9eb582f1c30aad6b8c69e99a1d86da9ca,PodSandboxId:d9df264711dcb7bb9b84b414a85826bc58ef13be7d716a88cc51ceec878d5b6d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721177833782339596,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pf5xn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c25795f2-3205-495b-83b1-e3afd79b87b5,},Annotations:map[string]string{io.kubernetes.container.hash: 78a43c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ea42e5105cf6ac5a5d0ba80d2db43d797d8b7ed3be4351ebdfbba6932081003,PodSandboxId:e21bd8195ae4f1c5d26e7e689a507f5065302c1692275061cdd6f99690de8b4b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721177816083064296,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca3a6e785cd2094ab5381e06e8b1758d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3dd104a40414a4c184f292042d98c34c4f2a081a0be74523d1c7da92597f487,PodSandboxId:12b80a1b9425668e9ab47d88685121897996822623edab194b9fb1ac50d00fc1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_RUNNING,CreatedAt:1721177800740417213,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8xg7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a612c634-49ef-4357-9b36-f5cc6604bdd7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf5f516,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:7861b2bf7bfee994fa80936b6d31f5ccf7960c4e0a63bd4312af282fca47083f,PodSandboxId:f692177694e3d7c7e6a87ec2b166f6cac56503094e7e2649eca372934922baf3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177800682850306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62m67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5029f9dc-6792-44d9-9296-ec5ab6d72274,},Annotations:map[string]string{io.kubernetes.container.hash: b4292bc8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94d7b223099b0137ef8cae73ccac8ee50e2f9c818656cb2c2674f8d8d1514fd0,PodSandboxId:b7cdeb95f25db3f6f2503c8abf76267b222caacd21d0b1a8e67e34b512229584,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177800522460835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdlls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4344b971-b979-42f8-8fa8-01f2d64bb51a,},Annotations:map[string]string{io.kubernetes.container.hash: 92aed468,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dcb54666e9d9613e1f68d0f760cc3e0c95b06e2fe3747f5a97c72ba0a238d16,PodSandboxId:d800735392b6631d97a7360f8d324cf0b91171e4b8b51d1647ffeebbbd651b09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721177800526688585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-cont
roller-manager-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72ab88d686ab6cc9d0420aba5d01a15,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4effa58e46e219c34739e40820dbe2178f075c45e540541619e339690fa06184,PodSandboxId:b05df37c6d755d6a253d08512055765f22d4ebfafe4fe6b0fa8d3590f7c384be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721177800508238100,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-0291
13,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7b6e57bdff8504832bd80612dfbf7e,},Annotations:map[string]string{io.kubernetes.container.hash: c985e752,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:900f665d54096f7765fdf4465f1760855d73180627d403518f43e27d98beda89,PodSandboxId:bec07272b68700204eb042d072de745e98b89e96c1ca645a5926e6ddb134986e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721177800296153947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg2kp,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: db9243f4-bcc0-406a-a8f2-ccdbc00f6341,},Annotations:map[string]string{io.kubernetes.container.hash: bf5cb09d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7221f2622be960846131245cf8a060743228bd2fb4267445a85ad2461e84f042,PodSandboxId:30c1fcbc2ced4efc64c53697813def8a44ad5787e4d0059a9098e5a0ee6315d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721177800493353171,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: dfaf2cf7acfa42b80c634ab40c7656b2,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b25c7626884123d45ed391ba1f319608a2ece44fa3c23574de75c74d462299d8,PodSandboxId:664b36c878574a28b197c9c401e29493c78b7a534fc931c89a99b95559ad0030,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721177800219412908,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b30623d3b177a2ad33eea05d973c32ae,},Anno
tations:map[string]string{io.kubernetes.container.hash: 237bdd65,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4870ffc6ba7cf7b50ae09cfb1b393746f7a5152c53089614e9b07b30aee219,PodSandboxId:a45c7f17109af295fd8afd8d3d7ac1b2d54517db5ca206ff393a5b9cc8c7cadb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721177307303963678,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pf5xn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c25795f2-3205-495b-83b1-e3afd79b87b5,},Annota
tions:map[string]string{io.kubernetes.container.hash: 78a43c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3b600dde6603420f9edd85561f798c02ecea709368a2a6b922a3e812198caa,PodSandboxId:f8a5889bb1d2bc6fc103eb2de48515a8de335a3970107dc7e37e0c22d7a122a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721177077742775453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdlls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4344b971-b979-42f8-8fa8-01f2d64bb51a,},Annotations:map[string]string{io.kuber
netes.container.hash: 92aed468,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:708012203a1a0c288ad05e4f57829807ab3a232d3085326c1ba8459d2a3964fc,PodSandboxId:9323719ef65477afea1f0946bd6a2c1e18bb115e22dd9402ce719955b37f0450,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721177077777282664,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62m67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5029f9dc-6792-44d9-9296-ec5ab6d72274,},Annotations:map[string]string{io.kubernetes.container.hash: b4292bc8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ce89e605287179bb1c9551273ee5355577c744e43f9e9eee59d6939e47680e,PodSandboxId:a30304f1d93beec21b943e8d0bdda82fd14ec8ef078fe74dc48282c421a8da13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_EXITED,CreatedAt:1721177065689995452,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8xg7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a612c634-49ef-4357-9b36-f5cc6604bdd7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf5f516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b3cbbc53732e70a6bb66be9909790395a4901c4730eea9ec8b9349fed80909,PodSandboxId:9fc93d7901e92fce9ee6a04a1927e3489dc770d57d9fd38dac899f3ab057cf7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721177060624682903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg2kp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9243f4-bcc0-406a-a8f2-ccdbc00f6341,},Annotations:map[string]string{io.kubernetes.container.hash: bf5cb09d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535a2b743f28f9760f7341aeb36dd3506cf26bdec15966f0498cbd52b73c8d11,PodSandboxId:9dca109899a3fdfc61bdea7b81459635bde9c71670c2636ee3f9b16cec6a2bcb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062
788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721177041034280990,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b30623d3b177a2ad33eea05d973c32ae,},Annotations:map[string]string{io.kubernetes.container.hash: 237bdd65,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af1a2d97ac6f8ec07b35a9b5d767af997cb8270ce3b4e681cba5557ed7119c85,PodSandboxId:5eb5a4397caa30b268143518f9f2a1880ae38ebd81aa2e5c40e7883a1c9c49b3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b94
0,State:CONTAINER_EXITED,CreatedAt:1721177040986220008,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfaf2cf7acfa42b80c634ab40c7656b2,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d684652-ce00-400f-959b-09f88f122824 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:59:08 ha-029113 crio[3725]: time="2024-07-17 00:59:08.487564342Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=94355880-a4d6-416b-ac46-87a5455fc6b7 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:59:08 ha-029113 crio[3725]: time="2024-07-17 00:59:08.487716423Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=94355880-a4d6-416b-ac46-87a5455fc6b7 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:59:08 ha-029113 crio[3725]: time="2024-07-17 00:59:08.494063013Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9803342f-5bba-456e-887b-45b4ac53e76a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:59:08 ha-029113 crio[3725]: time="2024-07-17 00:59:08.494840752Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721177948494773092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9803342f-5bba-456e-887b-45b4ac53e76a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:59:08 ha-029113 crio[3725]: time="2024-07-17 00:59:08.495627379Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1fbe7285-e5a8-4ee9-b7a6-0a0e83a5dbca name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:59:08 ha-029113 crio[3725]: time="2024-07-17 00:59:08.495680207Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1fbe7285-e5a8-4ee9-b7a6-0a0e83a5dbca name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:59:08 ha-029113 crio[3725]: time="2024-07-17 00:59:08.496124753Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7af51385090a4705d923df398d40d5eb591f0facb435053b22a9ac19fd2c5d77,PodSandboxId:aeeb4918aaf739a0e238dee07c51549c82da07956754993f90b9cdf1199390e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721177904445420635,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f04e5d-469e-4432-bd31-dbe772194f84,},Annotations:map[string]string{io.kubernetes.container.hash: af7d5ead,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbcd737b5deaa70a51d564310f7dcbeb03991c78f098fa674b8fd190f3bed835,PodSandboxId:aeeb4918aaf739a0e238dee07c51549c82da07956754993f90b9cdf1199390e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721177847467500435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f04e5d-469e-4432-bd31-dbe772194f84,},Annotations:map[string]string{io.kubernetes.container.hash: af7d5ead,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a549df20ed996b85dfd4a228aafa38755eb325c2b8eea9d194a2f39a43c6f997,PodSandboxId:d800735392b6631d97a7360f8d324cf0b91171e4b8b51d1647ffeebbbd651b09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721177843451437451,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72ab88d686ab6cc9d0420aba5d01a15,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d7f98a847746b8a00d932181e11b880825032ec553e6de48f6736a756f14df2,PodSandboxId:b05df37c6d755d6a253d08512055765f22d4ebfafe4fe6b0fa8d3590f7c384be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721177842445013762,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7b6e57bdff8504832bd80612dfbf7e,},Annotations:map[string]string{io.kubernetes.container.hash: c985e752,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0474fb323a70672e8f200cddcb9f7c9eb582f1c30aad6b8c69e99a1d86da9ca,PodSandboxId:d9df264711dcb7bb9b84b414a85826bc58ef13be7d716a88cc51ceec878d5b6d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721177833782339596,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pf5xn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c25795f2-3205-495b-83b1-e3afd79b87b5,},Annotations:map[string]string{io.kubernetes.container.hash: 78a43c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ea42e5105cf6ac5a5d0ba80d2db43d797d8b7ed3be4351ebdfbba6932081003,PodSandboxId:e21bd8195ae4f1c5d26e7e689a507f5065302c1692275061cdd6f99690de8b4b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721177816083064296,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca3a6e785cd2094ab5381e06e8b1758d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3dd104a40414a4c184f292042d98c34c4f2a081a0be74523d1c7da92597f487,PodSandboxId:12b80a1b9425668e9ab47d88685121897996822623edab194b9fb1ac50d00fc1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_RUNNING,CreatedAt:1721177800740417213,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8xg7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a612c634-49ef-4357-9b36-f5cc6604bdd7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf5f516,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:7861b2bf7bfee994fa80936b6d31f5ccf7960c4e0a63bd4312af282fca47083f,PodSandboxId:f692177694e3d7c7e6a87ec2b166f6cac56503094e7e2649eca372934922baf3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177800682850306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62m67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5029f9dc-6792-44d9-9296-ec5ab6d72274,},Annotations:map[string]string{io.kubernetes.container.hash: b4292bc8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94d7b223099b0137ef8cae73ccac8ee50e2f9c818656cb2c2674f8d8d1514fd0,PodSandboxId:b7cdeb95f25db3f6f2503c8abf76267b222caacd21d0b1a8e67e34b512229584,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177800522460835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdlls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4344b971-b979-42f8-8fa8-01f2d64bb51a,},Annotations:map[string]string{io.kubernetes.container.hash: 92aed468,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dcb54666e9d9613e1f68d0f760cc3e0c95b06e2fe3747f5a97c72ba0a238d16,PodSandboxId:d800735392b6631d97a7360f8d324cf0b91171e4b8b51d1647ffeebbbd651b09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721177800526688585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-cont
roller-manager-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72ab88d686ab6cc9d0420aba5d01a15,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4effa58e46e219c34739e40820dbe2178f075c45e540541619e339690fa06184,PodSandboxId:b05df37c6d755d6a253d08512055765f22d4ebfafe4fe6b0fa8d3590f7c384be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721177800508238100,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-0291
13,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7b6e57bdff8504832bd80612dfbf7e,},Annotations:map[string]string{io.kubernetes.container.hash: c985e752,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:900f665d54096f7765fdf4465f1760855d73180627d403518f43e27d98beda89,PodSandboxId:bec07272b68700204eb042d072de745e98b89e96c1ca645a5926e6ddb134986e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721177800296153947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg2kp,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: db9243f4-bcc0-406a-a8f2-ccdbc00f6341,},Annotations:map[string]string{io.kubernetes.container.hash: bf5cb09d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7221f2622be960846131245cf8a060743228bd2fb4267445a85ad2461e84f042,PodSandboxId:30c1fcbc2ced4efc64c53697813def8a44ad5787e4d0059a9098e5a0ee6315d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721177800493353171,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: dfaf2cf7acfa42b80c634ab40c7656b2,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b25c7626884123d45ed391ba1f319608a2ece44fa3c23574de75c74d462299d8,PodSandboxId:664b36c878574a28b197c9c401e29493c78b7a534fc931c89a99b95559ad0030,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721177800219412908,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b30623d3b177a2ad33eea05d973c32ae,},Anno
tations:map[string]string{io.kubernetes.container.hash: 237bdd65,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4870ffc6ba7cf7b50ae09cfb1b393746f7a5152c53089614e9b07b30aee219,PodSandboxId:a45c7f17109af295fd8afd8d3d7ac1b2d54517db5ca206ff393a5b9cc8c7cadb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721177307303963678,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pf5xn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c25795f2-3205-495b-83b1-e3afd79b87b5,},Annota
tions:map[string]string{io.kubernetes.container.hash: 78a43c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3b600dde6603420f9edd85561f798c02ecea709368a2a6b922a3e812198caa,PodSandboxId:f8a5889bb1d2bc6fc103eb2de48515a8de335a3970107dc7e37e0c22d7a122a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721177077742775453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdlls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4344b971-b979-42f8-8fa8-01f2d64bb51a,},Annotations:map[string]string{io.kuber
netes.container.hash: 92aed468,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:708012203a1a0c288ad05e4f57829807ab3a232d3085326c1ba8459d2a3964fc,PodSandboxId:9323719ef65477afea1f0946bd6a2c1e18bb115e22dd9402ce719955b37f0450,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721177077777282664,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62m67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5029f9dc-6792-44d9-9296-ec5ab6d72274,},Annotations:map[string]string{io.kubernetes.container.hash: b4292bc8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ce89e605287179bb1c9551273ee5355577c744e43f9e9eee59d6939e47680e,PodSandboxId:a30304f1d93beec21b943e8d0bdda82fd14ec8ef078fe74dc48282c421a8da13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_EXITED,CreatedAt:1721177065689995452,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8xg7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a612c634-49ef-4357-9b36-f5cc6604bdd7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf5f516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b3cbbc53732e70a6bb66be9909790395a4901c4730eea9ec8b9349fed80909,PodSandboxId:9fc93d7901e92fce9ee6a04a1927e3489dc770d57d9fd38dac899f3ab057cf7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721177060624682903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg2kp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9243f4-bcc0-406a-a8f2-ccdbc00f6341,},Annotations:map[string]string{io.kubernetes.container.hash: bf5cb09d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535a2b743f28f9760f7341aeb36dd3506cf26bdec15966f0498cbd52b73c8d11,PodSandboxId:9dca109899a3fdfc61bdea7b81459635bde9c71670c2636ee3f9b16cec6a2bcb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062
788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721177041034280990,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b30623d3b177a2ad33eea05d973c32ae,},Annotations:map[string]string{io.kubernetes.container.hash: 237bdd65,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af1a2d97ac6f8ec07b35a9b5d767af997cb8270ce3b4e681cba5557ed7119c85,PodSandboxId:5eb5a4397caa30b268143518f9f2a1880ae38ebd81aa2e5c40e7883a1c9c49b3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b94
0,State:CONTAINER_EXITED,CreatedAt:1721177040986220008,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfaf2cf7acfa42b80c634ab40c7656b2,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1fbe7285-e5a8-4ee9-b7a6-0a0e83a5dbca name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:59:08 ha-029113 crio[3725]: time="2024-07-17 00:59:08.561442435Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b274d3e2-a667-464e-b00f-905e406532fc name=/runtime.v1.RuntimeService/Version
	Jul 17 00:59:08 ha-029113 crio[3725]: time="2024-07-17 00:59:08.561518690Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b274d3e2-a667-464e-b00f-905e406532fc name=/runtime.v1.RuntimeService/Version
	Jul 17 00:59:08 ha-029113 crio[3725]: time="2024-07-17 00:59:08.564096303Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5ac1c113-ff2f-4abf-878b-53d06f4dfc78 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:59:08 ha-029113 crio[3725]: time="2024-07-17 00:59:08.564660487Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721177948564634498,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ac1c113-ff2f-4abf-878b-53d06f4dfc78 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:59:08 ha-029113 crio[3725]: time="2024-07-17 00:59:08.565383057Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6cc8604a-8643-4b2a-9b4b-4c059ed70a94 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:59:08 ha-029113 crio[3725]: time="2024-07-17 00:59:08.565461490Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6cc8604a-8643-4b2a-9b4b-4c059ed70a94 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:59:08 ha-029113 crio[3725]: time="2024-07-17 00:59:08.566421263Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7af51385090a4705d923df398d40d5eb591f0facb435053b22a9ac19fd2c5d77,PodSandboxId:aeeb4918aaf739a0e238dee07c51549c82da07956754993f90b9cdf1199390e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721177904445420635,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f04e5d-469e-4432-bd31-dbe772194f84,},Annotations:map[string]string{io.kubernetes.container.hash: af7d5ead,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbcd737b5deaa70a51d564310f7dcbeb03991c78f098fa674b8fd190f3bed835,PodSandboxId:aeeb4918aaf739a0e238dee07c51549c82da07956754993f90b9cdf1199390e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721177847467500435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f04e5d-469e-4432-bd31-dbe772194f84,},Annotations:map[string]string{io.kubernetes.container.hash: af7d5ead,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a549df20ed996b85dfd4a228aafa38755eb325c2b8eea9d194a2f39a43c6f997,PodSandboxId:d800735392b6631d97a7360f8d324cf0b91171e4b8b51d1647ffeebbbd651b09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721177843451437451,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72ab88d686ab6cc9d0420aba5d01a15,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d7f98a847746b8a00d932181e11b880825032ec553e6de48f6736a756f14df2,PodSandboxId:b05df37c6d755d6a253d08512055765f22d4ebfafe4fe6b0fa8d3590f7c384be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721177842445013762,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7b6e57bdff8504832bd80612dfbf7e,},Annotations:map[string]string{io.kubernetes.container.hash: c985e752,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0474fb323a70672e8f200cddcb9f7c9eb582f1c30aad6b8c69e99a1d86da9ca,PodSandboxId:d9df264711dcb7bb9b84b414a85826bc58ef13be7d716a88cc51ceec878d5b6d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721177833782339596,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pf5xn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c25795f2-3205-495b-83b1-e3afd79b87b5,},Annotations:map[string]string{io.kubernetes.container.hash: 78a43c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ea42e5105cf6ac5a5d0ba80d2db43d797d8b7ed3be4351ebdfbba6932081003,PodSandboxId:e21bd8195ae4f1c5d26e7e689a507f5065302c1692275061cdd6f99690de8b4b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721177816083064296,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca3a6e785cd2094ab5381e06e8b1758d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3dd104a40414a4c184f292042d98c34c4f2a081a0be74523d1c7da92597f487,PodSandboxId:12b80a1b9425668e9ab47d88685121897996822623edab194b9fb1ac50d00fc1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_RUNNING,CreatedAt:1721177800740417213,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8xg7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a612c634-49ef-4357-9b36-f5cc6604bdd7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf5f516,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:7861b2bf7bfee994fa80936b6d31f5ccf7960c4e0a63bd4312af282fca47083f,PodSandboxId:f692177694e3d7c7e6a87ec2b166f6cac56503094e7e2649eca372934922baf3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177800682850306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62m67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5029f9dc-6792-44d9-9296-ec5ab6d72274,},Annotations:map[string]string{io.kubernetes.container.hash: b4292bc8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94d7b223099b0137ef8cae73ccac8ee50e2f9c818656cb2c2674f8d8d1514fd0,PodSandboxId:b7cdeb95f25db3f6f2503c8abf76267b222caacd21d0b1a8e67e34b512229584,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177800522460835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdlls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4344b971-b979-42f8-8fa8-01f2d64bb51a,},Annotations:map[string]string{io.kubernetes.container.hash: 92aed468,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dcb54666e9d9613e1f68d0f760cc3e0c95b06e2fe3747f5a97c72ba0a238d16,PodSandboxId:d800735392b6631d97a7360f8d324cf0b91171e4b8b51d1647ffeebbbd651b09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721177800526688585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-cont
roller-manager-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72ab88d686ab6cc9d0420aba5d01a15,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4effa58e46e219c34739e40820dbe2178f075c45e540541619e339690fa06184,PodSandboxId:b05df37c6d755d6a253d08512055765f22d4ebfafe4fe6b0fa8d3590f7c384be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721177800508238100,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-0291
13,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7b6e57bdff8504832bd80612dfbf7e,},Annotations:map[string]string{io.kubernetes.container.hash: c985e752,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:900f665d54096f7765fdf4465f1760855d73180627d403518f43e27d98beda89,PodSandboxId:bec07272b68700204eb042d072de745e98b89e96c1ca645a5926e6ddb134986e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721177800296153947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg2kp,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: db9243f4-bcc0-406a-a8f2-ccdbc00f6341,},Annotations:map[string]string{io.kubernetes.container.hash: bf5cb09d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7221f2622be960846131245cf8a060743228bd2fb4267445a85ad2461e84f042,PodSandboxId:30c1fcbc2ced4efc64c53697813def8a44ad5787e4d0059a9098e5a0ee6315d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721177800493353171,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: dfaf2cf7acfa42b80c634ab40c7656b2,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b25c7626884123d45ed391ba1f319608a2ece44fa3c23574de75c74d462299d8,PodSandboxId:664b36c878574a28b197c9c401e29493c78b7a534fc931c89a99b95559ad0030,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721177800219412908,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b30623d3b177a2ad33eea05d973c32ae,},Anno
tations:map[string]string{io.kubernetes.container.hash: 237bdd65,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4870ffc6ba7cf7b50ae09cfb1b393746f7a5152c53089614e9b07b30aee219,PodSandboxId:a45c7f17109af295fd8afd8d3d7ac1b2d54517db5ca206ff393a5b9cc8c7cadb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721177307303963678,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pf5xn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c25795f2-3205-495b-83b1-e3afd79b87b5,},Annota
tions:map[string]string{io.kubernetes.container.hash: 78a43c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3b600dde6603420f9edd85561f798c02ecea709368a2a6b922a3e812198caa,PodSandboxId:f8a5889bb1d2bc6fc103eb2de48515a8de335a3970107dc7e37e0c22d7a122a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721177077742775453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdlls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4344b971-b979-42f8-8fa8-01f2d64bb51a,},Annotations:map[string]string{io.kuber
netes.container.hash: 92aed468,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:708012203a1a0c288ad05e4f57829807ab3a232d3085326c1ba8459d2a3964fc,PodSandboxId:9323719ef65477afea1f0946bd6a2c1e18bb115e22dd9402ce719955b37f0450,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721177077777282664,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62m67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5029f9dc-6792-44d9-9296-ec5ab6d72274,},Annotations:map[string]string{io.kubernetes.container.hash: b4292bc8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ce89e605287179bb1c9551273ee5355577c744e43f9e9eee59d6939e47680e,PodSandboxId:a30304f1d93beec21b943e8d0bdda82fd14ec8ef078fe74dc48282c421a8da13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_EXITED,CreatedAt:1721177065689995452,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8xg7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a612c634-49ef-4357-9b36-f5cc6604bdd7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf5f516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b3cbbc53732e70a6bb66be9909790395a4901c4730eea9ec8b9349fed80909,PodSandboxId:9fc93d7901e92fce9ee6a04a1927e3489dc770d57d9fd38dac899f3ab057cf7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721177060624682903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg2kp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9243f4-bcc0-406a-a8f2-ccdbc00f6341,},Annotations:map[string]string{io.kubernetes.container.hash: bf5cb09d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535a2b743f28f9760f7341aeb36dd3506cf26bdec15966f0498cbd52b73c8d11,PodSandboxId:9dca109899a3fdfc61bdea7b81459635bde9c71670c2636ee3f9b16cec6a2bcb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062
788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721177041034280990,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b30623d3b177a2ad33eea05d973c32ae,},Annotations:map[string]string{io.kubernetes.container.hash: 237bdd65,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af1a2d97ac6f8ec07b35a9b5d767af997cb8270ce3b4e681cba5557ed7119c85,PodSandboxId:5eb5a4397caa30b268143518f9f2a1880ae38ebd81aa2e5c40e7883a1c9c49b3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b94
0,State:CONTAINER_EXITED,CreatedAt:1721177040986220008,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfaf2cf7acfa42b80c634ab40c7656b2,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6cc8604a-8643-4b2a-9b4b-4c059ed70a94 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:59:08 ha-029113 crio[3725]: time="2024-07-17 00:59:08.596477887Z" level=debug msg="Request: &ListImagesRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=1fe02106-7ead-4692-91bd-c538640ef277 name=/runtime.v1.ImageService/ListImages
	Jul 17 00:59:08 ha-029113 crio[3725]: time="2024-07-17 00:59:08.597107071Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,RepoTags:[registry.k8s.io/kube-apiserver:v1.30.2],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816 registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d],Size_:117609954,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,RepoTags:[registry.k8s.io/kube-controller-manager:v1.30.2],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b],Size_:112194888,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{
Id:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,RepoTags:[registry.k8s.io/kube-scheduler:v1.30.2],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b],Size_:63051080,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,RepoTags:[registry.k8s.io/kube-proxy:v1.30.2],RepoDigests:[registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961 registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec],Size_:85953433,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 re
gistry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10],Size_:750414,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,Pinned:true,},&Image{Id:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,RepoTags:[registry.k8s.io/etcd:3.5.12-0],RepoDigests:[registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62 registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b],Size_:150779692,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870],Size_:61245718,Uid:nil,Username:nonroot,Spec:nil,Pinned:false,},&Image{Id:6e38f40d628db3002f5617342c8872c935de530d8
67d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,RepoTags:[docker.io/kindest/kindnetd:v20240513-cd2ac642],RepoDigests:[docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266 docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8],Size_:65908273,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,RepoTags:[ghcr.io/kube-vip/kube-vip:v0.8.0],RepoDigests:[ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f ghcr.io/kube-vip/kub
e-vip@sha256:7eb725aff32fd4b31484f6e8e44b538f8403ebc8bd4218ea0ec28218682afff1],Size_:49570267,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,RepoTags:[docker.io/kindest/kindnetd:v20240715-f6ad1f6e],RepoDigests:[docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381 docker.io/kindest/kindnetd@sha256:d61a2b3d0a49f21f2556f20ae629282e5b4076940972ac659d8cda1cdc6f9a20],Size_:87166004,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,Pinned:false,},},}" file="otel-collector/interceptors.go:74" id=1fe02106-7ead-4692-91bd-c538640ef277 name=/runtim
e.v1.ImageService/ListImages
	Jul 17 00:59:08 ha-029113 crio[3725]: time="2024-07-17 00:59:08.637520532Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=866251df-f164-4b0c-9e64-0cabd630cd20 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:59:08 ha-029113 crio[3725]: time="2024-07-17 00:59:08.637611464Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=866251df-f164-4b0c-9e64-0cabd630cd20 name=/runtime.v1.RuntimeService/Version
	Jul 17 00:59:08 ha-029113 crio[3725]: time="2024-07-17 00:59:08.638876998Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9577bbfc-310e-4282-85e5-2b8b17ecd20f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:59:08 ha-029113 crio[3725]: time="2024-07-17 00:59:08.639548499Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721177948639513834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9577bbfc-310e-4282-85e5-2b8b17ecd20f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 00:59:08 ha-029113 crio[3725]: time="2024-07-17 00:59:08.640291778Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=40fa6976-28c7-4807-a1b6-db4b71b12c0b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:59:08 ha-029113 crio[3725]: time="2024-07-17 00:59:08.640521300Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=40fa6976-28c7-4807-a1b6-db4b71b12c0b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 00:59:08 ha-029113 crio[3725]: time="2024-07-17 00:59:08.641308589Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7af51385090a4705d923df398d40d5eb591f0facb435053b22a9ac19fd2c5d77,PodSandboxId:aeeb4918aaf739a0e238dee07c51549c82da07956754993f90b9cdf1199390e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721177904445420635,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f04e5d-469e-4432-bd31-dbe772194f84,},Annotations:map[string]string{io.kubernetes.container.hash: af7d5ead,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbcd737b5deaa70a51d564310f7dcbeb03991c78f098fa674b8fd190f3bed835,PodSandboxId:aeeb4918aaf739a0e238dee07c51549c82da07956754993f90b9cdf1199390e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721177847467500435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f04e5d-469e-4432-bd31-dbe772194f84,},Annotations:map[string]string{io.kubernetes.container.hash: af7d5ead,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a549df20ed996b85dfd4a228aafa38755eb325c2b8eea9d194a2f39a43c6f997,PodSandboxId:d800735392b6631d97a7360f8d324cf0b91171e4b8b51d1647ffeebbbd651b09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721177843451437451,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72ab88d686ab6cc9d0420aba5d01a15,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d7f98a847746b8a00d932181e11b880825032ec553e6de48f6736a756f14df2,PodSandboxId:b05df37c6d755d6a253d08512055765f22d4ebfafe4fe6b0fa8d3590f7c384be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721177842445013762,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7b6e57bdff8504832bd80612dfbf7e,},Annotations:map[string]string{io.kubernetes.container.hash: c985e752,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0474fb323a70672e8f200cddcb9f7c9eb582f1c30aad6b8c69e99a1d86da9ca,PodSandboxId:d9df264711dcb7bb9b84b414a85826bc58ef13be7d716a88cc51ceec878d5b6d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721177833782339596,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pf5xn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c25795f2-3205-495b-83b1-e3afd79b87b5,},Annotations:map[string]string{io.kubernetes.container.hash: 78a43c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ea42e5105cf6ac5a5d0ba80d2db43d797d8b7ed3be4351ebdfbba6932081003,PodSandboxId:e21bd8195ae4f1c5d26e7e689a507f5065302c1692275061cdd6f99690de8b4b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721177816083064296,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca3a6e785cd2094ab5381e06e8b1758d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3dd104a40414a4c184f292042d98c34c4f2a081a0be74523d1c7da92597f487,PodSandboxId:12b80a1b9425668e9ab47d88685121897996822623edab194b9fb1ac50d00fc1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_RUNNING,CreatedAt:1721177800740417213,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8xg7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a612c634-49ef-4357-9b36-f5cc6604bdd7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf5f516,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:7861b2bf7bfee994fa80936b6d31f5ccf7960c4e0a63bd4312af282fca47083f,PodSandboxId:f692177694e3d7c7e6a87ec2b166f6cac56503094e7e2649eca372934922baf3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177800682850306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62m67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5029f9dc-6792-44d9-9296-ec5ab6d72274,},Annotations:map[string]string{io.kubernetes.container.hash: b4292bc8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94d7b223099b0137ef8cae73ccac8ee50e2f9c818656cb2c2674f8d8d1514fd0,PodSandboxId:b7cdeb95f25db3f6f2503c8abf76267b222caacd21d0b1a8e67e34b512229584,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177800522460835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdlls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4344b971-b979-42f8-8fa8-01f2d64bb51a,},Annotations:map[string]string{io.kubernetes.container.hash: 92aed468,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dcb54666e9d9613e1f68d0f760cc3e0c95b06e2fe3747f5a97c72ba0a238d16,PodSandboxId:d800735392b6631d97a7360f8d324cf0b91171e4b8b51d1647ffeebbbd651b09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721177800526688585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-cont
roller-manager-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72ab88d686ab6cc9d0420aba5d01a15,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4effa58e46e219c34739e40820dbe2178f075c45e540541619e339690fa06184,PodSandboxId:b05df37c6d755d6a253d08512055765f22d4ebfafe4fe6b0fa8d3590f7c384be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721177800508238100,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-0291
13,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7b6e57bdff8504832bd80612dfbf7e,},Annotations:map[string]string{io.kubernetes.container.hash: c985e752,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:900f665d54096f7765fdf4465f1760855d73180627d403518f43e27d98beda89,PodSandboxId:bec07272b68700204eb042d072de745e98b89e96c1ca645a5926e6ddb134986e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721177800296153947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg2kp,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: db9243f4-bcc0-406a-a8f2-ccdbc00f6341,},Annotations:map[string]string{io.kubernetes.container.hash: bf5cb09d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7221f2622be960846131245cf8a060743228bd2fb4267445a85ad2461e84f042,PodSandboxId:30c1fcbc2ced4efc64c53697813def8a44ad5787e4d0059a9098e5a0ee6315d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721177800493353171,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: dfaf2cf7acfa42b80c634ab40c7656b2,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b25c7626884123d45ed391ba1f319608a2ece44fa3c23574de75c74d462299d8,PodSandboxId:664b36c878574a28b197c9c401e29493c78b7a534fc931c89a99b95559ad0030,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721177800219412908,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b30623d3b177a2ad33eea05d973c32ae,},Anno
tations:map[string]string{io.kubernetes.container.hash: 237bdd65,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4870ffc6ba7cf7b50ae09cfb1b393746f7a5152c53089614e9b07b30aee219,PodSandboxId:a45c7f17109af295fd8afd8d3d7ac1b2d54517db5ca206ff393a5b9cc8c7cadb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721177307303963678,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pf5xn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c25795f2-3205-495b-83b1-e3afd79b87b5,},Annota
tions:map[string]string{io.kubernetes.container.hash: 78a43c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3b600dde6603420f9edd85561f798c02ecea709368a2a6b922a3e812198caa,PodSandboxId:f8a5889bb1d2bc6fc103eb2de48515a8de335a3970107dc7e37e0c22d7a122a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721177077742775453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdlls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4344b971-b979-42f8-8fa8-01f2d64bb51a,},Annotations:map[string]string{io.kuber
netes.container.hash: 92aed468,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:708012203a1a0c288ad05e4f57829807ab3a232d3085326c1ba8459d2a3964fc,PodSandboxId:9323719ef65477afea1f0946bd6a2c1e18bb115e22dd9402ce719955b37f0450,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721177077777282664,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62m67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5029f9dc-6792-44d9-9296-ec5ab6d72274,},Annotations:map[string]string{io.kubernetes.container.hash: b4292bc8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ce89e605287179bb1c9551273ee5355577c744e43f9e9eee59d6939e47680e,PodSandboxId:a30304f1d93beec21b943e8d0bdda82fd14ec8ef078fe74dc48282c421a8da13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_EXITED,CreatedAt:1721177065689995452,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8xg7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a612c634-49ef-4357-9b36-f5cc6604bdd7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf5f516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b3cbbc53732e70a6bb66be9909790395a4901c4730eea9ec8b9349fed80909,PodSandboxId:9fc93d7901e92fce9ee6a04a1927e3489dc770d57d9fd38dac899f3ab057cf7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721177060624682903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg2kp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9243f4-bcc0-406a-a8f2-ccdbc00f6341,},Annotations:map[string]string{io.kubernetes.container.hash: bf5cb09d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535a2b743f28f9760f7341aeb36dd3506cf26bdec15966f0498cbd52b73c8d11,PodSandboxId:9dca109899a3fdfc61bdea7b81459635bde9c71670c2636ee3f9b16cec6a2bcb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062
788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721177041034280990,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b30623d3b177a2ad33eea05d973c32ae,},Annotations:map[string]string{io.kubernetes.container.hash: 237bdd65,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af1a2d97ac6f8ec07b35a9b5d767af997cb8270ce3b4e681cba5557ed7119c85,PodSandboxId:5eb5a4397caa30b268143518f9f2a1880ae38ebd81aa2e5c40e7883a1c9c49b3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b94
0,State:CONTAINER_EXITED,CreatedAt:1721177040986220008,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfaf2cf7acfa42b80c634ab40c7656b2,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=40fa6976-28c7-4807-a1b6-db4b71b12c0b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7af51385090a4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      44 seconds ago       Running             storage-provisioner       4                   aeeb4918aaf73       storage-provisioner
	cbcd737b5deaa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   aeeb4918aaf73       storage-provisioner
	a549df20ed996       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      About a minute ago   Running             kube-controller-manager   2                   d800735392b66       kube-controller-manager-ha-029113
	6d7f98a847746       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      About a minute ago   Running             kube-apiserver            3                   b05df37c6d755       kube-apiserver-ha-029113
	d0474fb323a70       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   d9df264711dcb       busybox-fc5497c4f-pf5xn
	7ea42e5105cf6       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   e21bd8195ae4f       kube-vip-ha-029113
	d3dd104a40414       a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda                                      2 minutes ago        Running             kindnet-cni               1                   12b80a1b94256       kindnet-8xg7d
	7861b2bf7bfee       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   f692177694e3d       coredns-7db6d8ff4d-62m67
	9dcb54666e9d9       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      2 minutes ago        Exited              kube-controller-manager   1                   d800735392b66       kube-controller-manager-ha-029113
	94d7b223099b0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   b7cdeb95f25db       coredns-7db6d8ff4d-xdlls
	4effa58e46e21       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      2 minutes ago        Exited              kube-apiserver            2                   b05df37c6d755       kube-apiserver-ha-029113
	7221f2622be96       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      2 minutes ago        Running             kube-scheduler            1                   30c1fcbc2ced4       kube-scheduler-ha-029113
	900f665d54096       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      2 minutes ago        Running             kube-proxy                1                   bec07272b6870       kube-proxy-hg2kp
	b25c762688412       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   664b36c878574       etcd-ha-029113
	cf4870ffc6ba7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   a45c7f17109af       busybox-fc5497c4f-pf5xn
	708012203a1a0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   9323719ef6547       coredns-7db6d8ff4d-62m67
	0f3b600dde660       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   f8a5889bb1d2b       coredns-7db6d8ff4d-xdlls
	14ce89e605287       docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381    14 minutes ago       Exited              kindnet-cni               0                   a30304f1d93be       kindnet-8xg7d
	21b3cbbc53732       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      14 minutes ago       Exited              kube-proxy                0                   9fc93d7901e92       kube-proxy-hg2kp
	535a2b743f28f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      15 minutes ago       Exited              etcd                      0                   9dca109899a3f       etcd-ha-029113
	af1a2d97ac6f8       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      15 minutes ago       Exited              kube-scheduler            0                   5eb5a4397caa3       kube-scheduler-ha-029113
	
	
	==> coredns [0f3b600dde6603420f9edd85561f798c02ecea709368a2a6b922a3e812198caa] <==
	[INFO] 10.244.1.2:60685 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000209662s
	[INFO] 10.244.1.2:59157 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00911229s
	[INFO] 10.244.0.4:33726 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164542s
	[INFO] 10.244.0.4:35638 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107633s
	[INFO] 10.244.0.4:36083 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148234s
	[INFO] 10.244.0.4:49455 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000157722s
	[INFO] 10.244.2.2:43892 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122973s
	[INFO] 10.244.2.2:45729 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013946s
	[INFO] 10.244.0.4:55198 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100375s
	[INFO] 10.244.0.4:59468 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106412s
	[INFO] 10.244.0.4:37401 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124966s
	[INFO] 10.244.0.4:60799 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012109s
	[INFO] 10.244.2.2:34189 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127044s
	[INFO] 10.244.2.2:42164 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116232s
	[INFO] 10.244.2.2:45045 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090238s
	[INFO] 10.244.1.2:51035 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200282s
	[INFO] 10.244.1.2:55956 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000190607s
	[INFO] 10.244.1.2:54538 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000177763s
	[INFO] 10.244.0.4:33888 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013825s
	[INFO] 10.244.2.2:47245 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000251032s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [708012203a1a0c288ad05e4f57829807ab3a232d3085326c1ba8459d2a3964fc] <==
	[INFO] 10.244.0.4:59867 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000056093s
	[INFO] 10.244.0.4:34082 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00003321s
	[INFO] 10.244.2.2:43902 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001968463s
	[INFO] 10.244.2.2:54035 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000207692s
	[INFO] 10.244.2.2:33997 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001225386s
	[INFO] 10.244.2.2:45029 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109563s
	[INFO] 10.244.2.2:39017 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092433s
	[INFO] 10.244.2.2:54230 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169232s
	[INFO] 10.244.1.2:47885 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195059s
	[INFO] 10.244.1.2:52609 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101063s
	[INFO] 10.244.1.2:45870 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090685s
	[INFO] 10.244.1.2:54516 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000081368s
	[INFO] 10.244.2.2:33988 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080469s
	[INFO] 10.244.1.2:34772 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000287318s
	[INFO] 10.244.0.4:35803 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000085391s
	[INFO] 10.244.0.4:50190 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000162301s
	[INFO] 10.244.0.4:40910 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000130903s
	[INFO] 10.244.2.2:33875 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129913s
	[INFO] 10.244.2.2:51223 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000090521s
	[INFO] 10.244.2.2:58679 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000073592s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7861b2bf7bfee994fa80936b6d31f5ccf7960c4e0a63bd4312af282fca47083f] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59206->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59206->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [94d7b223099b0137ef8cae73ccac8ee50e2f9c818656cb2c2674f8d8d1514fd0] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:52300->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43352->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43352->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-029113
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-029113
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=ha-029113
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T00_44_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:44:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-029113
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:58:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:57:27 +0000   Wed, 17 Jul 2024 00:44:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:57:27 +0000   Wed, 17 Jul 2024 00:44:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:57:27 +0000   Wed, 17 Jul 2024 00:44:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:57:27 +0000   Wed, 17 Jul 2024 00:44:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.95
	  Hostname:    ha-029113
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a51546f0529f4ddaa3a150daaabbe791
	  System UUID:                a51546f0-529f-4dda-a3a1-50daaabbe791
	  Boot ID:                    644e2f47-3b52-421d-bf4d-394d43757773
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pf5xn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-62m67             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-xdlls             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-029113                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-8xg7d                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-029113             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-029113    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-hg2kp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-029113             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-029113                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 14m                  kube-proxy       
	  Normal   Starting                 105s                 kube-proxy       
	  Normal   NodeHasNoDiskPressure    15m                  kubelet          Node ha-029113 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  15m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  15m                  kubelet          Node ha-029113 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     15m                  kubelet          Node ha-029113 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m                  node-controller  Node ha-029113 event: Registered Node ha-029113 in Controller
	  Normal   NodeReady                14m                  kubelet          Node ha-029113 status is now: NodeReady
	  Normal   RegisteredNode           12m                  node-controller  Node ha-029113 event: Registered Node ha-029113 in Controller
	  Normal   RegisteredNode           10m                  node-controller  Node ha-029113 event: Registered Node ha-029113 in Controller
	  Warning  ContainerGCFailed        3m2s (x2 over 4m2s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           96s                  node-controller  Node ha-029113 event: Registered Node ha-029113 in Controller
	  Normal   RegisteredNode           92s                  node-controller  Node ha-029113 event: Registered Node ha-029113 in Controller
	  Normal   RegisteredNode           34s                  node-controller  Node ha-029113 event: Registered Node ha-029113 in Controller
	
	
	Name:               ha-029113-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-029113-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=ha-029113
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T00_46_39_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:46:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-029113-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:59:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:58:11 +0000   Wed, 17 Jul 2024 00:57:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:58:11 +0000   Wed, 17 Jul 2024 00:57:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:58:11 +0000   Wed, 17 Jul 2024 00:57:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:58:11 +0000   Wed, 17 Jul 2024 00:57:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.166
	  Hostname:    ha-029113-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 caba57241163431db23fb698d4481f00
	  System UUID:                caba5724-1163-431d-b23f-b698d4481f00
	  Boot ID:                    1016b632-988b-43a4-974f-76e3427c540b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-l4ctd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-029113-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-k7vzq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-029113-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-029113-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-2wz5p                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-029113-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-029113-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 98s                  kube-proxy       
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-029113-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-029113-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-029113-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                  node-controller  Node ha-029113-m02 event: Registered Node ha-029113-m02 in Controller
	  Normal  RegisteredNode           12m                  node-controller  Node ha-029113-m02 event: Registered Node ha-029113-m02 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-029113-m02 event: Registered Node ha-029113-m02 in Controller
	  Normal  NodeNotReady             8m47s                node-controller  Node ha-029113-m02 status is now: NodeNotReady
	  Normal  Starting                 2m7s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m6s (x8 over 2m7s)  kubelet          Node ha-029113-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s (x8 over 2m7s)  kubelet          Node ha-029113-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s (x7 over 2m7s)  kubelet          Node ha-029113-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           96s                  node-controller  Node ha-029113-m02 event: Registered Node ha-029113-m02 in Controller
	  Normal  RegisteredNode           92s                  node-controller  Node ha-029113-m02 event: Registered Node ha-029113-m02 in Controller
	  Normal  RegisteredNode           34s                  node-controller  Node ha-029113-m02 event: Registered Node ha-029113-m02 in Controller
	
	
	Name:               ha-029113-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-029113-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=ha-029113
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T00_47_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:47:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-029113-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:59:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:58:40 +0000   Wed, 17 Jul 2024 00:47:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:58:40 +0000   Wed, 17 Jul 2024 00:47:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:58:40 +0000   Wed, 17 Jul 2024 00:47:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:58:40 +0000   Wed, 17 Jul 2024 00:48:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    ha-029113-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2e1b2e5e3744938b38fb857e0123a96
	  System UUID:                d2e1b2e5-e374-4938-b38f-b857e0123a96
	  Boot ID:                    8d41bf49-c8bd-407e-a6e5-6b31dfa10e84
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-w8w7k                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-029113-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-k2jgh                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-029113-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-029113-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-pfdt9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-029113-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-029113-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 39s                kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-029113-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-029113-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-029113-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ha-029113-m03 event: Registered Node ha-029113-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-029113-m03 event: Registered Node ha-029113-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-029113-m03 event: Registered Node ha-029113-m03 in Controller
	  Normal   RegisteredNode           96s                node-controller  Node ha-029113-m03 event: Registered Node ha-029113-m03 in Controller
	  Normal   RegisteredNode           92s                node-controller  Node ha-029113-m03 event: Registered Node ha-029113-m03 in Controller
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  59s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  59s                kubelet          Node ha-029113-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s                kubelet          Node ha-029113-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s                kubelet          Node ha-029113-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 59s                kubelet          Node ha-029113-m03 has been rebooted, boot id: 8d41bf49-c8bd-407e-a6e5-6b31dfa10e84
	  Normal   RegisteredNode           34s                node-controller  Node ha-029113-m03 event: Registered Node ha-029113-m03 in Controller
	
	
	Name:               ha-029113-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-029113-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=ha-029113
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T00_49_04_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:49:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-029113-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:59:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:59:01 +0000   Wed, 17 Jul 2024 00:59:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:59:01 +0000   Wed, 17 Jul 2024 00:59:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:59:01 +0000   Wed, 17 Jul 2024 00:59:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:59:01 +0000   Wed, 17 Jul 2024 00:59:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.48
	  Hostname:    ha-029113-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6434efc175e64e719bbbb464b6a52834
	  System UUID:                6434efc1-75e6-4e71-9bbb-b464b6a52834
	  Boot ID:                    74c0a9ad-c535-431c-82ea-b31b0cc95fdd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8d2dk       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-m559l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 9m59s              kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-029113-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-029113-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-029113-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-029113-m04 event: Registered Node ha-029113-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-029113-m04 event: Registered Node ha-029113-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-029113-m04 event: Registered Node ha-029113-m04 in Controller
	  Normal   NodeReady                9m44s              kubelet          Node ha-029113-m04 status is now: NodeReady
	  Normal   RegisteredNode           96s                node-controller  Node ha-029113-m04 event: Registered Node ha-029113-m04 in Controller
	  Normal   RegisteredNode           92s                node-controller  Node ha-029113-m04 event: Registered Node ha-029113-m04 in Controller
	  Normal   NodeNotReady             55s                node-controller  Node ha-029113-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           34s                node-controller  Node ha-029113-m04 event: Registered Node ha-029113-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x3 over 8s)    kubelet          Node ha-029113-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x3 over 8s)    kubelet          Node ha-029113-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x3 over 8s)    kubelet          Node ha-029113-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s (x2 over 8s)    kubelet          Node ha-029113-m04 has been rebooted, boot id: 74c0a9ad-c535-431c-82ea-b31b0cc95fdd
	  Normal   NodeReady                8s (x2 over 8s)    kubelet          Node ha-029113-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +4.559408] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.079107] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.061657] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066531] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.165658] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.134006] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.290239] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.155347] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +4.001195] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +0.055206] kauditd_printk_skb: 158 callbacks suppressed
	[Jul17 00:44] kauditd_printk_skb: 74 callbacks suppressed
	[  +1.000074] systemd-fstab-generator[1348]: Ignoring "noauto" option for root device
	[  +6.568819] kauditd_printk_skb: 23 callbacks suppressed
	[ +12.108545] kauditd_printk_skb: 29 callbacks suppressed
	[Jul17 00:46] kauditd_printk_skb: 26 callbacks suppressed
	[Jul17 00:56] systemd-fstab-generator[3644]: Ignoring "noauto" option for root device
	[  +0.146852] systemd-fstab-generator[3656]: Ignoring "noauto" option for root device
	[  +0.167572] systemd-fstab-generator[3670]: Ignoring "noauto" option for root device
	[  +0.135587] systemd-fstab-generator[3682]: Ignoring "noauto" option for root device
	[  +0.292587] systemd-fstab-generator[3710]: Ignoring "noauto" option for root device
	[  +0.777452] systemd-fstab-generator[3810]: Ignoring "noauto" option for root device
	[ +16.891540] kauditd_printk_skb: 217 callbacks suppressed
	[Jul17 00:57] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [535a2b743f28f9760f7341aeb36dd3506cf26bdec15966f0498cbd52b73c8d11] <==
	{"level":"warn","ts":"2024-07-17T00:55:06.297665Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"643.905507ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" limit:500 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-07-17T00:55:06.303164Z","caller":"traceutil/trace.go:171","msg":"trace[810503239] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; }","duration":"649.416567ms","start":"2024-07-17T00:55:05.653736Z","end":"2024-07-17T00:55:06.303153Z","steps":["trace[810503239] 'agreement among raft nodes before linearized reading'  (duration: 643.925132ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:55:06.303202Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T00:55:05.653725Z","time spent":"649.468292ms","remote":"127.0.0.1:35806","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":0,"request content":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" limit:500 "}
	2024/07/17 00:55:06 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-17T00:55:06.321988Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":6455787737204793371,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-07-17T00:55:06.433985Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.95:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T00:55:06.43407Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.95:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-17T00:55:06.434372Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"a71e7bac075997","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-17T00:55:06.434746Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"326b6bc7c441ede5"}
	{"level":"info","ts":"2024-07-17T00:55:06.434888Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"326b6bc7c441ede5"}
	{"level":"info","ts":"2024-07-17T00:55:06.434941Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"326b6bc7c441ede5"}
	{"level":"info","ts":"2024-07-17T00:55:06.435016Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5"}
	{"level":"info","ts":"2024-07-17T00:55:06.435081Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5"}
	{"level":"info","ts":"2024-07-17T00:55:06.435171Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5"}
	{"level":"info","ts":"2024-07-17T00:55:06.435219Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"326b6bc7c441ede5"}
	{"level":"info","ts":"2024-07-17T00:55:06.435244Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"info","ts":"2024-07-17T00:55:06.435298Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"info","ts":"2024-07-17T00:55:06.435346Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"info","ts":"2024-07-17T00:55:06.435411Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a71e7bac075997","remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"info","ts":"2024-07-17T00:55:06.435462Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a71e7bac075997","remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"info","ts":"2024-07-17T00:55:06.435579Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a71e7bac075997","remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"info","ts":"2024-07-17T00:55:06.435608Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"info","ts":"2024-07-17T00:55:06.438237Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.95:2380"}
	{"level":"info","ts":"2024-07-17T00:55:06.438393Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.95:2380"}
	{"level":"info","ts":"2024-07-17T00:55:06.438431Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-029113","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.95:2380"],"advertise-client-urls":["https://192.168.39.95:2379"]}
	
	
	==> etcd [b25c7626884123d45ed391ba1f319608a2ece44fa3c23574de75c74d462299d8] <==
	{"level":"warn","ts":"2024-07-17T00:58:04.705563Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"dae0f4ef8a06525b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:58:04.80516Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"dae0f4ef8a06525b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:58:04.849719Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"dae0f4ef8a06525b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:58:04.905501Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"dae0f4ef8a06525b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:58:05.00556Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a71e7bac075997","from":"a71e7bac075997","remote-peer-id":"dae0f4ef8a06525b","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T00:58:05.096778Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.100:2380/version","remote-member-id":"dae0f4ef8a06525b","error":"Get \"https://192.168.39.100:2380/version\": dial tcp 192.168.39.100:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:58:05.096904Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"dae0f4ef8a06525b","error":"Get \"https://192.168.39.100:2380/version\": dial tcp 192.168.39.100:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:58:06.33606Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"dae0f4ef8a06525b","rtt":"0s","error":"dial tcp 192.168.39.100:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:58:06.338285Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"dae0f4ef8a06525b","rtt":"0s","error":"dial tcp 192.168.39.100:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:58:09.098616Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.100:2380/version","remote-member-id":"dae0f4ef8a06525b","error":"Get \"https://192.168.39.100:2380/version\": dial tcp 192.168.39.100:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:58:09.098677Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"dae0f4ef8a06525b","error":"Get \"https://192.168.39.100:2380/version\": dial tcp 192.168.39.100:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:58:11.337047Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"dae0f4ef8a06525b","rtt":"0s","error":"dial tcp 192.168.39.100:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:58:11.339313Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"dae0f4ef8a06525b","rtt":"0s","error":"dial tcp 192.168.39.100:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:58:13.100978Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.100:2380/version","remote-member-id":"dae0f4ef8a06525b","error":"Get \"https://192.168.39.100:2380/version\": dial tcp 192.168.39.100:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:58:13.101039Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"dae0f4ef8a06525b","error":"Get \"https://192.168.39.100:2380/version\": dial tcp 192.168.39.100:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-17T00:58:15.316065Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a71e7bac075997","to":"dae0f4ef8a06525b","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-17T00:58:15.316109Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"dae0f4ef8a06525b"}
	{"level":"info","ts":"2024-07-17T00:58:15.316131Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"a71e7bac075997","remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"info","ts":"2024-07-17T00:58:15.323985Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a71e7bac075997","remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"info","ts":"2024-07-17T00:58:15.324289Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a71e7bac075997","remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"info","ts":"2024-07-17T00:58:15.327541Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a71e7bac075997","to":"dae0f4ef8a06525b","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-17T00:58:15.327672Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"a71e7bac075997","remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"warn","ts":"2024-07-17T00:58:16.337272Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"dae0f4ef8a06525b","rtt":"0s","error":"dial tcp 192.168.39.100:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:58:16.339457Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"dae0f4ef8a06525b","rtt":"0s","error":"dial tcp 192.168.39.100:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-17T00:58:18.266379Z","caller":"traceutil/trace.go:171","msg":"trace[1916172587] transaction","detail":"{read_only:false; response_revision:2469; number_of_response:1; }","duration":"149.837891ms","start":"2024-07-17T00:58:18.116512Z","end":"2024-07-17T00:58:18.26635Z","steps":["trace[1916172587] 'process raft request'  (duration: 149.73848ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:59:09 up 15 min,  0 users,  load average: 0.68, 0.49, 0.30
	Linux ha-029113 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [14ce89e605287179bb1c9551273ee5355577c744e43f9e9eee59d6939e47680e] <==
	I0717 00:54:36.816483       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 00:54:36.816533       1 main.go:326] Node ha-029113-m03 has CIDR [10.244.2.0/24] 
	I0717 00:54:36.817701       1 main.go:299] Handling node with IPs: map[192.168.39.48:{}]
	I0717 00:54:36.817750       1 main.go:326] Node ha-029113-m04 has CIDR [10.244.3.0/24] 
	I0717 00:54:36.817913       1 main.go:299] Handling node with IPs: map[192.168.39.95:{}]
	I0717 00:54:36.817937       1 main.go:303] handling current node
	I0717 00:54:36.817949       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0717 00:54:36.817955       1 main.go:326] Node ha-029113-m02 has CIDR [10.244.1.0/24] 
	I0717 00:54:46.821425       1 main.go:299] Handling node with IPs: map[192.168.39.95:{}]
	I0717 00:54:46.821481       1 main.go:303] handling current node
	I0717 00:54:46.821501       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0717 00:54:46.821507       1 main.go:326] Node ha-029113-m02 has CIDR [10.244.1.0/24] 
	I0717 00:54:46.821645       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 00:54:46.821669       1 main.go:326] Node ha-029113-m03 has CIDR [10.244.2.0/24] 
	I0717 00:54:46.821716       1 main.go:299] Handling node with IPs: map[192.168.39.48:{}]
	I0717 00:54:46.821735       1 main.go:326] Node ha-029113-m04 has CIDR [10.244.3.0/24] 
	I0717 00:54:56.821109       1 main.go:299] Handling node with IPs: map[192.168.39.95:{}]
	I0717 00:54:56.821201       1 main.go:303] handling current node
	I0717 00:54:56.821228       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0717 00:54:56.821245       1 main.go:326] Node ha-029113-m02 has CIDR [10.244.1.0/24] 
	I0717 00:54:56.821394       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 00:54:56.821415       1 main.go:326] Node ha-029113-m03 has CIDR [10.244.2.0/24] 
	I0717 00:54:56.821471       1 main.go:299] Handling node with IPs: map[192.168.39.48:{}]
	I0717 00:54:56.821488       1 main.go:326] Node ha-029113-m04 has CIDR [10.244.3.0/24] 
	E0717 00:55:04.337495       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	
	
	==> kindnet [d3dd104a40414a4c184f292042d98c34c4f2a081a0be74523d1c7da92597f487] <==
	I0717 00:58:32.117765       1 main.go:326] Node ha-029113-m04 has CIDR [10.244.3.0/24] 
	I0717 00:58:42.117315       1 main.go:299] Handling node with IPs: map[192.168.39.95:{}]
	I0717 00:58:42.117368       1 main.go:303] handling current node
	I0717 00:58:42.117386       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0717 00:58:42.117394       1 main.go:326] Node ha-029113-m02 has CIDR [10.244.1.0/24] 
	I0717 00:58:42.117598       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 00:58:42.117632       1 main.go:326] Node ha-029113-m03 has CIDR [10.244.2.0/24] 
	I0717 00:58:42.117772       1 main.go:299] Handling node with IPs: map[192.168.39.48:{}]
	I0717 00:58:42.117929       1 main.go:326] Node ha-029113-m04 has CIDR [10.244.3.0/24] 
	I0717 00:58:52.125174       1 main.go:299] Handling node with IPs: map[192.168.39.95:{}]
	I0717 00:58:52.125336       1 main.go:303] handling current node
	I0717 00:58:52.125379       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0717 00:58:52.125408       1 main.go:326] Node ha-029113-m02 has CIDR [10.244.1.0/24] 
	I0717 00:58:52.125574       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 00:58:52.125606       1 main.go:326] Node ha-029113-m03 has CIDR [10.244.2.0/24] 
	I0717 00:58:52.125731       1 main.go:299] Handling node with IPs: map[192.168.39.48:{}]
	I0717 00:58:52.125767       1 main.go:326] Node ha-029113-m04 has CIDR [10.244.3.0/24] 
	I0717 00:59:02.116592       1 main.go:299] Handling node with IPs: map[192.168.39.95:{}]
	I0717 00:59:02.116741       1 main.go:303] handling current node
	I0717 00:59:02.116785       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0717 00:59:02.116877       1 main.go:326] Node ha-029113-m02 has CIDR [10.244.1.0/24] 
	I0717 00:59:02.117052       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 00:59:02.117076       1 main.go:326] Node ha-029113-m03 has CIDR [10.244.2.0/24] 
	I0717 00:59:02.117191       1 main.go:299] Handling node with IPs: map[192.168.39.48:{}]
	I0717 00:59:02.117220       1 main.go:326] Node ha-029113-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [4effa58e46e219c34739e40820dbe2178f075c45e540541619e339690fa06184] <==
	I0717 00:56:41.243616       1 options.go:221] external host was not specified, using 192.168.39.95
	I0717 00:56:41.249139       1 server.go:148] Version: v1.30.2
	I0717 00:56:41.249451       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:56:41.854341       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0717 00:56:41.858892       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 00:56:41.862267       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0717 00:56:41.862342       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0717 00:56:41.862520       1 instance.go:299] Using reconciler: lease
	W0717 00:57:01.852194       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0717 00:57:01.852194       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0717 00:57:01.864009       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0717 00:57:01.864133       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [6d7f98a847746b8a00d932181e11b880825032ec553e6de48f6736a756f14df2] <==
	I0717 00:57:24.799733       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0717 00:57:24.799766       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0717 00:57:24.846914       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 00:57:24.850523       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 00:57:24.850566       1 policy_source.go:224] refreshing policies
	I0717 00:57:24.865394       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 00:57:24.881576       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 00:57:24.881635       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 00:57:24.881983       1 shared_informer.go:320] Caches are synced for configmaps
	I0717 00:57:24.882021       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0717 00:57:24.882027       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0717 00:57:24.882151       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 00:57:24.886557       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0717 00:57:24.886606       1 aggregator.go:165] initial CRD sync complete...
	I0717 00:57:24.886633       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 00:57:24.886658       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 00:57:24.886666       1 cache.go:39] Caches are synced for autoregister controller
	I0717 00:57:24.887861       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0717 00:57:24.894139       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.100 192.168.39.166]
	I0717 00:57:24.895402       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 00:57:24.901899       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0717 00:57:24.905059       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0717 00:57:25.788738       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0717 00:57:26.220565       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.100 192.168.39.166 192.168.39.95]
	W0717 00:57:36.220005       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.166 192.168.39.95]
	
	
	==> kube-controller-manager [9dcb54666e9d9613e1f68d0f760cc3e0c95b06e2fe3747f5a97c72ba0a238d16] <==
	I0717 00:56:41.978420       1 serving.go:380] Generated self-signed cert in-memory
	I0717 00:56:42.335650       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0717 00:56:42.335738       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:56:42.337619       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0717 00:56:42.337923       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 00:56:42.338064       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0717 00:56:42.338197       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0717 00:57:02.872443       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.95:8443/healthz\": dial tcp 192.168.39.95:8443: connect: connection refused"
	
	
	==> kube-controller-manager [a549df20ed996b85dfd4a228aafa38755eb325c2b8eea9d194a2f39a43c6f997] <==
	I0717 00:57:37.461022       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0717 00:57:37.461556       1 shared_informer.go:320] Caches are synced for service account
	I0717 00:57:37.464127       1 shared_informer.go:320] Caches are synced for ephemeral
	I0717 00:57:37.506875       1 shared_informer.go:320] Caches are synced for deployment
	I0717 00:57:37.521351       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0717 00:57:37.521516       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.136µs"
	I0717 00:57:37.521729       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="168.488µs"
	I0717 00:57:37.602972       1 shared_informer.go:320] Caches are synced for stateful set
	I0717 00:57:37.616892       1 shared_informer.go:320] Caches are synced for disruption
	I0717 00:57:37.633163       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 00:57:37.677659       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 00:57:38.067309       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 00:57:38.096026       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 00:57:38.096111       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0717 00:57:47.602411       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-mgh88 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-mgh88\": the object has been modified; please apply your changes to the latest version and try again"
	I0717 00:57:47.602848       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"2dfdc45f-01de-43ca-8e95-5df7519203dd", APIVersion:"v1", ResourceVersion:"243", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-mgh88 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-mgh88": the object has been modified; please apply your changes to the latest version and try again
	I0717 00:57:47.636787       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="88.945371ms"
	I0717 00:57:47.639590       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="1.343385ms"
	I0717 00:57:47.683879       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="43.478415ms"
	I0717 00:57:47.684114       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="110.402µs"
	I0717 00:58:11.579418       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.329302ms"
	I0717 00:58:11.579714       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.611µs"
	I0717 00:58:33.081689       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.554175ms"
	I0717 00:58:33.081902       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.374µs"
	I0717 00:59:01.153016       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-029113-m04"
	
	
	==> kube-proxy [21b3cbbc53732e70a6bb66be9909790395a4901c4730eea9ec8b9349fed80909] <==
	E0717 00:53:51.144186       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:53:51.144150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2012": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:53:51.144279       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-029113&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:53:51.144315       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-029113&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:53:51.144293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2012": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:53:57.800237       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:53:57.800311       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:53:57.800488       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-029113&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:53:57.800553       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-029113&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:53:57.800637       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2012": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:53:57.800716       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2012": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:54:07.017417       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-029113&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:54:07.017417       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2012": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:54:07.017636       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2012": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:54:07.017664       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-029113&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:54:10.089783       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:54:10.089893       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:54:25.448706       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2012": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:54:25.448791       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2012": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:54:28.520441       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-029113&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:54:28.521183       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-029113&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:54:28.521768       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:54:28.522078       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:54:56.168404       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-029113&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:54:56.168486       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-029113&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [900f665d54096f7765fdf4465f1760855d73180627d403518f43e27d98beda89] <==
	E0717 00:57:05.193130       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-029113\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0717 00:57:23.643893       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-029113\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0717 00:57:23.644616       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0717 00:57:23.902466       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 00:57:23.902681       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 00:57:23.902789       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:57:23.924091       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:57:23.926207       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:57:23.926694       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:57:23.928583       1 config.go:192] "Starting service config controller"
	I0717 00:57:23.928718       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:57:23.928959       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:57:23.928984       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:57:23.930667       1 config.go:319] "Starting node config controller"
	I0717 00:57:23.930705       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0717 00:57:26.696774       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0717 00:57:26.697325       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:57:26.698174       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:57:26.698343       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:57:26.698438       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:57:26.698568       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-029113&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:57:26.698649       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-029113&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0717 00:57:28.029701       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:57:28.031012       1 shared_informer.go:320] Caches are synced for node config
	I0717 00:57:28.329989       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7221f2622be960846131245cf8a060743228bd2fb4267445a85ad2461e84f042] <==
	W0717 00:57:19.618934       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.95:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	E0717 00:57:19.618974       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.95:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	W0717 00:57:19.704178       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.95:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	E0717 00:57:19.704242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.95:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	W0717 00:57:20.007647       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.95:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	E0717 00:57:20.007700       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.95:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	W0717 00:57:20.264463       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.95:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	E0717 00:57:20.264565       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.95:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	W0717 00:57:20.361500       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.95:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	E0717 00:57:20.361564       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.95:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	W0717 00:57:20.460543       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.95:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	E0717 00:57:20.460656       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.95:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	W0717 00:57:21.539328       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.95:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	E0717 00:57:21.539446       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.95:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	W0717 00:57:21.699157       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.95:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	E0717 00:57:21.699219       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.95:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	W0717 00:57:21.803445       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.95:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	E0717 00:57:21.803545       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.95:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	W0717 00:57:24.803379       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 00:57:24.803473       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 00:57:24.803559       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 00:57:24.803591       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 00:57:24.803656       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:57:24.803686       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0717 00:57:38.180937       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [af1a2d97ac6f8ec07b35a9b5d767af997cb8270ce3b4e681cba5557ed7119c85] <==
	W0717 00:54:58.688887       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 00:54:58.688936       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 00:54:58.792253       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 00:54:58.792349       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 00:54:58.816578       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:54:58.816626       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:54:58.909115       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 00:54:58.909192       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 00:54:59.016064       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 00:54:59.016109       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 00:54:59.141646       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 00:54:59.141779       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 00:54:59.142680       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:54:59.142735       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:54:59.374053       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 00:54:59.374105       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 00:55:00.215574       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 00:55:00.215643       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 00:55:04.725028       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:55:04.725134       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 00:55:04.818347       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 00:55:04.818400       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 00:55:04.881210       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 00:55:04.881314       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 00:55:06.264201       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 17 00:57:27 ha-029113 kubelet[1354]: I0717 00:57:27.447133    1354 scope.go:117] "RemoveContainer" containerID="75ed518842ab371ee3448da5e08a29dd6c56f28d0c0e69194d047a4eaf0f8153"
	Jul 17 00:57:31 ha-029113 kubelet[1354]: I0717 00:57:31.220515    1354 scope.go:117] "RemoveContainer" containerID="75ed518842ab371ee3448da5e08a29dd6c56f28d0c0e69194d047a4eaf0f8153"
	Jul 17 00:57:31 ha-029113 kubelet[1354]: I0717 00:57:31.221000    1354 scope.go:117] "RemoveContainer" containerID="cbcd737b5deaa70a51d564310f7dcbeb03991c78f098fa674b8fd190f3bed835"
	Jul 17 00:57:31 ha-029113 kubelet[1354]: E0717 00:57:31.221396    1354 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b9f04e5d-469e-4432-bd31-dbe772194f84)\"" pod="kube-system/storage-provisioner" podUID="b9f04e5d-469e-4432-bd31-dbe772194f84"
	Jul 17 00:57:42 ha-029113 kubelet[1354]: I0717 00:57:42.434718    1354 scope.go:117] "RemoveContainer" containerID="cbcd737b5deaa70a51d564310f7dcbeb03991c78f098fa674b8fd190f3bed835"
	Jul 17 00:57:42 ha-029113 kubelet[1354]: E0717 00:57:42.435392    1354 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b9f04e5d-469e-4432-bd31-dbe772194f84)\"" pod="kube-system/storage-provisioner" podUID="b9f04e5d-469e-4432-bd31-dbe772194f84"
	Jul 17 00:57:43 ha-029113 kubelet[1354]: I0717 00:57:43.739363    1354 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-pf5xn" podStartSLOduration=557.914151392 podStartE2EDuration="9m21.739331638s" podCreationTimestamp="2024-07-17 00:48:22 +0000 UTC" firstStartedPulling="2024-07-17 00:48:23.464719016 +0000 UTC m=+256.200444672" lastFinishedPulling="2024-07-17 00:48:27.289899277 +0000 UTC m=+260.025624918" observedRunningTime="2024-07-17 00:48:27.488761248 +0000 UTC m=+260.224486911" watchObservedRunningTime="2024-07-17 00:57:43.739331638 +0000 UTC m=+816.475057300"
	Jul 17 00:57:57 ha-029113 kubelet[1354]: I0717 00:57:57.435764    1354 scope.go:117] "RemoveContainer" containerID="cbcd737b5deaa70a51d564310f7dcbeb03991c78f098fa674b8fd190f3bed835"
	Jul 17 00:57:57 ha-029113 kubelet[1354]: E0717 00:57:57.437613    1354 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b9f04e5d-469e-4432-bd31-dbe772194f84)\"" pod="kube-system/storage-provisioner" podUID="b9f04e5d-469e-4432-bd31-dbe772194f84"
	Jul 17 00:58:03 ha-029113 kubelet[1354]: I0717 00:58:03.435308    1354 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-029113" podUID="985763eb-2a45-4820-a3db-e2af6d9291e0"
	Jul 17 00:58:03 ha-029113 kubelet[1354]: I0717 00:58:03.462773    1354 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-029113"
	Jul 17 00:58:07 ha-029113 kubelet[1354]: I0717 00:58:07.457976    1354 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-029113" podStartSLOduration=4.457947229 podStartE2EDuration="4.457947229s" podCreationTimestamp="2024-07-17 00:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-17 00:58:07.456290653 +0000 UTC m=+840.192016318" watchObservedRunningTime="2024-07-17 00:58:07.457947229 +0000 UTC m=+840.193672891"
	Jul 17 00:58:07 ha-029113 kubelet[1354]: E0717 00:58:07.513219    1354 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:58:07 ha-029113 kubelet[1354]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:58:07 ha-029113 kubelet[1354]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:58:07 ha-029113 kubelet[1354]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:58:07 ha-029113 kubelet[1354]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 00:58:09 ha-029113 kubelet[1354]: I0717 00:58:09.435536    1354 scope.go:117] "RemoveContainer" containerID="cbcd737b5deaa70a51d564310f7dcbeb03991c78f098fa674b8fd190f3bed835"
	Jul 17 00:58:09 ha-029113 kubelet[1354]: E0717 00:58:09.435697    1354 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b9f04e5d-469e-4432-bd31-dbe772194f84)\"" pod="kube-system/storage-provisioner" podUID="b9f04e5d-469e-4432-bd31-dbe772194f84"
	Jul 17 00:58:24 ha-029113 kubelet[1354]: I0717 00:58:24.435146    1354 scope.go:117] "RemoveContainer" containerID="cbcd737b5deaa70a51d564310f7dcbeb03991c78f098fa674b8fd190f3bed835"
	Jul 17 00:59:07 ha-029113 kubelet[1354]: E0717 00:59:07.513467    1354 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 00:59:07 ha-029113 kubelet[1354]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 00:59:07 ha-029113 kubelet[1354]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 00:59:07 ha-029113 kubelet[1354]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 00:59:07 ha-029113 kubelet[1354]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 00:59:08.091179   31311 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19264-3908/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-029113 -n ha-029113
helpers_test.go:261: (dbg) Run:  kubectl --context ha-029113 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (366.92s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 stop -v=7 --alsologtostderr
E0717 01:00:17.179455   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-029113 stop -v=7 --alsologtostderr: exit status 82 (2m0.466254978s)

                                                
                                                
-- stdout --
	* Stopping node "ha-029113-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 00:59:27.726950   31724 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:59:27.727038   31724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:59:27.727047   31724 out.go:304] Setting ErrFile to fd 2...
	I0717 00:59:27.727051   31724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:59:27.727213   31724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 00:59:27.727416   31724 out.go:298] Setting JSON to false
	I0717 00:59:27.727489   31724 mustload.go:65] Loading cluster: ha-029113
	I0717 00:59:27.727821   31724 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:59:27.727898   31724 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/config.json ...
	I0717 00:59:27.728060   31724 mustload.go:65] Loading cluster: ha-029113
	I0717 00:59:27.728177   31724 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:59:27.728197   31724 stop.go:39] StopHost: ha-029113-m04
	I0717 00:59:27.728510   31724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:59:27.728554   31724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:59:27.743017   31724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42287
	I0717 00:59:27.743494   31724 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:59:27.744126   31724 main.go:141] libmachine: Using API Version  1
	I0717 00:59:27.744160   31724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:59:27.744519   31724 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:59:27.746900   31724 out.go:177] * Stopping node "ha-029113-m04"  ...
	I0717 00:59:27.748200   31724 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0717 00:59:27.748253   31724 main.go:141] libmachine: (ha-029113-m04) Calling .DriverName
	I0717 00:59:27.748480   31724 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0717 00:59:27.748508   31724 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHHostname
	I0717 00:59:27.751359   31724 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:59:27.751815   31724 main.go:141] libmachine: (ha-029113-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:a4:ba", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:58:55 +0000 UTC Type:0 Mac:52:54:00:be:a4:ba Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-029113-m04 Clientid:01:52:54:00:be:a4:ba}
	I0717 00:59:27.751844   31724 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 00:59:27.751994   31724 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHPort
	I0717 00:59:27.752139   31724 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHKeyPath
	I0717 00:59:27.752267   31724 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHUsername
	I0717 00:59:27.752391   31724 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m04/id_rsa Username:docker}
	I0717 00:59:27.837956   31724 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0717 00:59:27.891159   31724 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0717 00:59:27.944527   31724 main.go:141] libmachine: Stopping "ha-029113-m04"...
	I0717 00:59:27.944562   31724 main.go:141] libmachine: (ha-029113-m04) Calling .GetState
	I0717 00:59:27.946100   31724 main.go:141] libmachine: (ha-029113-m04) Calling .Stop
	I0717 00:59:27.949454   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 0/120
	I0717 00:59:28.950935   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 1/120
	I0717 00:59:29.952212   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 2/120
	I0717 00:59:30.953584   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 3/120
	I0717 00:59:31.955027   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 4/120
	I0717 00:59:32.957033   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 5/120
	I0717 00:59:33.958399   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 6/120
	I0717 00:59:34.960888   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 7/120
	I0717 00:59:35.962081   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 8/120
	I0717 00:59:36.964037   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 9/120
	I0717 00:59:37.966033   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 10/120
	I0717 00:59:38.967551   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 11/120
	I0717 00:59:39.968795   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 12/120
	I0717 00:59:40.969952   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 13/120
	I0717 00:59:41.972064   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 14/120
	I0717 00:59:42.973605   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 15/120
	I0717 00:59:43.975301   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 16/120
	I0717 00:59:44.976730   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 17/120
	I0717 00:59:45.978346   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 18/120
	I0717 00:59:46.979572   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 19/120
	I0717 00:59:47.981760   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 20/120
	I0717 00:59:48.983222   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 21/120
	I0717 00:59:49.984617   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 22/120
	I0717 00:59:50.985842   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 23/120
	I0717 00:59:51.987199   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 24/120
	I0717 00:59:52.988741   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 25/120
	I0717 00:59:53.990133   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 26/120
	I0717 00:59:54.991941   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 27/120
	I0717 00:59:55.993255   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 28/120
	I0717 00:59:56.994520   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 29/120
	I0717 00:59:57.995741   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 30/120
	I0717 00:59:58.997060   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 31/120
	I0717 00:59:59.998598   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 32/120
	I0717 01:00:00.999843   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 33/120
	I0717 01:00:02.001853   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 34/120
	I0717 01:00:03.003706   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 35/120
	I0717 01:00:04.006081   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 36/120
	I0717 01:00:05.007496   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 37/120
	I0717 01:00:06.008877   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 38/120
	I0717 01:00:07.011165   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 39/120
	I0717 01:00:08.013045   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 40/120
	I0717 01:00:09.014260   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 41/120
	I0717 01:00:10.015692   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 42/120
	I0717 01:00:11.016886   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 43/120
	I0717 01:00:12.018310   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 44/120
	I0717 01:00:13.019851   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 45/120
	I0717 01:00:14.021412   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 46/120
	I0717 01:00:15.023628   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 47/120
	I0717 01:00:16.025618   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 48/120
	I0717 01:00:17.027029   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 49/120
	I0717 01:00:18.029265   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 50/120
	I0717 01:00:19.030587   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 51/120
	I0717 01:00:20.032691   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 52/120
	I0717 01:00:21.034272   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 53/120
	I0717 01:00:22.035750   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 54/120
	I0717 01:00:23.037587   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 55/120
	I0717 01:00:24.039472   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 56/120
	I0717 01:00:25.040915   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 57/120
	I0717 01:00:26.042362   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 58/120
	I0717 01:00:27.044217   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 59/120
	I0717 01:00:28.046209   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 60/120
	I0717 01:00:29.047457   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 61/120
	I0717 01:00:30.049027   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 62/120
	I0717 01:00:31.050459   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 63/120
	I0717 01:00:32.051956   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 64/120
	I0717 01:00:33.054237   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 65/120
	I0717 01:00:34.055538   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 66/120
	I0717 01:00:35.057254   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 67/120
	I0717 01:00:36.059215   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 68/120
	I0717 01:00:37.061106   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 69/120
	I0717 01:00:38.063377   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 70/120
	I0717 01:00:39.065441   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 71/120
	I0717 01:00:40.066849   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 72/120
	I0717 01:00:41.068253   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 73/120
	I0717 01:00:42.069588   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 74/120
	I0717 01:00:43.071642   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 75/120
	I0717 01:00:44.073203   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 76/120
	I0717 01:00:45.074965   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 77/120
	I0717 01:00:46.076907   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 78/120
	I0717 01:00:47.078257   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 79/120
	I0717 01:00:48.080069   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 80/120
	I0717 01:00:49.081330   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 81/120
	I0717 01:00:50.082645   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 82/120
	I0717 01:00:51.084676   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 83/120
	I0717 01:00:52.086102   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 84/120
	I0717 01:00:53.087898   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 85/120
	I0717 01:00:54.089143   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 86/120
	I0717 01:00:55.090586   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 87/120
	I0717 01:00:56.091924   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 88/120
	I0717 01:00:57.093465   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 89/120
	I0717 01:00:58.095428   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 90/120
	I0717 01:00:59.096685   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 91/120
	I0717 01:01:00.097986   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 92/120
	I0717 01:01:01.099281   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 93/120
	I0717 01:01:02.100588   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 94/120
	I0717 01:01:03.102565   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 95/120
	I0717 01:01:04.104232   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 96/120
	I0717 01:01:05.106137   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 97/120
	I0717 01:01:06.107468   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 98/120
	I0717 01:01:07.108914   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 99/120
	I0717 01:01:08.110841   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 100/120
	I0717 01:01:09.113029   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 101/120
	I0717 01:01:10.114205   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 102/120
	I0717 01:01:11.116365   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 103/120
	I0717 01:01:12.118097   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 104/120
	I0717 01:01:13.119958   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 105/120
	I0717 01:01:14.121313   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 106/120
	I0717 01:01:15.122636   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 107/120
	I0717 01:01:16.123806   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 108/120
	I0717 01:01:17.125291   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 109/120
	I0717 01:01:18.127537   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 110/120
	I0717 01:01:19.128907   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 111/120
	I0717 01:01:20.130360   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 112/120
	I0717 01:01:21.132459   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 113/120
	I0717 01:01:22.133844   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 114/120
	I0717 01:01:23.135921   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 115/120
	I0717 01:01:24.137543   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 116/120
	I0717 01:01:25.138855   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 117/120
	I0717 01:01:26.141255   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 118/120
	I0717 01:01:27.142780   31724 main.go:141] libmachine: (ha-029113-m04) Waiting for machine to stop 119/120
	I0717 01:01:28.144239   31724 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0717 01:01:28.144300   31724 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 01:01:28.146208   31724 out.go:177] 
	W0717 01:01:28.147749   31724 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0717 01:01:28.147763   31724 out.go:239] * 
	* 
	W0717 01:01:28.150029   31724 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 01:01:28.151267   31724 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-029113 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 status -v=7 --alsologtostderr
E0717 01:01:40.227765   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-029113 status -v=7 --alsologtostderr: exit status 3 (18.946233257s)

                                                
                                                
-- stdout --
	ha-029113
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-029113-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-029113-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 01:01:28.196324   32147 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:01:28.196418   32147 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:01:28.196426   32147 out.go:304] Setting ErrFile to fd 2...
	I0717 01:01:28.196430   32147 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:01:28.196607   32147 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 01:01:28.196760   32147 out.go:298] Setting JSON to false
	I0717 01:01:28.196785   32147 mustload.go:65] Loading cluster: ha-029113
	I0717 01:01:28.196902   32147 notify.go:220] Checking for updates...
	I0717 01:01:28.197152   32147 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:01:28.197166   32147 status.go:255] checking status of ha-029113 ...
	I0717 01:01:28.197495   32147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:01:28.197531   32147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:01:28.213315   32147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43817
	I0717 01:01:28.213743   32147 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:01:28.214379   32147 main.go:141] libmachine: Using API Version  1
	I0717 01:01:28.214405   32147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:01:28.214814   32147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:01:28.214997   32147 main.go:141] libmachine: (ha-029113) Calling .GetState
	I0717 01:01:28.216636   32147 status.go:330] ha-029113 host status = "Running" (err=<nil>)
	I0717 01:01:28.216651   32147 host.go:66] Checking if "ha-029113" exists ...
	I0717 01:01:28.216934   32147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:01:28.216973   32147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:01:28.231073   32147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44337
	I0717 01:01:28.231394   32147 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:01:28.231878   32147 main.go:141] libmachine: Using API Version  1
	I0717 01:01:28.231905   32147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:01:28.232224   32147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:01:28.232407   32147 main.go:141] libmachine: (ha-029113) Calling .GetIP
	I0717 01:01:28.235306   32147 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 01:01:28.235804   32147 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 01:01:28.235828   32147 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 01:01:28.235960   32147 host.go:66] Checking if "ha-029113" exists ...
	I0717 01:01:28.236280   32147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:01:28.236316   32147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:01:28.250819   32147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42927
	I0717 01:01:28.251170   32147 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:01:28.251573   32147 main.go:141] libmachine: Using API Version  1
	I0717 01:01:28.251601   32147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:01:28.251921   32147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:01:28.252078   32147 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 01:01:28.252243   32147 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 01:01:28.252276   32147 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 01:01:28.254539   32147 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 01:01:28.255010   32147 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 01:01:28.255028   32147 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 01:01:28.255164   32147 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 01:01:28.255319   32147 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 01:01:28.255455   32147 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 01:01:28.255582   32147 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 01:01:28.336391   32147 ssh_runner.go:195] Run: systemctl --version
	I0717 01:01:28.344349   32147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:01:28.360589   32147 kubeconfig.go:125] found "ha-029113" server: "https://192.168.39.254:8443"
	I0717 01:01:28.360615   32147 api_server.go:166] Checking apiserver status ...
	I0717 01:01:28.360642   32147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:01:28.376393   32147 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4954/cgroup
	W0717 01:01:28.386646   32147 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4954/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:01:28.386696   32147 ssh_runner.go:195] Run: ls
	I0717 01:01:28.391264   32147 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 01:01:28.398056   32147 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 01:01:28.398077   32147 status.go:422] ha-029113 apiserver status = Running (err=<nil>)
	I0717 01:01:28.398085   32147 status.go:257] ha-029113 status: &{Name:ha-029113 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 01:01:28.398100   32147 status.go:255] checking status of ha-029113-m02 ...
	I0717 01:01:28.398383   32147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:01:28.398423   32147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:01:28.414133   32147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39277
	I0717 01:01:28.414521   32147 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:01:28.414944   32147 main.go:141] libmachine: Using API Version  1
	I0717 01:01:28.414976   32147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:01:28.415269   32147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:01:28.415421   32147 main.go:141] libmachine: (ha-029113-m02) Calling .GetState
	I0717 01:01:28.417059   32147 status.go:330] ha-029113-m02 host status = "Running" (err=<nil>)
	I0717 01:01:28.417078   32147 host.go:66] Checking if "ha-029113-m02" exists ...
	I0717 01:01:28.417453   32147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:01:28.417516   32147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:01:28.431947   32147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40077
	I0717 01:01:28.432312   32147 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:01:28.432743   32147 main.go:141] libmachine: Using API Version  1
	I0717 01:01:28.432770   32147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:01:28.433089   32147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:01:28.433252   32147 main.go:141] libmachine: (ha-029113-m02) Calling .GetIP
	I0717 01:01:28.436145   32147 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 01:01:28.436501   32147 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:56:51 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 01:01:28.436533   32147 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 01:01:28.436672   32147 host.go:66] Checking if "ha-029113-m02" exists ...
	I0717 01:01:28.437074   32147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:01:28.437118   32147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:01:28.451037   32147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45373
	I0717 01:01:28.451444   32147 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:01:28.451883   32147 main.go:141] libmachine: Using API Version  1
	I0717 01:01:28.451905   32147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:01:28.452195   32147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:01:28.452351   32147 main.go:141] libmachine: (ha-029113-m02) Calling .DriverName
	I0717 01:01:28.452491   32147 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 01:01:28.452513   32147 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHHostname
	I0717 01:01:28.455086   32147 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 01:01:28.455496   32147 main.go:141] libmachine: (ha-029113-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:08:5b", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:56:51 +0000 UTC Type:0 Mac:52:54:00:57:08:5b Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:ha-029113-m02 Clientid:01:52:54:00:57:08:5b}
	I0717 01:01:28.455514   32147 main.go:141] libmachine: (ha-029113-m02) DBG | domain ha-029113-m02 has defined IP address 192.168.39.166 and MAC address 52:54:00:57:08:5b in network mk-ha-029113
	I0717 01:01:28.455628   32147 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHPort
	I0717 01:01:28.455782   32147 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHKeyPath
	I0717 01:01:28.455914   32147 main.go:141] libmachine: (ha-029113-m02) Calling .GetSSHUsername
	I0717 01:01:28.456020   32147 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m02/id_rsa Username:docker}
	I0717 01:01:28.539832   32147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:01:28.558504   32147 kubeconfig.go:125] found "ha-029113" server: "https://192.168.39.254:8443"
	I0717 01:01:28.558528   32147 api_server.go:166] Checking apiserver status ...
	I0717 01:01:28.558582   32147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:01:28.577729   32147 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1428/cgroup
	W0717 01:01:28.587724   32147 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1428/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:01:28.587776   32147 ssh_runner.go:195] Run: ls
	I0717 01:01:28.592326   32147 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 01:01:28.596664   32147 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 01:01:28.596682   32147 status.go:422] ha-029113-m02 apiserver status = Running (err=<nil>)
	I0717 01:01:28.596690   32147 status.go:257] ha-029113-m02 status: &{Name:ha-029113-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 01:01:28.596712   32147 status.go:255] checking status of ha-029113-m04 ...
	I0717 01:01:28.596993   32147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:01:28.597051   32147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:01:28.612481   32147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45287
	I0717 01:01:28.612869   32147 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:01:28.613267   32147 main.go:141] libmachine: Using API Version  1
	I0717 01:01:28.613285   32147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:01:28.613550   32147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:01:28.613742   32147 main.go:141] libmachine: (ha-029113-m04) Calling .GetState
	I0717 01:01:28.615201   32147 status.go:330] ha-029113-m04 host status = "Running" (err=<nil>)
	I0717 01:01:28.615225   32147 host.go:66] Checking if "ha-029113-m04" exists ...
	I0717 01:01:28.615506   32147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:01:28.615563   32147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:01:28.630059   32147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36875
	I0717 01:01:28.630491   32147 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:01:28.630957   32147 main.go:141] libmachine: Using API Version  1
	I0717 01:01:28.630979   32147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:01:28.631236   32147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:01:28.631382   32147 main.go:141] libmachine: (ha-029113-m04) Calling .GetIP
	I0717 01:01:28.633991   32147 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 01:01:28.634372   32147 main.go:141] libmachine: (ha-029113-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:a4:ba", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:58:55 +0000 UTC Type:0 Mac:52:54:00:be:a4:ba Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-029113-m04 Clientid:01:52:54:00:be:a4:ba}
	I0717 01:01:28.634392   32147 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 01:01:28.634589   32147 host.go:66] Checking if "ha-029113-m04" exists ...
	I0717 01:01:28.634865   32147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:01:28.634896   32147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:01:28.649184   32147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46311
	I0717 01:01:28.649545   32147 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:01:28.649964   32147 main.go:141] libmachine: Using API Version  1
	I0717 01:01:28.649978   32147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:01:28.650242   32147 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:01:28.650490   32147 main.go:141] libmachine: (ha-029113-m04) Calling .DriverName
	I0717 01:01:28.650691   32147 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 01:01:28.650708   32147 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHHostname
	I0717 01:01:28.652923   32147 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 01:01:28.653270   32147 main.go:141] libmachine: (ha-029113-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:a4:ba", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:58:55 +0000 UTC Type:0 Mac:52:54:00:be:a4:ba Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-029113-m04 Clientid:01:52:54:00:be:a4:ba}
	I0717 01:01:28.653295   32147 main.go:141] libmachine: (ha-029113-m04) DBG | domain ha-029113-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:be:a4:ba in network mk-ha-029113
	I0717 01:01:28.653444   32147 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHPort
	I0717 01:01:28.653587   32147 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHKeyPath
	I0717 01:01:28.653729   32147 main.go:141] libmachine: (ha-029113-m04) Calling .GetSSHUsername
	I0717 01:01:28.653864   32147 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113-m04/id_rsa Username:docker}
	W0717 01:01:47.098796   32147 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.48:22: connect: no route to host
	W0717 01:01:47.098887   32147 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	E0717 01:01:47.098902   32147 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	I0717 01:01:47.098908   32147 status.go:257] ha-029113-m04 status: &{Name:ha-029113-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0717 01:01:47.098925   32147 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host

** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-029113 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-029113 -n ha-029113
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-029113 logs -n 25: (1.670820896s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-029113 ssh -n ha-029113-m02 sudo cat                                         | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | /home/docker/cp-test_ha-029113-m03_ha-029113-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-029113 cp ha-029113-m03:/home/docker/cp-test.txt                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m04:/home/docker/cp-test_ha-029113-m03_ha-029113-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n ha-029113-m04 sudo cat                                         | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | /home/docker/cp-test_ha-029113-m03_ha-029113-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-029113 cp testdata/cp-test.txt                                               | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-029113 cp ha-029113-m04:/home/docker/cp-test.txt                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile695400083/001/cp-test_ha-029113-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-029113 cp ha-029113-m04:/home/docker/cp-test.txt                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113:/home/docker/cp-test_ha-029113-m04_ha-029113.txt                      |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n ha-029113 sudo cat                                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | /home/docker/cp-test_ha-029113-m04_ha-029113.txt                                |           |         |         |                     |                     |
	| cp      | ha-029113 cp ha-029113-m04:/home/docker/cp-test.txt                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m02:/home/docker/cp-test_ha-029113-m04_ha-029113-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n ha-029113-m02 sudo cat                                         | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | /home/docker/cp-test_ha-029113-m04_ha-029113-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-029113 cp ha-029113-m04:/home/docker/cp-test.txt                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m03:/home/docker/cp-test_ha-029113-m04_ha-029113-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n                                                                | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | ha-029113-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-029113 ssh -n ha-029113-m03 sudo cat                                         | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC | 17 Jul 24 00:49 UTC |
	|         | /home/docker/cp-test_ha-029113-m04_ha-029113-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-029113 node stop m02 -v=7                                                    | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:49 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-029113 node start m02 -v=7                                                   | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:52 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-029113 -v=7                                                          | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:53 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-029113 -v=7                                                               | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:53 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-029113 --wait=true -v=7                                                   | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:55 UTC | 17 Jul 24 00:59 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-029113                                                               | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:59 UTC |                     |
	| node    | ha-029113 node delete m03 -v=7                                                  | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:59 UTC | 17 Jul 24 00:59 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-029113 stop -v=7                                                             | ha-029113 | jenkins | v1.33.1 | 17 Jul 24 00:59 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:55:05
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:55:05.383491   29983 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:55:05.383937   29983 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:55:05.383950   29983 out.go:304] Setting ErrFile to fd 2...
	I0717 00:55:05.383957   29983 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:55:05.384474   29983 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 00:55:05.385384   29983 out.go:298] Setting JSON to false
	I0717 00:55:05.386429   29983 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2247,"bootTime":1721175458,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:55:05.386483   29983 start.go:139] virtualization: kvm guest
	I0717 00:55:05.388526   29983 out.go:177] * [ha-029113] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 00:55:05.389860   29983 notify.go:220] Checking for updates...
	I0717 00:55:05.389875   29983 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 00:55:05.391232   29983 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:55:05.392479   29983 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 00:55:05.394050   29983 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 00:55:05.395692   29983 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 00:55:05.397068   29983 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:55:05.398872   29983 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:55:05.398955   29983 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:55:05.399345   29983 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:55:05.399394   29983 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:55:05.417832   29983 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44409
	I0717 00:55:05.418300   29983 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:55:05.418850   29983 main.go:141] libmachine: Using API Version  1
	I0717 00:55:05.418869   29983 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:55:05.419197   29983 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:55:05.419361   29983 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:55:05.453309   29983 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 00:55:05.454537   29983 start.go:297] selected driver: kvm2
	I0717 00:55:05.454563   29983 start.go:901] validating driver "kvm2" against &{Name:ha-029113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.2 ClusterName:ha-029113 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.48 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:55:05.454726   29983 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:55:05.455073   29983 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:55:05.455140   29983 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19264-3908/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 00:55:05.469318   29983 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 00:55:05.469919   29983 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:55:05.469973   29983 cni.go:84] Creating CNI manager for ""
	I0717 00:55:05.469984   29983 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 00:55:05.470037   29983 start.go:340] cluster config:
	{Name:ha-029113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-029113 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.48 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:55:05.470149   29983 iso.go:125] acquiring lock: {Name:mk1e382ea32906eee794c9b90164432d2562b456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:55:05.472712   29983 out.go:177] * Starting "ha-029113" primary control-plane node in "ha-029113" cluster
	I0717 00:55:05.474260   29983 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:55:05.474290   29983 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 00:55:05.474298   29983 cache.go:56] Caching tarball of preloaded images
	I0717 00:55:05.474389   29983 preload.go:172] Found /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 00:55:05.474401   29983 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:55:05.474514   29983 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/config.json ...
	I0717 00:55:05.474724   29983 start.go:360] acquireMachinesLock for ha-029113: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 00:55:05.474772   29983 start.go:364] duration metric: took 30.592µs to acquireMachinesLock for "ha-029113"
	I0717 00:55:05.474799   29983 start.go:96] Skipping create...Using existing machine configuration
	I0717 00:55:05.474808   29983 fix.go:54] fixHost starting: 
	I0717 00:55:05.475043   29983 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:55:05.475075   29983 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:55:05.488771   29983 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36045
	I0717 00:55:05.489182   29983 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:55:05.489695   29983 main.go:141] libmachine: Using API Version  1
	I0717 00:55:05.489718   29983 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:55:05.490020   29983 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:55:05.490175   29983 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:55:05.490319   29983 main.go:141] libmachine: (ha-029113) Calling .GetState
	I0717 00:55:05.491722   29983 fix.go:112] recreateIfNeeded on ha-029113: state=Running err=<nil>
	W0717 00:55:05.491744   29983 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 00:55:05.493806   29983 out.go:177] * Updating the running kvm2 "ha-029113" VM ...
	I0717 00:55:05.495165   29983 machine.go:94] provisionDockerMachine start ...
	I0717 00:55:05.495182   29983 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:55:05.495368   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:55:05.497631   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:55:05.498052   29983 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:55:05.498074   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:55:05.498267   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:55:05.498417   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:55:05.498568   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:55:05.498741   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:55:05.498897   29983 main.go:141] libmachine: Using SSH client type: native
	I0717 00:55:05.499078   29983 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0717 00:55:05.499090   29983 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 00:55:05.603589   29983 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-029113
	
	I0717 00:55:05.603623   29983 main.go:141] libmachine: (ha-029113) Calling .GetMachineName
	I0717 00:55:05.603866   29983 buildroot.go:166] provisioning hostname "ha-029113"
	I0717 00:55:05.603890   29983 main.go:141] libmachine: (ha-029113) Calling .GetMachineName
	I0717 00:55:05.604063   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:55:05.606724   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:55:05.607141   29983 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:55:05.607160   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:55:05.607311   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:55:05.607473   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:55:05.607619   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:55:05.607749   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:55:05.607904   29983 main.go:141] libmachine: Using SSH client type: native
	I0717 00:55:05.608046   29983 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0717 00:55:05.608058   29983 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-029113 && echo "ha-029113" | sudo tee /etc/hostname
	I0717 00:55:05.724124   29983 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-029113
	
	I0717 00:55:05.724150   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:55:05.726891   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:55:05.727238   29983 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:55:05.727264   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:55:05.727421   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:55:05.727637   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:55:05.727785   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:55:05.727926   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:55:05.728084   29983 main.go:141] libmachine: Using SSH client type: native
	I0717 00:55:05.728246   29983 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0717 00:55:05.728274   29983 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-029113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-029113/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-029113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 00:55:05.827946   29983 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:55:05.827980   29983 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 00:55:05.828003   29983 buildroot.go:174] setting up certificates
	I0717 00:55:05.828014   29983 provision.go:84] configureAuth start
	I0717 00:55:05.828028   29983 main.go:141] libmachine: (ha-029113) Calling .GetMachineName
	I0717 00:55:05.828293   29983 main.go:141] libmachine: (ha-029113) Calling .GetIP
	I0717 00:55:05.831015   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:55:05.831537   29983 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:55:05.831567   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:55:05.831745   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:55:05.833696   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:55:05.834021   29983 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:55:05.834048   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:55:05.834146   29983 provision.go:143] copyHostCerts
	I0717 00:55:05.834174   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 00:55:05.834239   29983 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 00:55:05.834255   29983 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 00:55:05.834338   29983 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 00:55:05.834440   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 00:55:05.834464   29983 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 00:55:05.834470   29983 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 00:55:05.834515   29983 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 00:55:05.834606   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 00:55:05.834629   29983 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 00:55:05.834638   29983 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 00:55:05.834671   29983 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 00:55:05.834749   29983 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.ha-029113 san=[127.0.0.1 192.168.39.95 ha-029113 localhost minikube]
	I0717 00:55:05.974789   29983 provision.go:177] copyRemoteCerts
	I0717 00:55:05.974862   29983 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:55:05.974887   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:55:05.977324   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:55:05.977683   29983 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:55:05.977711   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:55:05.977898   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:55:05.978088   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:55:05.978255   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:55:05.978391   29983 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:55:06.057496   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 00:55:06.057564   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 00:55:06.088795   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 00:55:06.088853   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 00:55:06.114723   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 00:55:06.114780   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0717 00:55:06.146652   29983 provision.go:87] duration metric: took 318.61965ms to configureAuth
	I0717 00:55:06.146681   29983 buildroot.go:189] setting minikube options for container-runtime
	I0717 00:55:06.146923   29983 config.go:182] Loaded profile config "ha-029113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:55:06.147010   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:55:06.149622   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:55:06.149996   29983 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:55:06.150020   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:55:06.150195   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:55:06.150397   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:55:06.150573   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:55:06.150709   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:55:06.150869   29983 main.go:141] libmachine: Using SSH client type: native
	I0717 00:55:06.151033   29983 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0717 00:55:06.151051   29983 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 00:56:36.922337   29983 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 00:56:36.922366   29983 machine.go:97] duration metric: took 1m31.427187344s to provisionDockerMachine
	I0717 00:56:36.922378   29983 start.go:293] postStartSetup for "ha-029113" (driver="kvm2")
	I0717 00:56:36.922388   29983 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 00:56:36.922401   29983 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:56:36.922709   29983 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 00:56:36.922731   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:56:36.925696   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:56:36.926069   29983 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:56:36.926098   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:56:36.926198   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:56:36.926367   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:56:36.926528   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:56:36.926646   29983 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:56:37.005605   29983 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 00:56:37.009755   29983 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 00:56:37.009775   29983 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 00:56:37.009823   29983 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 00:56:37.009894   29983 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 00:56:37.009903   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> /etc/ssl/certs/112592.pem
	I0717 00:56:37.010000   29983 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 00:56:37.019312   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 00:56:37.047579   29983 start.go:296] duration metric: took 125.18763ms for postStartSetup
	I0717 00:56:37.047619   29983 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:56:37.047920   29983 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 00:56:37.047943   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:56:37.050582   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:56:37.051006   29983 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:56:37.051029   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:56:37.051189   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:56:37.051377   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:56:37.051537   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:56:37.051697   29983 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	W0717 00:56:37.133080   29983 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0717 00:56:37.133102   29983 fix.go:56] duration metric: took 1m31.65829482s for fixHost
	I0717 00:56:37.133125   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:56:37.135706   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:56:37.136075   29983 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:56:37.136103   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:56:37.136215   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:56:37.136405   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:56:37.136575   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:56:37.136723   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:56:37.136882   29983 main.go:141] libmachine: Using SSH client type: native
	I0717 00:56:37.137108   29983 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0717 00:56:37.137125   29983 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 00:56:37.251355   29983 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721177797.223322601
	
	I0717 00:56:37.251392   29983 fix.go:216] guest clock: 1721177797.223322601
	I0717 00:56:37.251401   29983 fix.go:229] Guest: 2024-07-17 00:56:37.223322601 +0000 UTC Remote: 2024-07-17 00:56:37.133109222 +0000 UTC m=+91.782309028 (delta=90.213379ms)
	I0717 00:56:37.251434   29983 fix.go:200] guest clock delta is within tolerance: 90.213379ms
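
fix.go above reads the guest clock over SSH with "date +%s.%N", compares it to the host clock captured at the same moment, and only resyncs when the skew exceeds a tolerance; here the 90.213379ms delta is accepted. A rough sketch of that comparison, with a hypothetical withinTolerance helper and an assumed 2s tolerance:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// withinTolerance is a hypothetical helper: guestOut is the guest's
// "date +%s.%N" output, host is the host-side timestamp captured alongside it,
// and tolerance is the maximum acceptable skew before a clock resync.
func withinTolerance(guestOut string, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, false
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Duration(math.Abs(float64(host.Sub(guest))))
	return delta, delta <= tolerance
}

func main() {
	// Values taken from the log lines above.
	host := time.Date(2024, 7, 17, 0, 56, 37, 133109222, time.UTC)
	delta, ok := withinTolerance("1721177797.223322601", host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}
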
	I0717 00:56:37.251439   29983 start.go:83] releasing machines lock for "ha-029113", held for 1m31.776656084s
	I0717 00:56:37.251461   29983 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:56:37.251716   29983 main.go:141] libmachine: (ha-029113) Calling .GetIP
	I0717 00:56:37.254471   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:56:37.254864   29983 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:56:37.254881   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:56:37.255059   29983 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:56:37.255616   29983 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:56:37.255785   29983 main.go:141] libmachine: (ha-029113) Calling .DriverName
	I0717 00:56:37.255866   29983 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 00:56:37.255912   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:56:37.255986   29983 ssh_runner.go:195] Run: cat /version.json
	I0717 00:56:37.256003   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHHostname
	I0717 00:56:37.258661   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:56:37.258913   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:56:37.259053   29983 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:56:37.259082   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:56:37.259183   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:56:37.259282   29983 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:56:37.259360   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:56:37.259576   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHPort
	I0717 00:56:37.259591   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:56:37.259746   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHKeyPath
	I0717 00:56:37.259761   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:56:37.259936   29983 main.go:141] libmachine: (ha-029113) Calling .GetSSHUsername
	I0717 00:56:37.259995   29983 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:56:37.260096   29983 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/ha-029113/id_rsa Username:docker}
	I0717 00:56:37.332211   29983 ssh_runner.go:195] Run: systemctl --version
	I0717 00:56:37.358582   29983 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 00:56:37.519154   29983 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 00:56:37.524824   29983 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 00:56:37.524886   29983 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:56:37.533955   29983 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
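
The two steps above look for leftover bridge/podman CNI configs in /etc/cni/net.d and rename any matches with a .mk_disabled suffix so only the expected CNI (kindnet here) stays active; on this run nothing matched. A sketch of the same rename done with the standard library instead of the find/mv pipeline in the log:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Candidate configs, matching the patterns used by the logged find command.
	patterns := []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"}
	for _, pattern := range patterns {
		matches, err := filepath.Glob(pattern)
		if err != nil {
			continue
		}
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Println("rename failed:", err)
				continue
			}
			fmt.Println("disabled", m)
		}
	}
}
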
	I0717 00:56:37.533974   29983 start.go:495] detecting cgroup driver to use...
	I0717 00:56:37.534019   29983 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 00:56:37.550886   29983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 00:56:37.564477   29983 docker.go:217] disabling cri-docker service (if available) ...
	I0717 00:56:37.564535   29983 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 00:56:37.577933   29983 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 00:56:37.591423   29983 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 00:56:37.744811   29983 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 00:56:37.883175   29983 docker.go:233] disabling docker service ...
	I0717 00:56:37.883250   29983 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 00:56:37.899939   29983 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 00:56:37.912784   29983 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 00:56:38.053345   29983 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 00:56:38.201480   29983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 00:56:38.216109   29983 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 00:56:38.236452   29983 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 00:56:38.236521   29983 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:56:38.247620   29983 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 00:56:38.247679   29983 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:56:38.258380   29983 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:56:38.268968   29983 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:56:38.279830   29983 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 00:56:38.290311   29983 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:56:38.300507   29983 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:56:38.311975   29983 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:56:38.322247   29983 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 00:56:38.331495   29983 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 00:56:38.340950   29983 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:56:38.481153   29983 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 00:56:38.757154   29983 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 00:56:38.757233   29983 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 00:56:38.762884   29983 start.go:563] Will wait 60s for crictl version
	I0717 00:56:38.762936   29983 ssh_runner.go:195] Run: which crictl
	I0717 00:56:38.766933   29983 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 00:56:38.802395   29983 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
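
After CRI-O is reconfigured and restarted, start.go polls for the /var/run/crio/crio.sock socket (up to 60s) and then confirms the runtime answers a version request; the response above reports cri-o 1.29.1. A minimal sketch of that readiness wait, using a hypothetical waitForSocket helper:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForSocket is a hypothetical helper that polls until path exists or the
// timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	// Same follow-up check as the log: ask the runtime for its version.
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Print(string(out))
}
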
	I0717 00:56:38.802477   29983 ssh_runner.go:195] Run: crio --version
	I0717 00:56:38.835518   29983 ssh_runner.go:195] Run: crio --version
	I0717 00:56:38.866346   29983 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 00:56:38.867786   29983 main.go:141] libmachine: (ha-029113) Calling .GetIP
	I0717 00:56:38.870376   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:56:38.870822   29983 main.go:141] libmachine: (ha-029113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:d5:10", ip: ""} in network mk-ha-029113: {Iface:virbr1 ExpiryTime:2024-07-17 01:43:43 +0000 UTC Type:0 Mac:52:54:00:04:d5:10 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-029113 Clientid:01:52:54:00:04:d5:10}
	I0717 00:56:38.870848   29983 main.go:141] libmachine: (ha-029113) DBG | domain ha-029113 has defined IP address 192.168.39.95 and MAC address 52:54:00:04:d5:10 in network mk-ha-029113
	I0717 00:56:38.871035   29983 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 00:56:38.875939   29983 kubeadm.go:883] updating cluster {Name:ha-029113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-029113 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.48 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 00:56:38.876067   29983 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:56:38.876101   29983 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:56:38.922327   29983 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 00:56:38.922349   29983 crio.go:433] Images already preloaded, skipping extraction
	I0717 00:56:38.922427   29983 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:56:38.959437   29983 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 00:56:38.959457   29983 cache_images.go:84] Images are preloaded, skipping loading
	I0717 00:56:38.959465   29983 kubeadm.go:934] updating node { 192.168.39.95 8443 v1.30.2 crio true true} ...
	I0717 00:56:38.959568   29983 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-029113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-029113 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 00:56:38.959649   29983 ssh_runner.go:195] Run: crio config
	I0717 00:56:39.013116   29983 cni.go:84] Creating CNI manager for ""
	I0717 00:56:39.013136   29983 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 00:56:39.013146   29983 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 00:56:39.013175   29983 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.95 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-029113 NodeName:ha-029113 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 00:56:39.013307   29983 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.95
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-029113"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.95"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
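
The kubeadm config above is rendered from the options logged by kubeadm.go:181 (advertise address, pod and service CIDRs, cgroup driver, CRI socket, and so on). A much-reduced sketch of rendering such a config from a struct with text/template; the struct and template here are simplified stand-ins, not minikube's actual ones:

package main

import (
	"os"
	"text/template"
)

// opts is a simplified stand-in for the kubeadm options in the log above.
type opts struct {
	KubernetesVersion string
	APIServerPort     int
	AdvertiseAddress  string
	PodSubnet         string
	ServiceCIDR       string
}

const clusterConfig = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
apiServer:
  certSANs: ["127.0.0.1", "localhost", "{{.AdvertiseAddress}}"]
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(clusterConfig))
	err := t.Execute(os.Stdout, opts{
		KubernetesVersion: "v1.30.2",
		APIServerPort:     8443,
		AdvertiseAddress:  "192.168.39.95",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	})
	if err != nil {
		panic(err)
	}
}
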
	
	I0717 00:56:39.013325   29983 kube-vip.go:115] generating kube-vip config ...
	I0717 00:56:39.013366   29983 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 00:56:39.025164   29983 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 00:56:39.025279   29983 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0717 00:56:39.025330   29983 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 00:56:39.035063   29983 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 00:56:39.035135   29983 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0717 00:56:39.044312   29983 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0717 00:56:39.060961   29983 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 00:56:39.077560   29983 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0717 00:56:39.094429   29983 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0717 00:56:39.112781   29983 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 00:56:39.116810   29983 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:56:39.260438   29983 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:56:39.276419   29983 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113 for IP: 192.168.39.95
	I0717 00:56:39.276455   29983 certs.go:194] generating shared ca certs ...
	I0717 00:56:39.276469   29983 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:56:39.276640   29983 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 00:56:39.276688   29983 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 00:56:39.276696   29983 certs.go:256] generating profile certs ...
	I0717 00:56:39.276807   29983 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/client.key
	I0717 00:56:39.276842   29983 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.5f30ed13
	I0717 00:56:39.276862   29983 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.5f30ed13 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.95 192.168.39.166 192.168.39.100 192.168.39.254]
	I0717 00:56:39.417192   29983 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.5f30ed13 ...
	I0717 00:56:39.417223   29983 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.5f30ed13: {Name:mka5e562e601efbe0a1950f918014c0baf1c3196 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:56:39.417392   29983 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.5f30ed13 ...
	I0717 00:56:39.417404   29983 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.5f30ed13: {Name:mkaf79bf149acd16cf17ccae5a21d9e04c41a0d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:56:39.417472   29983 certs.go:381] copying /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt.5f30ed13 -> /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt
	I0717 00:56:39.417602   29983 certs.go:385] copying /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key.5f30ed13 -> /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key
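
The apiserver profile certificate regenerated above carries every address a client might use to reach the API: the in-cluster service VIP 10.96.0.1, loopback, each control-plane node IP, and the kube-vip address 192.168.39.254. A compact sketch of issuing a certificate with IP SANs via crypto/x509; it is self-signed to stay short, whereas the logged flow signs with the shared minikube CA:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// Subset of the SAN list from the log above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("192.168.39.95"),
			net.ParseIP("192.168.39.254"),
		},
	}
	// Self-signed (template is its own parent) purely to keep the sketch short.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
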
	I0717 00:56:39.417718   29983 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.key
	I0717 00:56:39.417732   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 00:56:39.417744   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 00:56:39.417755   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 00:56:39.417767   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 00:56:39.417779   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 00:56:39.417789   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 00:56:39.417800   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 00:56:39.417812   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 00:56:39.417896   29983 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 00:56:39.417937   29983 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 00:56:39.417946   29983 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 00:56:39.417970   29983 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 00:56:39.417999   29983 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 00:56:39.418028   29983 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 00:56:39.418063   29983 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 00:56:39.418088   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:56:39.418101   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem -> /usr/share/ca-certificates/11259.pem
	I0717 00:56:39.418113   29983 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> /usr/share/ca-certificates/112592.pem
	I0717 00:56:39.418636   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 00:56:39.444603   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 00:56:39.468830   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 00:56:39.492862   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 00:56:39.516126   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 00:56:39.539010   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 00:56:39.562601   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 00:56:39.587034   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/ha-029113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 00:56:39.610364   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 00:56:39.633229   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 00:56:39.655960   29983 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 00:56:39.679007   29983 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 00:56:39.695395   29983 ssh_runner.go:195] Run: openssl version
	I0717 00:56:39.701087   29983 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 00:56:39.712227   29983 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 00:56:39.716576   29983 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 00:56:39.716620   29983 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 00:56:39.722207   29983 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 00:56:39.731284   29983 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 00:56:39.741551   29983 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:56:39.745990   29983 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:56:39.746033   29983 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:56:39.752288   29983 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 00:56:39.761612   29983 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 00:56:39.772522   29983 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 00:56:39.798690   29983 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 00:56:39.798786   29983 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 00:56:39.838893   29983 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
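
Each CA certificate installed above also gets a symlink named after its OpenSSL subject hash (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the test certs), which is how OpenSSL-style clients look up CAs in /etc/ssl/certs. A sketch that reproduces the link step by shelling out to the same openssl invocation the log runs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem"
	// Same command as in the log: print the subject hash of the certificate.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs equivalent: drop any existing link, then point it at the cert.
	os.Remove(link)
	if err := os.Symlink(certPath, link); err != nil {
		fmt.Println("symlink failed:", err)
		return
	}
	fmt.Println(link, "->", certPath)
}
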
	I0717 00:56:39.852506   29983 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 00:56:39.869058   29983 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 00:56:39.877232   29983 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 00:56:39.887825   29983 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 00:56:39.917174   29983 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 00:56:39.931385   29983 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 00:56:40.012309   29983 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
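
The openssl x509 -checkend 86400 runs above ask whether each control-plane certificate is still valid 24 hours from now; a non-zero exit here would force regeneration before the restart proceeds. The same test expressed in Go, parsing the PEM and comparing NotAfter against now+24h (the path and window come from the log, the rest is a sketch):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires inside window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
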
	I0717 00:56:40.031972   29983 kubeadm.go:392] StartCluster: {Name:ha-029113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-029113 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.166 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.48 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:56:40.032091   29983 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 00:56:40.032163   29983 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 00:56:40.312233   29983 cri.go:89] found id: "68f837469c555571a915d20ae768d0ef5c7c7dbd1860e545e596fe6c20674da3"
	I0717 00:56:40.312257   29983 cri.go:89] found id: "f5a9880ef5b625bad2f5157bf22504ce6e66f5f00d6c08e82ff184c60e4597df"
	I0717 00:56:40.312263   29983 cri.go:89] found id: "b3e15314572524bc8ab46c72e1e61c148971453ca54384e37efa2a758b66e153"
	I0717 00:56:40.312268   29983 cri.go:89] found id: "708012203a1a0c288ad05e4f57829807ab3a232d3085326c1ba8459d2a3964fc"
	I0717 00:56:40.312272   29983 cri.go:89] found id: "0f3b600dde6603420f9edd85561f798c02ecea709368a2a6b922a3e812198caa"
	I0717 00:56:40.312276   29983 cri.go:89] found id: "14ce89e605287179bb1c9551273ee5355577c744e43f9e9eee59d6939e47680e"
	I0717 00:56:40.312280   29983 cri.go:89] found id: "21b3cbbc53732e70a6bb66be9909790395a4901c4730eea9ec8b9349fed80909"
	I0717 00:56:40.312283   29983 cri.go:89] found id: "535a2b743f28f9760f7341aeb36dd3506cf26bdec15966f0498cbd52b73c8d11"
	I0717 00:56:40.312287   29983 cri.go:89] found id: "af1a2d97ac6f8ec07b35a9b5d767af997cb8270ce3b4e681cba5557ed7119c85"
	I0717 00:56:40.312295   29983 cri.go:89] found id: "8ad50613626477643d3e0c2f0a01a20d0cc987aa6e58083bbf3993d41f97acd0"
	I0717 00:56:40.312300   29983 cri.go:89] found id: ""
	I0717 00:56:40.312349   29983 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 17 01:01:47 ha-029113 crio[3725]: time="2024-07-17 01:01:47.728485543Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:b25c7626884123d45ed391ba1f319608a2ece44fa3c23574de75c74d462299d8,Verbose:false,}" file="otel-collector/interceptors.go:62" id=b5df73f6-d799-4568-87b6-4290f8dff2e8 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 17 01:01:47 ha-029113 crio[3725]: time="2024-07-17 01:01:47.728599838Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:b25c7626884123d45ed391ba1f319608a2ece44fa3c23574de75c74d462299d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1721177800462786903,StartedAt:1721177800654446129,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.12-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b30623d3b177a2ad33eea05d973c32ae,},Annotations:map[string]string{io.kubernetes.container.hash: 237bdd65,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/b30623d3b177a2ad33eea05d973c32ae/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/b30623d3b177a2ad33eea05d973c32ae/containers/etcd/a8bbba16,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd-ha-029113_b3062
3d3b177a2ad33eea05d973c32ae/etcd/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=b5df73f6-d799-4568-87b6-4290f8dff2e8 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 17 01:01:47 ha-029113 crio[3725]: time="2024-07-17 01:01:47.752182231Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=50385497-88c0-48b3-b3c3-4a5848c45c9b name=/runtime.v1.RuntimeService/Version
	Jul 17 01:01:47 ha-029113 crio[3725]: time="2024-07-17 01:01:47.752250677Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50385497-88c0-48b3-b3c3-4a5848c45c9b name=/runtime.v1.RuntimeService/Version
	Jul 17 01:01:47 ha-029113 crio[3725]: time="2024-07-17 01:01:47.753332832Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1e5d9c0f-50f0-47d9-b15f-30e3826b3760 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:01:47 ha-029113 crio[3725]: time="2024-07-17 01:01:47.754601859Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721178107754574093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e5d9c0f-50f0-47d9-b15f-30e3826b3760 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:01:47 ha-029113 crio[3725]: time="2024-07-17 01:01:47.757494101Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5972c8b5-e97c-443d-a2fb-4ab649f09251 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:01:47 ha-029113 crio[3725]: time="2024-07-17 01:01:47.757696838Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5972c8b5-e97c-443d-a2fb-4ab649f09251 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:01:47 ha-029113 crio[3725]: time="2024-07-17 01:01:47.758629674Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7af51385090a4705d923df398d40d5eb591f0facb435053b22a9ac19fd2c5d77,PodSandboxId:aeeb4918aaf739a0e238dee07c51549c82da07956754993f90b9cdf1199390e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721177904445420635,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f04e5d-469e-4432-bd31-dbe772194f84,},Annotations:map[string]string{io.kubernetes.container.hash: af7d5ead,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbcd737b5deaa70a51d564310f7dcbeb03991c78f098fa674b8fd190f3bed835,PodSandboxId:aeeb4918aaf739a0e238dee07c51549c82da07956754993f90b9cdf1199390e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721177847467500435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f04e5d-469e-4432-bd31-dbe772194f84,},Annotations:map[string]string{io.kubernetes.container.hash: af7d5ead,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a549df20ed996b85dfd4a228aafa38755eb325c2b8eea9d194a2f39a43c6f997,PodSandboxId:d800735392b6631d97a7360f8d324cf0b91171e4b8b51d1647ffeebbbd651b09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721177843451437451,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72ab88d686ab6cc9d0420aba5d01a15,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d7f98a847746b8a00d932181e11b880825032ec553e6de48f6736a756f14df2,PodSandboxId:b05df37c6d755d6a253d08512055765f22d4ebfafe4fe6b0fa8d3590f7c384be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721177842445013762,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7b6e57bdff8504832bd80612dfbf7e,},Annotations:map[string]string{io.kubernetes.container.hash: c985e752,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0474fb323a70672e8f200cddcb9f7c9eb582f1c30aad6b8c69e99a1d86da9ca,PodSandboxId:d9df264711dcb7bb9b84b414a85826bc58ef13be7d716a88cc51ceec878d5b6d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721177833782339596,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pf5xn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c25795f2-3205-495b-83b1-e3afd79b87b5,},Annotations:map[string]string{io.kubernetes.container.hash: 78a43c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ea42e5105cf6ac5a5d0ba80d2db43d797d8b7ed3be4351ebdfbba6932081003,PodSandboxId:e21bd8195ae4f1c5d26e7e689a507f5065302c1692275061cdd6f99690de8b4b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721177816083064296,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca3a6e785cd2094ab5381e06e8b1758d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3dd104a40414a4c184f292042d98c34c4f2a081a0be74523d1c7da92597f487,PodSandboxId:12b80a1b9425668e9ab47d88685121897996822623edab194b9fb1ac50d00fc1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_RUNNING,CreatedAt:1721177800740417213,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8xg7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a612c634-49ef-4357-9b36-f5cc6604bdd7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf5f516,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:7861b2bf7bfee994fa80936b6d31f5ccf7960c4e0a63bd4312af282fca47083f,PodSandboxId:f692177694e3d7c7e6a87ec2b166f6cac56503094e7e2649eca372934922baf3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177800682850306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62m67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5029f9dc-6792-44d9-9296-ec5ab6d72274,},Annotations:map[string]string{io.kubernetes.container.hash: b4292bc8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94d7b223099b0137ef8cae73ccac8ee50e2f9c818656cb2c2674f8d8d1514fd0,PodSandboxId:b7cdeb95f25db3f6f2503c8abf76267b222caacd21d0b1a8e67e34b512229584,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177800522460835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdlls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4344b971-b979-42f8-8fa8-01f2d64bb51a,},Annotations:map[string]string{io.kubernetes.container.hash: 92aed468,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dcb54666e9d9613e1f68d0f760cc3e0c95b06e2fe3747f5a97c72ba0a238d16,PodSandboxId:d800735392b6631d97a7360f8d324cf0b91171e4b8b51d1647ffeebbbd651b09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721177800526688585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-cont
roller-manager-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72ab88d686ab6cc9d0420aba5d01a15,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4effa58e46e219c34739e40820dbe2178f075c45e540541619e339690fa06184,PodSandboxId:b05df37c6d755d6a253d08512055765f22d4ebfafe4fe6b0fa8d3590f7c384be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721177800508238100,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-0291
13,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7b6e57bdff8504832bd80612dfbf7e,},Annotations:map[string]string{io.kubernetes.container.hash: c985e752,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:900f665d54096f7765fdf4465f1760855d73180627d403518f43e27d98beda89,PodSandboxId:bec07272b68700204eb042d072de745e98b89e96c1ca645a5926e6ddb134986e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721177800296153947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg2kp,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: db9243f4-bcc0-406a-a8f2-ccdbc00f6341,},Annotations:map[string]string{io.kubernetes.container.hash: bf5cb09d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7221f2622be960846131245cf8a060743228bd2fb4267445a85ad2461e84f042,PodSandboxId:30c1fcbc2ced4efc64c53697813def8a44ad5787e4d0059a9098e5a0ee6315d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721177800493353171,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: dfaf2cf7acfa42b80c634ab40c7656b2,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b25c7626884123d45ed391ba1f319608a2ece44fa3c23574de75c74d462299d8,PodSandboxId:664b36c878574a28b197c9c401e29493c78b7a534fc931c89a99b95559ad0030,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721177800219412908,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b30623d3b177a2ad33eea05d973c32ae,},Anno
tations:map[string]string{io.kubernetes.container.hash: 237bdd65,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4870ffc6ba7cf7b50ae09cfb1b393746f7a5152c53089614e9b07b30aee219,PodSandboxId:a45c7f17109af295fd8afd8d3d7ac1b2d54517db5ca206ff393a5b9cc8c7cadb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721177307303963678,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pf5xn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c25795f2-3205-495b-83b1-e3afd79b87b5,},Annota
tions:map[string]string{io.kubernetes.container.hash: 78a43c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3b600dde6603420f9edd85561f798c02ecea709368a2a6b922a3e812198caa,PodSandboxId:f8a5889bb1d2bc6fc103eb2de48515a8de335a3970107dc7e37e0c22d7a122a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721177077742775453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdlls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4344b971-b979-42f8-8fa8-01f2d64bb51a,},Annotations:map[string]string{io.kuber
netes.container.hash: 92aed468,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:708012203a1a0c288ad05e4f57829807ab3a232d3085326c1ba8459d2a3964fc,PodSandboxId:9323719ef65477afea1f0946bd6a2c1e18bb115e22dd9402ce719955b37f0450,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721177077777282664,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62m67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5029f9dc-6792-44d9-9296-ec5ab6d72274,},Annotations:map[string]string{io.kubernetes.container.hash: b4292bc8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ce89e605287179bb1c9551273ee5355577c744e43f9e9eee59d6939e47680e,PodSandboxId:a30304f1d93beec21b943e8d0bdda82fd14ec8ef078fe74dc48282c421a8da13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_EXITED,CreatedAt:1721177065689995452,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8xg7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a612c634-49ef-4357-9b36-f5cc6604bdd7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf5f516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b3cbbc53732e70a6bb66be9909790395a4901c4730eea9ec8b9349fed80909,PodSandboxId:9fc93d7901e92fce9ee6a04a1927e3489dc770d57d9fd38dac899f3ab057cf7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721177060624682903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg2kp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9243f4-bcc0-406a-a8f2-ccdbc00f6341,},Annotations:map[string]string{io.kubernetes.container.hash: bf5cb09d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535a2b743f28f9760f7341aeb36dd3506cf26bdec15966f0498cbd52b73c8d11,PodSandboxId:9dca109899a3fdfc61bdea7b81459635bde9c71670c2636ee3f9b16cec6a2bcb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062
788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721177041034280990,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b30623d3b177a2ad33eea05d973c32ae,},Annotations:map[string]string{io.kubernetes.container.hash: 237bdd65,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af1a2d97ac6f8ec07b35a9b5d767af997cb8270ce3b4e681cba5557ed7119c85,PodSandboxId:5eb5a4397caa30b268143518f9f2a1880ae38ebd81aa2e5c40e7883a1c9c49b3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b94
0,State:CONTAINER_EXITED,CreatedAt:1721177040986220008,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfaf2cf7acfa42b80c634ab40c7656b2,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5972c8b5-e97c-443d-a2fb-4ab649f09251 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:01:47 ha-029113 crio[3725]: time="2024-07-17 01:01:47.790393860Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=7b7881ca-50a9-40d5-9439-ffa37b8141a1 name=/runtime.v1.RuntimeService/Status
	Jul 17 01:01:47 ha-029113 crio[3725]: time="2024-07-17 01:01:47.790481557Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=7b7881ca-50a9-40d5-9439-ffa37b8141a1 name=/runtime.v1.RuntimeService/Status
	Jul 17 01:01:47 ha-029113 crio[3725]: time="2024-07-17 01:01:47.801405156Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=24fa3e9e-ff59-4f26-80dd-fd8d31c6b859 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:01:47 ha-029113 crio[3725]: time="2024-07-17 01:01:47.801491903Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=24fa3e9e-ff59-4f26-80dd-fd8d31c6b859 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:01:47 ha-029113 crio[3725]: time="2024-07-17 01:01:47.802587964Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dddbdb85-fddc-4641-8da4-a720eb24e475 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:01:47 ha-029113 crio[3725]: time="2024-07-17 01:01:47.803104647Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721178107803083513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dddbdb85-fddc-4641-8da4-a720eb24e475 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:01:47 ha-029113 crio[3725]: time="2024-07-17 01:01:47.803589511Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e14a94fa-83cf-44df-bb92-8853fc161ec6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:01:47 ha-029113 crio[3725]: time="2024-07-17 01:01:47.803661161Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e14a94fa-83cf-44df-bb92-8853fc161ec6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:01:47 ha-029113 crio[3725]: time="2024-07-17 01:01:47.804193843Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7af51385090a4705d923df398d40d5eb591f0facb435053b22a9ac19fd2c5d77,PodSandboxId:aeeb4918aaf739a0e238dee07c51549c82da07956754993f90b9cdf1199390e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721177904445420635,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f04e5d-469e-4432-bd31-dbe772194f84,},Annotations:map[string]string{io.kubernetes.container.hash: af7d5ead,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbcd737b5deaa70a51d564310f7dcbeb03991c78f098fa674b8fd190f3bed835,PodSandboxId:aeeb4918aaf739a0e238dee07c51549c82da07956754993f90b9cdf1199390e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721177847467500435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f04e5d-469e-4432-bd31-dbe772194f84,},Annotations:map[string]string{io.kubernetes.container.hash: af7d5ead,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a549df20ed996b85dfd4a228aafa38755eb325c2b8eea9d194a2f39a43c6f997,PodSandboxId:d800735392b6631d97a7360f8d324cf0b91171e4b8b51d1647ffeebbbd651b09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721177843451437451,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72ab88d686ab6cc9d0420aba5d01a15,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d7f98a847746b8a00d932181e11b880825032ec553e6de48f6736a756f14df2,PodSandboxId:b05df37c6d755d6a253d08512055765f22d4ebfafe4fe6b0fa8d3590f7c384be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721177842445013762,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7b6e57bdff8504832bd80612dfbf7e,},Annotations:map[string]string{io.kubernetes.container.hash: c985e752,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0474fb323a70672e8f200cddcb9f7c9eb582f1c30aad6b8c69e99a1d86da9ca,PodSandboxId:d9df264711dcb7bb9b84b414a85826bc58ef13be7d716a88cc51ceec878d5b6d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721177833782339596,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pf5xn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c25795f2-3205-495b-83b1-e3afd79b87b5,},Annotations:map[string]string{io.kubernetes.container.hash: 78a43c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ea42e5105cf6ac5a5d0ba80d2db43d797d8b7ed3be4351ebdfbba6932081003,PodSandboxId:e21bd8195ae4f1c5d26e7e689a507f5065302c1692275061cdd6f99690de8b4b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721177816083064296,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca3a6e785cd2094ab5381e06e8b1758d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3dd104a40414a4c184f292042d98c34c4f2a081a0be74523d1c7da92597f487,PodSandboxId:12b80a1b9425668e9ab47d88685121897996822623edab194b9fb1ac50d00fc1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_RUNNING,CreatedAt:1721177800740417213,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8xg7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a612c634-49ef-4357-9b36-f5cc6604bdd7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf5f516,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:7861b2bf7bfee994fa80936b6d31f5ccf7960c4e0a63bd4312af282fca47083f,PodSandboxId:f692177694e3d7c7e6a87ec2b166f6cac56503094e7e2649eca372934922baf3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177800682850306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62m67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5029f9dc-6792-44d9-9296-ec5ab6d72274,},Annotations:map[string]string{io.kubernetes.container.hash: b4292bc8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94d7b223099b0137ef8cae73ccac8ee50e2f9c818656cb2c2674f8d8d1514fd0,PodSandboxId:b7cdeb95f25db3f6f2503c8abf76267b222caacd21d0b1a8e67e34b512229584,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177800522460835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdlls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4344b971-b979-42f8-8fa8-01f2d64bb51a,},Annotations:map[string]string{io.kubernetes.container.hash: 92aed468,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dcb54666e9d9613e1f68d0f760cc3e0c95b06e2fe3747f5a97c72ba0a238d16,PodSandboxId:d800735392b6631d97a7360f8d324cf0b91171e4b8b51d1647ffeebbbd651b09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721177800526688585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-cont
roller-manager-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72ab88d686ab6cc9d0420aba5d01a15,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4effa58e46e219c34739e40820dbe2178f075c45e540541619e339690fa06184,PodSandboxId:b05df37c6d755d6a253d08512055765f22d4ebfafe4fe6b0fa8d3590f7c384be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721177800508238100,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-0291
13,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7b6e57bdff8504832bd80612dfbf7e,},Annotations:map[string]string{io.kubernetes.container.hash: c985e752,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:900f665d54096f7765fdf4465f1760855d73180627d403518f43e27d98beda89,PodSandboxId:bec07272b68700204eb042d072de745e98b89e96c1ca645a5926e6ddb134986e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721177800296153947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg2kp,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: db9243f4-bcc0-406a-a8f2-ccdbc00f6341,},Annotations:map[string]string{io.kubernetes.container.hash: bf5cb09d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7221f2622be960846131245cf8a060743228bd2fb4267445a85ad2461e84f042,PodSandboxId:30c1fcbc2ced4efc64c53697813def8a44ad5787e4d0059a9098e5a0ee6315d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721177800493353171,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: dfaf2cf7acfa42b80c634ab40c7656b2,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b25c7626884123d45ed391ba1f319608a2ece44fa3c23574de75c74d462299d8,PodSandboxId:664b36c878574a28b197c9c401e29493c78b7a534fc931c89a99b95559ad0030,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721177800219412908,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b30623d3b177a2ad33eea05d973c32ae,},Anno
tations:map[string]string{io.kubernetes.container.hash: 237bdd65,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4870ffc6ba7cf7b50ae09cfb1b393746f7a5152c53089614e9b07b30aee219,PodSandboxId:a45c7f17109af295fd8afd8d3d7ac1b2d54517db5ca206ff393a5b9cc8c7cadb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721177307303963678,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pf5xn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c25795f2-3205-495b-83b1-e3afd79b87b5,},Annota
tions:map[string]string{io.kubernetes.container.hash: 78a43c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3b600dde6603420f9edd85561f798c02ecea709368a2a6b922a3e812198caa,PodSandboxId:f8a5889bb1d2bc6fc103eb2de48515a8de335a3970107dc7e37e0c22d7a122a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721177077742775453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdlls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4344b971-b979-42f8-8fa8-01f2d64bb51a,},Annotations:map[string]string{io.kuber
netes.container.hash: 92aed468,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:708012203a1a0c288ad05e4f57829807ab3a232d3085326c1ba8459d2a3964fc,PodSandboxId:9323719ef65477afea1f0946bd6a2c1e18bb115e22dd9402ce719955b37f0450,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721177077777282664,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62m67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5029f9dc-6792-44d9-9296-ec5ab6d72274,},Annotations:map[string]string{io.kubernetes.container.hash: b4292bc8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ce89e605287179bb1c9551273ee5355577c744e43f9e9eee59d6939e47680e,PodSandboxId:a30304f1d93beec21b943e8d0bdda82fd14ec8ef078fe74dc48282c421a8da13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_EXITED,CreatedAt:1721177065689995452,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8xg7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a612c634-49ef-4357-9b36-f5cc6604bdd7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf5f516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b3cbbc53732e70a6bb66be9909790395a4901c4730eea9ec8b9349fed80909,PodSandboxId:9fc93d7901e92fce9ee6a04a1927e3489dc770d57d9fd38dac899f3ab057cf7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721177060624682903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg2kp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9243f4-bcc0-406a-a8f2-ccdbc00f6341,},Annotations:map[string]string{io.kubernetes.container.hash: bf5cb09d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535a2b743f28f9760f7341aeb36dd3506cf26bdec15966f0498cbd52b73c8d11,PodSandboxId:9dca109899a3fdfc61bdea7b81459635bde9c71670c2636ee3f9b16cec6a2bcb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062
788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721177041034280990,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b30623d3b177a2ad33eea05d973c32ae,},Annotations:map[string]string{io.kubernetes.container.hash: 237bdd65,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af1a2d97ac6f8ec07b35a9b5d767af997cb8270ce3b4e681cba5557ed7119c85,PodSandboxId:5eb5a4397caa30b268143518f9f2a1880ae38ebd81aa2e5c40e7883a1c9c49b3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b94
0,State:CONTAINER_EXITED,CreatedAt:1721177040986220008,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfaf2cf7acfa42b80c634ab40c7656b2,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e14a94fa-83cf-44df-bb92-8853fc161ec6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:01:47 ha-029113 crio[3725]: time="2024-07-17 01:01:47.845996915Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d46d0340-d7f7-4b37-959b-bda70b889ac8 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:01:47 ha-029113 crio[3725]: time="2024-07-17 01:01:47.846070213Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d46d0340-d7f7-4b37-959b-bda70b889ac8 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:01:47 ha-029113 crio[3725]: time="2024-07-17 01:01:47.847079673Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=92ed3c37-a33c-4c34-82e3-a2add19ca1ea name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:01:47 ha-029113 crio[3725]: time="2024-07-17 01:01:47.847498543Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721178107847473502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=92ed3c37-a33c-4c34-82e3-a2add19ca1ea name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:01:47 ha-029113 crio[3725]: time="2024-07-17 01:01:47.848353151Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=03e19897-1f66-4414-b9dc-67b77adf5d09 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:01:47 ha-029113 crio[3725]: time="2024-07-17 01:01:47.848409275Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=03e19897-1f66-4414-b9dc-67b77adf5d09 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:01:47 ha-029113 crio[3725]: time="2024-07-17 01:01:47.848966985Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7af51385090a4705d923df398d40d5eb591f0facb435053b22a9ac19fd2c5d77,PodSandboxId:aeeb4918aaf739a0e238dee07c51549c82da07956754993f90b9cdf1199390e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721177904445420635,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f04e5d-469e-4432-bd31-dbe772194f84,},Annotations:map[string]string{io.kubernetes.container.hash: af7d5ead,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbcd737b5deaa70a51d564310f7dcbeb03991c78f098fa674b8fd190f3bed835,PodSandboxId:aeeb4918aaf739a0e238dee07c51549c82da07956754993f90b9cdf1199390e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721177847467500435,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f04e5d-469e-4432-bd31-dbe772194f84,},Annotations:map[string]string{io.kubernetes.container.hash: af7d5ead,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a549df20ed996b85dfd4a228aafa38755eb325c2b8eea9d194a2f39a43c6f997,PodSandboxId:d800735392b6631d97a7360f8d324cf0b91171e4b8b51d1647ffeebbbd651b09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721177843451437451,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72ab88d686ab6cc9d0420aba5d01a15,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d7f98a847746b8a00d932181e11b880825032ec553e6de48f6736a756f14df2,PodSandboxId:b05df37c6d755d6a253d08512055765f22d4ebfafe4fe6b0fa8d3590f7c384be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721177842445013762,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7b6e57bdff8504832bd80612dfbf7e,},Annotations:map[string]string{io.kubernetes.container.hash: c985e752,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0474fb323a70672e8f200cddcb9f7c9eb582f1c30aad6b8c69e99a1d86da9ca,PodSandboxId:d9df264711dcb7bb9b84b414a85826bc58ef13be7d716a88cc51ceec878d5b6d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721177833782339596,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pf5xn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c25795f2-3205-495b-83b1-e3afd79b87b5,},Annotations:map[string]string{io.kubernetes.container.hash: 78a43c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ea42e5105cf6ac5a5d0ba80d2db43d797d8b7ed3be4351ebdfbba6932081003,PodSandboxId:e21bd8195ae4f1c5d26e7e689a507f5065302c1692275061cdd6f99690de8b4b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721177816083064296,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca3a6e785cd2094ab5381e06e8b1758d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3dd104a40414a4c184f292042d98c34c4f2a081a0be74523d1c7da92597f487,PodSandboxId:12b80a1b9425668e9ab47d88685121897996822623edab194b9fb1ac50d00fc1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_RUNNING,CreatedAt:1721177800740417213,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8xg7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a612c634-49ef-4357-9b36-f5cc6604bdd7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf5f516,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:7861b2bf7bfee994fa80936b6d31f5ccf7960c4e0a63bd4312af282fca47083f,PodSandboxId:f692177694e3d7c7e6a87ec2b166f6cac56503094e7e2649eca372934922baf3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177800682850306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62m67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5029f9dc-6792-44d9-9296-ec5ab6d72274,},Annotations:map[string]string{io.kubernetes.container.hash: b4292bc8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94d7b223099b0137ef8cae73ccac8ee50e2f9c818656cb2c2674f8d8d1514fd0,PodSandboxId:b7cdeb95f25db3f6f2503c8abf76267b222caacd21d0b1a8e67e34b512229584,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721177800522460835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdlls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4344b971-b979-42f8-8fa8-01f2d64bb51a,},Annotations:map[string]string{io.kubernetes.container.hash: 92aed468,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dcb54666e9d9613e1f68d0f760cc3e0c95b06e2fe3747f5a97c72ba0a238d16,PodSandboxId:d800735392b6631d97a7360f8d324cf0b91171e4b8b51d1647ffeebbbd651b09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721177800526688585,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-cont
roller-manager-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d72ab88d686ab6cc9d0420aba5d01a15,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4effa58e46e219c34739e40820dbe2178f075c45e540541619e339690fa06184,PodSandboxId:b05df37c6d755d6a253d08512055765f22d4ebfafe4fe6b0fa8d3590f7c384be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721177800508238100,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-0291
13,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb7b6e57bdff8504832bd80612dfbf7e,},Annotations:map[string]string{io.kubernetes.container.hash: c985e752,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:900f665d54096f7765fdf4465f1760855d73180627d403518f43e27d98beda89,PodSandboxId:bec07272b68700204eb042d072de745e98b89e96c1ca645a5926e6ddb134986e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721177800296153947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg2kp,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: db9243f4-bcc0-406a-a8f2-ccdbc00f6341,},Annotations:map[string]string{io.kubernetes.container.hash: bf5cb09d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7221f2622be960846131245cf8a060743228bd2fb4267445a85ad2461e84f042,PodSandboxId:30c1fcbc2ced4efc64c53697813def8a44ad5787e4d0059a9098e5a0ee6315d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721177800493353171,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: dfaf2cf7acfa42b80c634ab40c7656b2,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b25c7626884123d45ed391ba1f319608a2ece44fa3c23574de75c74d462299d8,PodSandboxId:664b36c878574a28b197c9c401e29493c78b7a534fc931c89a99b95559ad0030,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721177800219412908,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b30623d3b177a2ad33eea05d973c32ae,},Anno
tations:map[string]string{io.kubernetes.container.hash: 237bdd65,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf4870ffc6ba7cf7b50ae09cfb1b393746f7a5152c53089614e9b07b30aee219,PodSandboxId:a45c7f17109af295fd8afd8d3d7ac1b2d54517db5ca206ff393a5b9cc8c7cadb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721177307303963678,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pf5xn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c25795f2-3205-495b-83b1-e3afd79b87b5,},Annota
tions:map[string]string{io.kubernetes.container.hash: 78a43c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f3b600dde6603420f9edd85561f798c02ecea709368a2a6b922a3e812198caa,PodSandboxId:f8a5889bb1d2bc6fc103eb2de48515a8de335a3970107dc7e37e0c22d7a122a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721177077742775453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdlls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4344b971-b979-42f8-8fa8-01f2d64bb51a,},Annotations:map[string]string{io.kuber
netes.container.hash: 92aed468,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:708012203a1a0c288ad05e4f57829807ab3a232d3085326c1ba8459d2a3964fc,PodSandboxId:9323719ef65477afea1f0946bd6a2c1e18bb115e22dd9402ce719955b37f0450,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721177077777282664,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62m67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5029f9dc-6792-44d9-9296-ec5ab6d72274,},Annotations:map[string]string{io.kubernetes.container.hash: b4292bc8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ce89e605287179bb1c9551273ee5355577c744e43f9e9eee59d6939e47680e,PodSandboxId:a30304f1d93beec21b943e8d0bdda82fd14ec8ef078fe74dc48282c421a8da13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_EXITED,CreatedAt:1721177065689995452,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8xg7d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a612c634-49ef-4357-9b36-f5cc6604bdd7,},Annotations:map[string]string{io.kubernetes.container.hash: cdf5f516,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21b3cbbc53732e70a6bb66be9909790395a4901c4730eea9ec8b9349fed80909,PodSandboxId:9fc93d7901e92fce9ee6a04a1927e3489dc770d57d9fd38dac899f3ab057cf7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721177060624682903,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg2kp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9243f4-bcc0-406a-a8f2-ccdbc00f6341,},Annotations:map[string]string{io.kubernetes.container.hash: bf5cb09d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535a2b743f28f9760f7341aeb36dd3506cf26bdec15966f0498cbd52b73c8d11,PodSandboxId:9dca109899a3fdfc61bdea7b81459635bde9c71670c2636ee3f9b16cec6a2bcb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062
788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721177041034280990,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b30623d3b177a2ad33eea05d973c32ae,},Annotations:map[string]string{io.kubernetes.container.hash: 237bdd65,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af1a2d97ac6f8ec07b35a9b5d767af997cb8270ce3b4e681cba5557ed7119c85,PodSandboxId:5eb5a4397caa30b268143518f9f2a1880ae38ebd81aa2e5c40e7883a1c9c49b3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b94
0,State:CONTAINER_EXITED,CreatedAt:1721177040986220008,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-029113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfaf2cf7acfa42b80c634ab40c7656b2,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=03e19897-1f66-4414-b9dc-67b77adf5d09 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7af51385090a4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   aeeb4918aaf73       storage-provisioner
	cbcd737b5deaa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   aeeb4918aaf73       storage-provisioner
	a549df20ed996       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      4 minutes ago       Running             kube-controller-manager   2                   d800735392b66       kube-controller-manager-ha-029113
	6d7f98a847746       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      4 minutes ago       Running             kube-apiserver            3                   b05df37c6d755       kube-apiserver-ha-029113
	d0474fb323a70       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   d9df264711dcb       busybox-fc5497c4f-pf5xn
	7ea42e5105cf6       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   e21bd8195ae4f       kube-vip-ha-029113
	d3dd104a40414       a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda                                      5 minutes ago       Running             kindnet-cni               1                   12b80a1b94256       kindnet-8xg7d
	7861b2bf7bfee       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   f692177694e3d       coredns-7db6d8ff4d-62m67
	9dcb54666e9d9       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      5 minutes ago       Exited              kube-controller-manager   1                   d800735392b66       kube-controller-manager-ha-029113
	94d7b223099b0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   b7cdeb95f25db       coredns-7db6d8ff4d-xdlls
	4effa58e46e21       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      5 minutes ago       Exited              kube-apiserver            2                   b05df37c6d755       kube-apiserver-ha-029113
	7221f2622be96       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      5 minutes ago       Running             kube-scheduler            1                   30c1fcbc2ced4       kube-scheduler-ha-029113
	900f665d54096       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      5 minutes ago       Running             kube-proxy                1                   bec07272b6870       kube-proxy-hg2kp
	b25c762688412       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   664b36c878574       etcd-ha-029113
	cf4870ffc6ba7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   a45c7f17109af       busybox-fc5497c4f-pf5xn
	708012203a1a0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago      Exited              coredns                   0                   9323719ef6547       coredns-7db6d8ff4d-62m67
	0f3b600dde660       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago      Exited              coredns                   0                   f8a5889bb1d2b       coredns-7db6d8ff4d-xdlls
	14ce89e605287       docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381    17 minutes ago      Exited              kindnet-cni               0                   a30304f1d93be       kindnet-8xg7d
	21b3cbbc53732       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      17 minutes ago      Exited              kube-proxy                0                   9fc93d7901e92       kube-proxy-hg2kp
	535a2b743f28f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      17 minutes ago      Exited              etcd                      0                   9dca109899a3f       etcd-ha-029113
	af1a2d97ac6f8       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      17 minutes ago      Exited              kube-scheduler            0                   5eb5a4397caa3       kube-scheduler-ha-029113
	
	
	==> coredns [0f3b600dde6603420f9edd85561f798c02ecea709368a2a6b922a3e812198caa] <==
	[INFO] 10.244.1.2:60685 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000209662s
	[INFO] 10.244.1.2:59157 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00911229s
	[INFO] 10.244.0.4:33726 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164542s
	[INFO] 10.244.0.4:35638 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107633s
	[INFO] 10.244.0.4:36083 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148234s
	[INFO] 10.244.0.4:49455 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000157722s
	[INFO] 10.244.2.2:43892 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122973s
	[INFO] 10.244.2.2:45729 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013946s
	[INFO] 10.244.0.4:55198 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100375s
	[INFO] 10.244.0.4:59468 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106412s
	[INFO] 10.244.0.4:37401 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124966s
	[INFO] 10.244.0.4:60799 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012109s
	[INFO] 10.244.2.2:34189 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127044s
	[INFO] 10.244.2.2:42164 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116232s
	[INFO] 10.244.2.2:45045 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090238s
	[INFO] 10.244.1.2:51035 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200282s
	[INFO] 10.244.1.2:55956 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000190607s
	[INFO] 10.244.1.2:54538 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000177763s
	[INFO] 10.244.0.4:33888 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013825s
	[INFO] 10.244.2.2:47245 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000251032s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [708012203a1a0c288ad05e4f57829807ab3a232d3085326c1ba8459d2a3964fc] <==
	[INFO] 10.244.0.4:59867 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000056093s
	[INFO] 10.244.0.4:34082 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00003321s
	[INFO] 10.244.2.2:43902 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001968463s
	[INFO] 10.244.2.2:54035 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000207692s
	[INFO] 10.244.2.2:33997 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001225386s
	[INFO] 10.244.2.2:45029 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109563s
	[INFO] 10.244.2.2:39017 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092433s
	[INFO] 10.244.2.2:54230 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169232s
	[INFO] 10.244.1.2:47885 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195059s
	[INFO] 10.244.1.2:52609 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101063s
	[INFO] 10.244.1.2:45870 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090685s
	[INFO] 10.244.1.2:54516 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000081368s
	[INFO] 10.244.2.2:33988 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080469s
	[INFO] 10.244.1.2:34772 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000287318s
	[INFO] 10.244.0.4:35803 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000085391s
	[INFO] 10.244.0.4:50190 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000162301s
	[INFO] 10.244.0.4:40910 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000130903s
	[INFO] 10.244.2.2:33875 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129913s
	[INFO] 10.244.2.2:51223 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000090521s
	[INFO] 10.244.2.2:58679 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000073592s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7861b2bf7bfee994fa80936b6d31f5ccf7960c4e0a63bd4312af282fca47083f] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59206->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:59206->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [94d7b223099b0137ef8cae73ccac8ee50e2f9c818656cb2c2674f8d8d1514fd0] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:52300->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43352->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43352->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-029113
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-029113
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=ha-029113
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T00_44_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:44:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-029113
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:01:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:00:27 +0000   Wed, 17 Jul 2024 01:00:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:00:27 +0000   Wed, 17 Jul 2024 01:00:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:00:27 +0000   Wed, 17 Jul 2024 01:00:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:00:27 +0000   Wed, 17 Jul 2024 01:00:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.95
	  Hostname:    ha-029113
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a51546f0529f4ddaa3a150daaabbe791
	  System UUID:                a51546f0-529f-4dda-a3a1-50daaabbe791
	  Boot ID:                    644e2f47-3b52-421d-bf4d-394d43757773
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pf5xn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-62m67             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-7db6d8ff4d-xdlls             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-ha-029113                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-8xg7d                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-ha-029113             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-029113    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-hg2kp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-029113             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-029113                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   Starting                 4m24s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           17m                    node-controller  Node ha-029113 event: Registered Node ha-029113 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-029113 event: Registered Node ha-029113 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-029113 event: Registered Node ha-029113 in Controller
	  Warning  ContainerGCFailed        5m41s (x2 over 6m41s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m15s                  node-controller  Node ha-029113 event: Registered Node ha-029113 in Controller
	  Normal   RegisteredNode           4m11s                  node-controller  Node ha-029113 event: Registered Node ha-029113 in Controller
	  Normal   RegisteredNode           3m13s                  node-controller  Node ha-029113 event: Registered Node ha-029113 in Controller
	  Normal   NodeNotReady             106s                   node-controller  Node ha-029113 status is now: NodeNotReady
	  Normal   NodeHasSufficientPID     81s (x2 over 17m)      kubelet          Node ha-029113 status is now: NodeHasSufficientPID
	  Normal   NodeReady                81s (x2 over 17m)      kubelet          Node ha-029113 status is now: NodeReady
	  Normal   NodeHasNoDiskPressure    81s (x2 over 17m)      kubelet          Node ha-029113 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  81s (x2 over 17m)      kubelet          Node ha-029113 status is now: NodeHasSufficientMemory
	
	
	Name:               ha-029113-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-029113-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=ha-029113
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T00_46_39_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:46:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-029113-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:01:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:58:11 +0000   Wed, 17 Jul 2024 00:57:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:58:11 +0000   Wed, 17 Jul 2024 00:57:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:58:11 +0000   Wed, 17 Jul 2024 00:57:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:58:11 +0000   Wed, 17 Jul 2024 00:57:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.166
	  Hostname:    ha-029113-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 caba57241163431db23fb698d4481f00
	  System UUID:                caba5724-1163-431d-b23f-b698d4481f00
	  Boot ID:                    1016b632-988b-43a4-974f-76e3427c540b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-l4ctd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-029113-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-k7vzq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-029113-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-029113-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-2wz5p                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-029113-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-029113-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m17s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-029113-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-029113-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-029113-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-029113-m02 event: Registered Node ha-029113-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-029113-m02 event: Registered Node ha-029113-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-029113-m02 event: Registered Node ha-029113-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-029113-m02 status is now: NodeNotReady
	  Normal  Starting                 4m46s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m45s (x8 over 4m46s)  kubelet          Node ha-029113-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m45s (x8 over 4m46s)  kubelet          Node ha-029113-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m45s (x7 over 4m46s)  kubelet          Node ha-029113-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m15s                  node-controller  Node ha-029113-m02 event: Registered Node ha-029113-m02 in Controller
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-029113-m02 event: Registered Node ha-029113-m02 in Controller
	  Normal  RegisteredNode           3m13s                  node-controller  Node ha-029113-m02 event: Registered Node ha-029113-m02 in Controller
	
	
	Name:               ha-029113-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-029113-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=ha-029113
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T00_49_04_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:49:04 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-029113-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:59:21 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Jul 2024 00:59:01 +0000   Wed, 17 Jul 2024 01:00:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Jul 2024 00:59:01 +0000   Wed, 17 Jul 2024 01:00:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Jul 2024 00:59:01 +0000   Wed, 17 Jul 2024 01:00:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Jul 2024 00:59:01 +0000   Wed, 17 Jul 2024 01:00:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.48
	  Hostname:    ha-029113-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6434efc175e64e719bbbb464b6a52834
	  System UUID:                6434efc1-75e6-4e71-9bbb-b464b6a52834
	  Boot ID:                    74c0a9ad-c535-431c-82ea-b31b0cc95fdd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vclnh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-8d2dk              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-m559l           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-029113-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-029113-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-029113-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-029113-m04 event: Registered Node ha-029113-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-029113-m04 event: Registered Node ha-029113-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-029113-m04 event: Registered Node ha-029113-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-029113-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m15s                  node-controller  Node ha-029113-m04 event: Registered Node ha-029113-m04 in Controller
	  Normal   RegisteredNode           4m11s                  node-controller  Node ha-029113-m04 event: Registered Node ha-029113-m04 in Controller
	  Normal   NodeNotReady             3m34s                  node-controller  Node ha-029113-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m13s                  node-controller  Node ha-029113-m04 event: Registered Node ha-029113-m04 in Controller
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m47s (x3 over 2m47s)  kubelet          Node ha-029113-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m47s (x3 over 2m47s)  kubelet          Node ha-029113-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x3 over 2m47s)  kubelet          Node ha-029113-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m47s (x2 over 2m47s)  kubelet          Node ha-029113-m04 has been rebooted, boot id: 74c0a9ad-c535-431c-82ea-b31b0cc95fdd
	  Normal   NodeReady                2m47s (x2 over 2m47s)  kubelet          Node ha-029113-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s                   node-controller  Node ha-029113-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +4.559408] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.079107] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.061657] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066531] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.165658] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.134006] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.290239] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.155347] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +4.001195] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +0.055206] kauditd_printk_skb: 158 callbacks suppressed
	[Jul17 00:44] kauditd_printk_skb: 74 callbacks suppressed
	[  +1.000074] systemd-fstab-generator[1348]: Ignoring "noauto" option for root device
	[  +6.568819] kauditd_printk_skb: 23 callbacks suppressed
	[ +12.108545] kauditd_printk_skb: 29 callbacks suppressed
	[Jul17 00:46] kauditd_printk_skb: 26 callbacks suppressed
	[Jul17 00:56] systemd-fstab-generator[3644]: Ignoring "noauto" option for root device
	[  +0.146852] systemd-fstab-generator[3656]: Ignoring "noauto" option for root device
	[  +0.167572] systemd-fstab-generator[3670]: Ignoring "noauto" option for root device
	[  +0.135587] systemd-fstab-generator[3682]: Ignoring "noauto" option for root device
	[  +0.292587] systemd-fstab-generator[3710]: Ignoring "noauto" option for root device
	[  +0.777452] systemd-fstab-generator[3810]: Ignoring "noauto" option for root device
	[ +16.891540] kauditd_printk_skb: 217 callbacks suppressed
	[Jul17 00:57] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [535a2b743f28f9760f7341aeb36dd3506cf26bdec15966f0498cbd52b73c8d11] <==
	{"level":"warn","ts":"2024-07-17T00:55:06.297665Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"643.905507ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" limit:500 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-07-17T00:55:06.303164Z","caller":"traceutil/trace.go:171","msg":"trace[810503239] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; }","duration":"649.416567ms","start":"2024-07-17T00:55:05.653736Z","end":"2024-07-17T00:55:06.303153Z","steps":["trace[810503239] 'agreement among raft nodes before linearized reading'  (duration: 643.925132ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:55:06.303202Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T00:55:05.653725Z","time spent":"649.468292ms","remote":"127.0.0.1:35806","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":0,"request content":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" limit:500 "}
	2024/07/17 00:55:06 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-17T00:55:06.321988Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":6455787737204793371,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-07-17T00:55:06.433985Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.95:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T00:55:06.43407Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.95:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-17T00:55:06.434372Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"a71e7bac075997","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-17T00:55:06.434746Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"326b6bc7c441ede5"}
	{"level":"info","ts":"2024-07-17T00:55:06.434888Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"326b6bc7c441ede5"}
	{"level":"info","ts":"2024-07-17T00:55:06.434941Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"326b6bc7c441ede5"}
	{"level":"info","ts":"2024-07-17T00:55:06.435016Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5"}
	{"level":"info","ts":"2024-07-17T00:55:06.435081Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5"}
	{"level":"info","ts":"2024-07-17T00:55:06.435171Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a71e7bac075997","remote-peer-id":"326b6bc7c441ede5"}
	{"level":"info","ts":"2024-07-17T00:55:06.435219Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"326b6bc7c441ede5"}
	{"level":"info","ts":"2024-07-17T00:55:06.435244Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"info","ts":"2024-07-17T00:55:06.435298Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"info","ts":"2024-07-17T00:55:06.435346Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"info","ts":"2024-07-17T00:55:06.435411Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a71e7bac075997","remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"info","ts":"2024-07-17T00:55:06.435462Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a71e7bac075997","remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"info","ts":"2024-07-17T00:55:06.435579Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a71e7bac075997","remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"info","ts":"2024-07-17T00:55:06.435608Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"info","ts":"2024-07-17T00:55:06.438237Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.95:2380"}
	{"level":"info","ts":"2024-07-17T00:55:06.438393Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.95:2380"}
	{"level":"info","ts":"2024-07-17T00:55:06.438431Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-029113","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.95:2380"],"advertise-client-urls":["https://192.168.39.95:2379"]}
	
	
	==> etcd [b25c7626884123d45ed391ba1f319608a2ece44fa3c23574de75c74d462299d8] <==
	{"level":"info","ts":"2024-07-17T00:58:15.324289Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a71e7bac075997","remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"info","ts":"2024-07-17T00:58:15.327541Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a71e7bac075997","to":"dae0f4ef8a06525b","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-17T00:58:15.327672Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"a71e7bac075997","remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"warn","ts":"2024-07-17T00:58:16.337272Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"dae0f4ef8a06525b","rtt":"0s","error":"dial tcp 192.168.39.100:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T00:58:16.339457Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"dae0f4ef8a06525b","rtt":"0s","error":"dial tcp 192.168.39.100:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-17T00:58:18.266379Z","caller":"traceutil/trace.go:171","msg":"trace[1916172587] transaction","detail":"{read_only:false; response_revision:2469; number_of_response:1; }","duration":"149.837891ms","start":"2024-07-17T00:58:18.116512Z","end":"2024-07-17T00:58:18.26635Z","steps":["trace[1916172587] 'process raft request'  (duration: 149.73848ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:59:14.440338Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 switched to configuration voters=(47039837626653079 3633116030139756005)"}
	{"level":"info","ts":"2024-07-17T00:59:14.442875Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"986e33f48d4d13ba","local-member-id":"a71e7bac075997","removed-remote-peer-id":"dae0f4ef8a06525b","removed-remote-peer-urls":["https://192.168.39.100:2380"]}
	{"level":"info","ts":"2024-07-17T00:59:14.443042Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"warn","ts":"2024-07-17T00:59:14.443075Z","caller":"etcdserver/server.go:980","msg":"rejected Raft message from removed member","local-member-id":"a71e7bac075997","removed-member-id":"dae0f4ef8a06525b"}
	{"level":"warn","ts":"2024-07-17T00:59:14.443141Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2024-07-17T00:59:14.443656Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"info","ts":"2024-07-17T00:59:14.443763Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"warn","ts":"2024-07-17T00:59:14.444108Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"info","ts":"2024-07-17T00:59:14.444196Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"info","ts":"2024-07-17T00:59:14.444496Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a71e7bac075997","remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"warn","ts":"2024-07-17T00:59:14.444689Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a71e7bac075997","remote-peer-id":"dae0f4ef8a06525b","error":"context canceled"}
	{"level":"warn","ts":"2024-07-17T00:59:14.44476Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"dae0f4ef8a06525b","error":"failed to read dae0f4ef8a06525b on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-17T00:59:14.444977Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a71e7bac075997","remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"warn","ts":"2024-07-17T00:59:14.445863Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a71e7bac075997","remote-peer-id":"dae0f4ef8a06525b","error":"context canceled"}
	{"level":"info","ts":"2024-07-17T00:59:14.445889Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a71e7bac075997","remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"info","ts":"2024-07-17T00:59:14.445903Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"info","ts":"2024-07-17T00:59:14.445916Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"a71e7bac075997","removed-remote-peer-id":"dae0f4ef8a06525b"}
	{"level":"warn","ts":"2024-07-17T00:59:14.462029Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"a71e7bac075997","remote-peer-id-stream-handler":"a71e7bac075997","remote-peer-id-from":"dae0f4ef8a06525b"}
	{"level":"warn","ts":"2024-07-17T00:59:14.464385Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"a71e7bac075997","remote-peer-id-stream-handler":"a71e7bac075997","remote-peer-id-from":"dae0f4ef8a06525b"}
	
	
	==> kernel <==
	 01:01:48 up 18 min,  0 users,  load average: 0.21, 0.38, 0.28
	Linux ha-029113 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [14ce89e605287179bb1c9551273ee5355577c744e43f9e9eee59d6939e47680e] <==
	I0717 00:54:36.816483       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 00:54:36.816533       1 main.go:326] Node ha-029113-m03 has CIDR [10.244.2.0/24] 
	I0717 00:54:36.817701       1 main.go:299] Handling node with IPs: map[192.168.39.48:{}]
	I0717 00:54:36.817750       1 main.go:326] Node ha-029113-m04 has CIDR [10.244.3.0/24] 
	I0717 00:54:36.817913       1 main.go:299] Handling node with IPs: map[192.168.39.95:{}]
	I0717 00:54:36.817937       1 main.go:303] handling current node
	I0717 00:54:36.817949       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0717 00:54:36.817955       1 main.go:326] Node ha-029113-m02 has CIDR [10.244.1.0/24] 
	I0717 00:54:46.821425       1 main.go:299] Handling node with IPs: map[192.168.39.95:{}]
	I0717 00:54:46.821481       1 main.go:303] handling current node
	I0717 00:54:46.821501       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0717 00:54:46.821507       1 main.go:326] Node ha-029113-m02 has CIDR [10.244.1.0/24] 
	I0717 00:54:46.821645       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 00:54:46.821669       1 main.go:326] Node ha-029113-m03 has CIDR [10.244.2.0/24] 
	I0717 00:54:46.821716       1 main.go:299] Handling node with IPs: map[192.168.39.48:{}]
	I0717 00:54:46.821735       1 main.go:326] Node ha-029113-m04 has CIDR [10.244.3.0/24] 
	I0717 00:54:56.821109       1 main.go:299] Handling node with IPs: map[192.168.39.95:{}]
	I0717 00:54:56.821201       1 main.go:303] handling current node
	I0717 00:54:56.821228       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0717 00:54:56.821245       1 main.go:326] Node ha-029113-m02 has CIDR [10.244.1.0/24] 
	I0717 00:54:56.821394       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 00:54:56.821415       1 main.go:326] Node ha-029113-m03 has CIDR [10.244.2.0/24] 
	I0717 00:54:56.821471       1 main.go:299] Handling node with IPs: map[192.168.39.48:{}]
	I0717 00:54:56.821488       1 main.go:326] Node ha-029113-m04 has CIDR [10.244.3.0/24] 
	E0717 00:55:04.337495       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	
	
	==> kindnet [d3dd104a40414a4c184f292042d98c34c4f2a081a0be74523d1c7da92597f487] <==
	I0717 01:01:02.120337       1 main.go:326] Node ha-029113-m02 has CIDR [10.244.1.0/24] 
	I0717 01:01:12.116567       1 main.go:299] Handling node with IPs: map[192.168.39.95:{}]
	I0717 01:01:12.116760       1 main.go:303] handling current node
	I0717 01:01:12.116926       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0717 01:01:12.116940       1 main.go:326] Node ha-029113-m02 has CIDR [10.244.1.0/24] 
	I0717 01:01:12.117157       1 main.go:299] Handling node with IPs: map[192.168.39.48:{}]
	I0717 01:01:12.117191       1 main.go:326] Node ha-029113-m04 has CIDR [10.244.3.0/24] 
	I0717 01:01:22.125470       1 main.go:299] Handling node with IPs: map[192.168.39.95:{}]
	I0717 01:01:22.125630       1 main.go:303] handling current node
	I0717 01:01:22.125684       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0717 01:01:22.125704       1 main.go:326] Node ha-029113-m02 has CIDR [10.244.1.0/24] 
	I0717 01:01:22.126014       1 main.go:299] Handling node with IPs: map[192.168.39.48:{}]
	I0717 01:01:22.126066       1 main.go:326] Node ha-029113-m04 has CIDR [10.244.3.0/24] 
	I0717 01:01:32.116457       1 main.go:299] Handling node with IPs: map[192.168.39.95:{}]
	I0717 01:01:32.116507       1 main.go:303] handling current node
	I0717 01:01:32.116523       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0717 01:01:32.116528       1 main.go:326] Node ha-029113-m02 has CIDR [10.244.1.0/24] 
	I0717 01:01:32.116671       1 main.go:299] Handling node with IPs: map[192.168.39.48:{}]
	I0717 01:01:32.116697       1 main.go:326] Node ha-029113-m04 has CIDR [10.244.3.0/24] 
	I0717 01:01:42.116346       1 main.go:299] Handling node with IPs: map[192.168.39.95:{}]
	I0717 01:01:42.116448       1 main.go:303] handling current node
	I0717 01:01:42.116477       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0717 01:01:42.116497       1 main.go:326] Node ha-029113-m02 has CIDR [10.244.1.0/24] 
	I0717 01:01:42.116699       1 main.go:299] Handling node with IPs: map[192.168.39.48:{}]
	I0717 01:01:42.116741       1 main.go:326] Node ha-029113-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [4effa58e46e219c34739e40820dbe2178f075c45e540541619e339690fa06184] <==
	I0717 00:56:41.243616       1 options.go:221] external host was not specified, using 192.168.39.95
	I0717 00:56:41.249139       1 server.go:148] Version: v1.30.2
	I0717 00:56:41.249451       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:56:41.854341       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0717 00:56:41.858892       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 00:56:41.862267       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0717 00:56:41.862342       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0717 00:56:41.862520       1 instance.go:299] Using reconciler: lease
	W0717 00:57:01.852194       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0717 00:57:01.852194       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0717 00:57:01.864009       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0717 00:57:01.864133       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [6d7f98a847746b8a00d932181e11b880825032ec553e6de48f6736a756f14df2] <==
	I0717 00:57:24.799733       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0717 00:57:24.799766       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0717 00:57:24.846914       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 00:57:24.850523       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 00:57:24.850566       1 policy_source.go:224] refreshing policies
	I0717 00:57:24.865394       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 00:57:24.881576       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 00:57:24.881635       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 00:57:24.881983       1 shared_informer.go:320] Caches are synced for configmaps
	I0717 00:57:24.882021       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0717 00:57:24.882027       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0717 00:57:24.882151       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 00:57:24.886557       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0717 00:57:24.886606       1 aggregator.go:165] initial CRD sync complete...
	I0717 00:57:24.886633       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 00:57:24.886658       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 00:57:24.886666       1 cache.go:39] Caches are synced for autoregister controller
	I0717 00:57:24.887861       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0717 00:57:24.894139       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.100 192.168.39.166]
	I0717 00:57:24.895402       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 00:57:24.901899       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0717 00:57:24.905059       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0717 00:57:25.788738       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0717 00:57:26.220565       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.100 192.168.39.166 192.168.39.95]
	W0717 00:57:36.220005       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.166 192.168.39.95]
	
	
	==> kube-controller-manager [9dcb54666e9d9613e1f68d0f760cc3e0c95b06e2fe3747f5a97c72ba0a238d16] <==
	I0717 00:56:41.978420       1 serving.go:380] Generated self-signed cert in-memory
	I0717 00:56:42.335650       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0717 00:56:42.335738       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:56:42.337619       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0717 00:56:42.337923       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 00:56:42.338064       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0717 00:56:42.338197       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0717 00:57:02.872443       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.95:8443/healthz\": dial tcp 192.168.39.95:8443: connect: connection refused"
	
	
	==> kube-controller-manager [a549df20ed996b85dfd4a228aafa38755eb325c2b8eea9d194a2f39a43c6f997] <==
	E0717 00:59:37.421509       1 gc_controller.go:153] "Failed to get node" err="node \"ha-029113-m03\" not found" logger="pod-garbage-collector-controller" node="ha-029113-m03"
	E0717 00:59:37.421515       1 gc_controller.go:153] "Failed to get node" err="node \"ha-029113-m03\" not found" logger="pod-garbage-collector-controller" node="ha-029113-m03"
	E0717 00:59:57.422049       1 gc_controller.go:153] "Failed to get node" err="node \"ha-029113-m03\" not found" logger="pod-garbage-collector-controller" node="ha-029113-m03"
	E0717 00:59:57.422771       1 gc_controller.go:153] "Failed to get node" err="node \"ha-029113-m03\" not found" logger="pod-garbage-collector-controller" node="ha-029113-m03"
	E0717 00:59:57.422903       1 gc_controller.go:153] "Failed to get node" err="node \"ha-029113-m03\" not found" logger="pod-garbage-collector-controller" node="ha-029113-m03"
	E0717 00:59:57.422937       1 gc_controller.go:153] "Failed to get node" err="node \"ha-029113-m03\" not found" logger="pod-garbage-collector-controller" node="ha-029113-m03"
	E0717 00:59:57.422980       1 gc_controller.go:153] "Failed to get node" err="node \"ha-029113-m03\" not found" logger="pod-garbage-collector-controller" node="ha-029113-m03"
	I0717 01:00:02.551194       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="33.420329ms"
	I0717 01:00:02.551302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="61.715µs"
	I0717 01:00:02.594675       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.501496ms"
	I0717 01:00:02.595714       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.297µs"
	I0717 01:00:02.713332       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.969896ms"
	I0717 01:00:02.714407       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.366µs"
	I0717 01:00:02.737518       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="22.214243ms"
	I0717 01:00:02.737681       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59.294µs"
	I0717 01:00:27.504856       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-mgh88 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-mgh88\": the object has been modified; please apply your changes to the latest version and try again"
	I0717 01:00:27.505204       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"2dfdc45f-01de-43ca-8e95-5df7519203dd", APIVersion:"v1", ResourceVersion:"243", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-mgh88 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-mgh88": the object has been modified; please apply your changes to the latest version and try again
	I0717 01:00:27.539067       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="69.050428ms"
	I0717 01:00:27.539199       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="76.92µs"
	I0717 01:00:27.582754       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.700155ms"
	I0717 01:00:27.583655       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="101.96µs"
	I0717 01:00:27.756657       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-mgh88 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-mgh88\": the object has been modified; please apply your changes to the latest version and try again"
	I0717 01:00:27.757063       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"2dfdc45f-01de-43ca-8e95-5df7519203dd", APIVersion:"v1", ResourceVersion:"243", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-mgh88 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-mgh88": the object has been modified; please apply your changes to the latest version and try again
	I0717 01:00:27.797710       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="64.679061ms"
	I0717 01:00:27.798080       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="195.903µs"
	
	
	==> kube-proxy [21b3cbbc53732e70a6bb66be9909790395a4901c4730eea9ec8b9349fed80909] <==
	E0717 00:53:51.144186       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:53:51.144150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2012": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:53:51.144279       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-029113&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:53:51.144315       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-029113&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:53:51.144293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2012": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:53:57.800237       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:53:57.800311       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:53:57.800488       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-029113&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:53:57.800553       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-029113&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:53:57.800637       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2012": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:53:57.800716       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2012": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:54:07.017417       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-029113&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:54:07.017417       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2012": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:54:07.017636       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2012": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:54:07.017664       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-029113&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:54:10.089783       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:54:10.089893       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:54:25.448706       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2012": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:54:25.448791       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2012": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:54:28.520441       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-029113&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:54:28.521183       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-029113&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:54:28.521768       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:54:28.522078       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:54:56.168404       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-029113&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:54:56.168486       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-029113&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [900f665d54096f7765fdf4465f1760855d73180627d403518f43e27d98beda89] <==
	E0717 00:57:05.193130       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-029113\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0717 00:57:23.643893       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-029113\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0717 00:57:23.644616       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0717 00:57:23.902466       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 00:57:23.902681       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 00:57:23.902789       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:57:23.924091       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:57:23.926207       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:57:23.926694       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:57:23.928583       1 config.go:192] "Starting service config controller"
	I0717 00:57:23.928718       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:57:23.928959       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:57:23.928984       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:57:23.930667       1 config.go:319] "Starting node config controller"
	I0717 00:57:23.930705       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0717 00:57:26.696774       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0717 00:57:26.697325       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:57:26.698174       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:57:26.698343       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:57:26.698438       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 00:57:26.698568       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-029113&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 00:57:26.698649       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-029113&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0717 00:57:28.029701       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:57:28.031012       1 shared_informer.go:320] Caches are synced for node config
	I0717 00:57:28.329989       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7221f2622be960846131245cf8a060743228bd2fb4267445a85ad2461e84f042] <==
	W0717 00:57:19.704178       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.95:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	E0717 00:57:19.704242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.95:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	W0717 00:57:20.007647       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.95:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	E0717 00:57:20.007700       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.95:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	W0717 00:57:20.264463       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.95:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	E0717 00:57:20.264565       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.95:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	W0717 00:57:20.361500       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.95:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	E0717 00:57:20.361564       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.95:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	W0717 00:57:20.460543       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.95:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	E0717 00:57:20.460656       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.95:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	W0717 00:57:21.539328       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.95:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	E0717 00:57:21.539446       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.95:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	W0717 00:57:21.699157       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.95:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	E0717 00:57:21.699219       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.95:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	W0717 00:57:21.803445       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.95:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	E0717 00:57:21.803545       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.95:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.95:8443: connect: connection refused
	W0717 00:57:24.803379       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 00:57:24.803473       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 00:57:24.803559       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 00:57:24.803591       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 00:57:24.803656       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:57:24.803686       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0717 00:57:38.180937       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0717 00:59:11.106372       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-vclnh\": pod busybox-fc5497c4f-vclnh is already assigned to node \"ha-029113-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-vclnh" node="ha-029113-m04"
	E0717 00:59:11.106580       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-vclnh\": pod busybox-fc5497c4f-vclnh is already assigned to node \"ha-029113-m04\"" pod="default/busybox-fc5497c4f-vclnh"
	
	
	==> kube-scheduler [af1a2d97ac6f8ec07b35a9b5d767af997cb8270ce3b4e681cba5557ed7119c85] <==
	W0717 00:54:58.688887       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 00:54:58.688936       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 00:54:58.792253       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 00:54:58.792349       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 00:54:58.816578       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:54:58.816626       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:54:58.909115       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 00:54:58.909192       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 00:54:59.016064       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 00:54:59.016109       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 00:54:59.141646       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 00:54:59.141779       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 00:54:59.142680       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:54:59.142735       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:54:59.374053       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 00:54:59.374105       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 00:55:00.215574       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 00:55:00.215643       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 00:55:04.725028       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:55:04.725134       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 00:55:04.818347       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 00:55:04.818400       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 00:55:04.881210       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 00:55:04.881314       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 00:55:06.264201       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 17 01:00:07 ha-029113 kubelet[1354]: E0717 01:00:07.513215    1354 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:00:07 ha-029113 kubelet[1354]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:00:07 ha-029113 kubelet[1354]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:00:07 ha-029113 kubelet[1354]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:00:07 ha-029113 kubelet[1354]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:00:09 ha-029113 kubelet[1354]: E0717 01:00:09.094996    1354 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-029113\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-029113?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Jul 17 01:00:09 ha-029113 kubelet[1354]: E0717 01:00:09.405855    1354 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-029113?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Jul 17 01:00:17 ha-029113 kubelet[1354]: W0717 01:00:17.341847    1354 reflector.go:470] object-"default"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 17 01:00:17 ha-029113 kubelet[1354]: E0717 01:00:17.341955    1354 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-029113?timeout=10s\": http2: client connection lost"
	Jul 17 01:00:17 ha-029113 kubelet[1354]: I0717 01:00:17.342470    1354 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Jul 17 01:00:17 ha-029113 kubelet[1354]: E0717 01:00:17.342011    1354 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-029113\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-029113?timeout=10s\": http2: client connection lost"
	Jul 17 01:00:17 ha-029113 kubelet[1354]: E0717 01:00:17.342600    1354 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jul 17 01:00:17 ha-029113 kubelet[1354]: W0717 01:00:17.341791    1354 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 17 01:00:17 ha-029113 kubelet[1354]: W0717 01:00:17.341884    1354 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 17 01:00:17 ha-029113 kubelet[1354]: W0717 01:00:17.342042    1354 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 17 01:00:17 ha-029113 kubelet[1354]: W0717 01:00:17.342058    1354 reflector.go:470] object-"kube-system"/"coredns": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 17 01:00:17 ha-029113 kubelet[1354]: W0717 01:00:17.342076    1354 reflector.go:470] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 17 01:00:17 ha-029113 kubelet[1354]: W0717 01:00:17.342096    1354 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 17 01:00:17 ha-029113 kubelet[1354]: W0717 01:00:17.342109    1354 reflector.go:470] object-"kube-system"/"kube-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 17 01:00:17 ha-029113 kubelet[1354]: W0717 01:00:17.342128    1354 reflector.go:470] object-"kube-system"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Jul 17 01:01:07 ha-029113 kubelet[1354]: E0717 01:01:07.511536    1354 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:01:07 ha-029113 kubelet[1354]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:01:07 ha-029113 kubelet[1354]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:01:07 ha-029113 kubelet[1354]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:01:07 ha-029113 kubelet[1354]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:01:47.408171   32291 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19264-3908/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
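The "bufio.Scanner: token too long" line in the stderr block above is Go's bufio.ErrTooLong: by default a Scanner refuses any line longer than bufio.MaxScanTokenSize (64 KiB), and lastStart.txt evidently contains longer lines. The sketch below is illustrative only, not minikube's logs.go implementation; it reuses the file path from the error message purely as an example and shows how the error arises and how a larger scanner buffer avoids it.

	// Illustrative sketch: reading a log file whose lines can exceed 64 KiB.
	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("/home/jenkins/minikube-integration/19264-3908/.minikube/logs/lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// With the default buffer, any line longer than bufio.MaxScanTokenSize
		// (64 KiB) makes Scan stop and sc.Err() return bufio.ErrTooLong, which
		// prints as "bufio.Scanner: token too long". Raising the cap avoids that.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "read failed:", err)
		}
	}
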
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-029113 -n ha-029113
helpers_test.go:261: (dbg) Run:  kubectl --context ha-029113 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.69s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (328.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-025900
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-025900
E0717 01:17:58.379824   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
E0717 01:18:20.229213   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-025900: exit status 82 (2m1.849759663s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-025900-m03"  ...
	* Stopping node "multinode-025900-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-025900" : exit status 82
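For reference, the GUEST_STOP_TIMEOUT shown in the stderr block above reaches the caller only as a plain process exit code (82 in this run). The snippet below is a hypothetical helper, not part of multinode_test.go, sketching how a caller shelling out to the same binary can surface that exit code.

	// Hypothetical helper (not from the test suite): run "minikube stop" and
	// report a non-zero exit code such as the 82 observed above.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "multinode-025900")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// A stop timeout surfaces here only as the exit code plus the CLI's stderr text.
			fmt.Printf("minikube stop failed with exit status %d\n%s", exitErr.ExitCode(), out)
			return
		}
		if err != nil {
			fmt.Println("could not run minikube stop:", err)
			return
		}
		fmt.Println("stopped cleanly")
	}
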
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-025900 --wait=true -v=8 --alsologtostderr
E0717 01:20:17.182643   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-025900 --wait=true -v=8 --alsologtostderr: (3m23.967100881s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-025900
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-025900 -n multinode-025900
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-025900 logs -n 25: (1.525585569s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-025900 ssh -n                                                                 | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | multinode-025900-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-025900 cp multinode-025900-m02:/home/docker/cp-test.txt                       | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3937052499/001/cp-test_multinode-025900-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-025900 ssh -n                                                                 | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | multinode-025900-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-025900 cp multinode-025900-m02:/home/docker/cp-test.txt                       | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | multinode-025900:/home/docker/cp-test_multinode-025900-m02_multinode-025900.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-025900 ssh -n                                                                 | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | multinode-025900-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-025900 ssh -n multinode-025900 sudo cat                                       | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | /home/docker/cp-test_multinode-025900-m02_multinode-025900.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-025900 cp multinode-025900-m02:/home/docker/cp-test.txt                       | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | multinode-025900-m03:/home/docker/cp-test_multinode-025900-m02_multinode-025900-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-025900 ssh -n                                                                 | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | multinode-025900-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-025900 ssh -n multinode-025900-m03 sudo cat                                   | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | /home/docker/cp-test_multinode-025900-m02_multinode-025900-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-025900 cp testdata/cp-test.txt                                                | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | multinode-025900-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-025900 ssh -n                                                                 | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | multinode-025900-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-025900 cp multinode-025900-m03:/home/docker/cp-test.txt                       | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3937052499/001/cp-test_multinode-025900-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-025900 ssh -n                                                                 | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | multinode-025900-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-025900 cp multinode-025900-m03:/home/docker/cp-test.txt                       | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | multinode-025900:/home/docker/cp-test_multinode-025900-m03_multinode-025900.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-025900 ssh -n                                                                 | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | multinode-025900-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-025900 ssh -n multinode-025900 sudo cat                                       | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | /home/docker/cp-test_multinode-025900-m03_multinode-025900.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-025900 cp multinode-025900-m03:/home/docker/cp-test.txt                       | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | multinode-025900-m02:/home/docker/cp-test_multinode-025900-m03_multinode-025900-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-025900 ssh -n                                                                 | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | multinode-025900-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-025900 ssh -n multinode-025900-m02 sudo cat                                   | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | /home/docker/cp-test_multinode-025900-m03_multinode-025900-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-025900 node stop m03                                                          | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	| node    | multinode-025900 node start                                                             | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-025900                                                                | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC |                     |
	| stop    | -p multinode-025900                                                                     | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC |                     |
	| start   | -p multinode-025900                                                                     | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:18 UTC | 17 Jul 24 01:22 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-025900                                                                | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:22 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 01:18:54
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 01:18:54.327208   42204 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:18:54.327303   42204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:18:54.327308   42204 out.go:304] Setting ErrFile to fd 2...
	I0717 01:18:54.327312   42204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:18:54.327474   42204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 01:18:54.327989   42204 out.go:298] Setting JSON to false
	I0717 01:18:54.328862   42204 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3676,"bootTime":1721175458,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:18:54.328914   42204 start.go:139] virtualization: kvm guest
	I0717 01:18:54.331143   42204 out.go:177] * [multinode-025900] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:18:54.332472   42204 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 01:18:54.332492   42204 notify.go:220] Checking for updates...
	I0717 01:18:54.335012   42204 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:18:54.336375   42204 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:18:54.337593   42204 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 01:18:54.338775   42204 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:18:54.340009   42204 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:18:54.341624   42204 config.go:182] Loaded profile config "multinode-025900": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:18:54.341747   42204 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:18:54.342215   42204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:18:54.342269   42204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:18:54.358168   42204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35567
	I0717 01:18:54.358611   42204 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:18:54.359269   42204 main.go:141] libmachine: Using API Version  1
	I0717 01:18:54.359291   42204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:18:54.359780   42204 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:18:54.359962   42204 main.go:141] libmachine: (multinode-025900) Calling .DriverName
	I0717 01:18:54.395275   42204 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 01:18:54.397064   42204 start.go:297] selected driver: kvm2
	I0717 01:18:54.397080   42204 start.go:901] validating driver "kvm2" against &{Name:multinode-025900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-025900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.68 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:18:54.397225   42204 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:18:54.397533   42204 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:18:54.397594   42204 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19264-3908/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:18:54.412184   42204 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:18:54.412838   42204 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:18:54.412895   42204 cni.go:84] Creating CNI manager for ""
	I0717 01:18:54.412906   42204 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0717 01:18:54.412967   42204 start.go:340] cluster config:
	{Name:multinode-025900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-025900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.68 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:18:54.413102   42204 iso.go:125] acquiring lock: {Name:mk1e382ea32906eee794c9b90164432d2562b456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:18:54.414878   42204 out.go:177] * Starting "multinode-025900" primary control-plane node in "multinode-025900" cluster
	I0717 01:18:54.416278   42204 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:18:54.416314   42204 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 01:18:54.416323   42204 cache.go:56] Caching tarball of preloaded images
	I0717 01:18:54.416403   42204 preload.go:172] Found /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 01:18:54.416413   42204 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 01:18:54.416530   42204 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/multinode-025900/config.json ...
	I0717 01:18:54.416715   42204 start.go:360] acquireMachinesLock for multinode-025900: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:18:54.416770   42204 start.go:364] duration metric: took 32.615µs to acquireMachinesLock for "multinode-025900"
	I0717 01:18:54.416794   42204 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:18:54.416804   42204 fix.go:54] fixHost starting: 
	I0717 01:18:54.417082   42204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:18:54.417112   42204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:18:54.431615   42204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43705
	I0717 01:18:54.432024   42204 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:18:54.432606   42204 main.go:141] libmachine: Using API Version  1
	I0717 01:18:54.432629   42204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:18:54.432938   42204 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:18:54.433118   42204 main.go:141] libmachine: (multinode-025900) Calling .DriverName
	I0717 01:18:54.433263   42204 main.go:141] libmachine: (multinode-025900) Calling .GetState
	I0717 01:18:54.434709   42204 fix.go:112] recreateIfNeeded on multinode-025900: state=Running err=<nil>
	W0717 01:18:54.434745   42204 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:18:54.436640   42204 out.go:177] * Updating the running kvm2 "multinode-025900" VM ...
	I0717 01:18:54.437824   42204 machine.go:94] provisionDockerMachine start ...
	I0717 01:18:54.437840   42204 main.go:141] libmachine: (multinode-025900) Calling .DriverName
	I0717 01:18:54.438035   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHHostname
	I0717 01:18:54.440632   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:18:54.441126   42204 main.go:141] libmachine: (multinode-025900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d8:11", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:14 +0000 UTC Type:0 Mac:52:54:00:20:d8:11 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-025900 Clientid:01:52:54:00:20:d8:11}
	I0717 01:18:54.441161   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined IP address 192.168.39.81 and MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:18:54.441287   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHPort
	I0717 01:18:54.441441   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHKeyPath
	I0717 01:18:54.441593   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHKeyPath
	I0717 01:18:54.441728   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHUsername
	I0717 01:18:54.441885   42204 main.go:141] libmachine: Using SSH client type: native
	I0717 01:18:54.442145   42204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I0717 01:18:54.442159   42204 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:18:54.556368   42204 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-025900
	
	I0717 01:18:54.556415   42204 main.go:141] libmachine: (multinode-025900) Calling .GetMachineName
	I0717 01:18:54.556679   42204 buildroot.go:166] provisioning hostname "multinode-025900"
	I0717 01:18:54.556705   42204 main.go:141] libmachine: (multinode-025900) Calling .GetMachineName
	I0717 01:18:54.556906   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHHostname
	I0717 01:18:54.559573   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:18:54.559965   42204 main.go:141] libmachine: (multinode-025900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d8:11", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:14 +0000 UTC Type:0 Mac:52:54:00:20:d8:11 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-025900 Clientid:01:52:54:00:20:d8:11}
	I0717 01:18:54.559994   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined IP address 192.168.39.81 and MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:18:54.560132   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHPort
	I0717 01:18:54.560344   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHKeyPath
	I0717 01:18:54.560525   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHKeyPath
	I0717 01:18:54.560645   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHUsername
	I0717 01:18:54.560811   42204 main.go:141] libmachine: Using SSH client type: native
	I0717 01:18:54.561045   42204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I0717 01:18:54.561064   42204 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-025900 && echo "multinode-025900" | sudo tee /etc/hostname
	I0717 01:18:54.694675   42204 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-025900
	
	I0717 01:18:54.694699   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHHostname
	I0717 01:18:54.697386   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:18:54.697739   42204 main.go:141] libmachine: (multinode-025900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d8:11", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:14 +0000 UTC Type:0 Mac:52:54:00:20:d8:11 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-025900 Clientid:01:52:54:00:20:d8:11}
	I0717 01:18:54.697779   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined IP address 192.168.39.81 and MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:18:54.697914   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHPort
	I0717 01:18:54.698106   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHKeyPath
	I0717 01:18:54.698287   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHKeyPath
	I0717 01:18:54.698424   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHUsername
	I0717 01:18:54.698615   42204 main.go:141] libmachine: Using SSH client type: native
	I0717 01:18:54.698772   42204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I0717 01:18:54.698794   42204 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-025900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-025900/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-025900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:18:54.811646   42204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:18:54.811687   42204 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:18:54.811708   42204 buildroot.go:174] setting up certificates
	I0717 01:18:54.811717   42204 provision.go:84] configureAuth start
	I0717 01:18:54.811729   42204 main.go:141] libmachine: (multinode-025900) Calling .GetMachineName
	I0717 01:18:54.811976   42204 main.go:141] libmachine: (multinode-025900) Calling .GetIP
	I0717 01:18:54.814832   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:18:54.815277   42204 main.go:141] libmachine: (multinode-025900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d8:11", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:14 +0000 UTC Type:0 Mac:52:54:00:20:d8:11 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-025900 Clientid:01:52:54:00:20:d8:11}
	I0717 01:18:54.815301   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined IP address 192.168.39.81 and MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:18:54.815448   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHHostname
	I0717 01:18:54.817492   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:18:54.817780   42204 main.go:141] libmachine: (multinode-025900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d8:11", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:14 +0000 UTC Type:0 Mac:52:54:00:20:d8:11 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-025900 Clientid:01:52:54:00:20:d8:11}
	I0717 01:18:54.817824   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined IP address 192.168.39.81 and MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:18:54.817930   42204 provision.go:143] copyHostCerts
	I0717 01:18:54.817955   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:18:54.817996   42204 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:18:54.818009   42204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:18:54.818080   42204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:18:54.818168   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:18:54.818191   42204 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:18:54.818198   42204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:18:54.818222   42204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:18:54.818273   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:18:54.818288   42204 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:18:54.818292   42204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:18:54.818312   42204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:18:54.818368   42204 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.multinode-025900 san=[127.0.0.1 192.168.39.81 localhost minikube multinode-025900]
	I0717 01:18:55.044205   42204 provision.go:177] copyRemoteCerts
	I0717 01:18:55.044265   42204 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:18:55.044295   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHHostname
	I0717 01:18:55.047121   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:18:55.047495   42204 main.go:141] libmachine: (multinode-025900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d8:11", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:14 +0000 UTC Type:0 Mac:52:54:00:20:d8:11 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-025900 Clientid:01:52:54:00:20:d8:11}
	I0717 01:18:55.047528   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined IP address 192.168.39.81 and MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:18:55.047665   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHPort
	I0717 01:18:55.047865   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHKeyPath
	I0717 01:18:55.048024   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHUsername
	I0717 01:18:55.048180   42204 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/multinode-025900/id_rsa Username:docker}
	I0717 01:18:55.134054   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 01:18:55.134116   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0717 01:18:55.159752   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 01:18:55.159829   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 01:18:55.184867   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 01:18:55.184930   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:18:55.210888   42204 provision.go:87] duration metric: took 399.158127ms to configureAuth
	I0717 01:18:55.210917   42204 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:18:55.211161   42204 config.go:182] Loaded profile config "multinode-025900": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:18:55.211235   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHHostname
	I0717 01:18:55.213940   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:18:55.214342   42204 main.go:141] libmachine: (multinode-025900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d8:11", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:14 +0000 UTC Type:0 Mac:52:54:00:20:d8:11 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-025900 Clientid:01:52:54:00:20:d8:11}
	I0717 01:18:55.214370   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined IP address 192.168.39.81 and MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:18:55.214600   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHPort
	I0717 01:18:55.214824   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHKeyPath
	I0717 01:18:55.214976   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHKeyPath
	I0717 01:18:55.215092   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHUsername
	I0717 01:18:55.215215   42204 main.go:141] libmachine: Using SSH client type: native
	I0717 01:18:55.215392   42204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I0717 01:18:55.215413   42204 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:20:25.991134   42204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:20:25.991167   42204 machine.go:97] duration metric: took 1m31.553331393s to provisionDockerMachine
	I0717 01:20:25.991180   42204 start.go:293] postStartSetup for "multinode-025900" (driver="kvm2")
	I0717 01:20:25.991195   42204 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:20:25.991221   42204 main.go:141] libmachine: (multinode-025900) Calling .DriverName
	I0717 01:20:25.991527   42204 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:20:25.991554   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHHostname
	I0717 01:20:25.994613   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:20:25.995171   42204 main.go:141] libmachine: (multinode-025900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d8:11", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:14 +0000 UTC Type:0 Mac:52:54:00:20:d8:11 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-025900 Clientid:01:52:54:00:20:d8:11}
	I0717 01:20:25.995202   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined IP address 192.168.39.81 and MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:20:25.995381   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHPort
	I0717 01:20:25.995581   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHKeyPath
	I0717 01:20:25.995750   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHUsername
	I0717 01:20:25.995860   42204 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/multinode-025900/id_rsa Username:docker}
	I0717 01:20:26.082624   42204 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:20:26.087103   42204 command_runner.go:130] > NAME=Buildroot
	I0717 01:20:26.087122   42204 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0717 01:20:26.087126   42204 command_runner.go:130] > ID=buildroot
	I0717 01:20:26.087131   42204 command_runner.go:130] > VERSION_ID=2023.02.9
	I0717 01:20:26.087136   42204 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0717 01:20:26.087328   42204 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:20:26.087346   42204 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:20:26.087412   42204 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:20:26.087481   42204 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:20:26.087490   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> /etc/ssl/certs/112592.pem
	I0717 01:20:26.087569   42204 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:20:26.098274   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:20:26.123198   42204 start.go:296] duration metric: took 132.001245ms for postStartSetup
	I0717 01:20:26.123236   42204 fix.go:56] duration metric: took 1m31.706433168s for fixHost
	I0717 01:20:26.123256   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHHostname
	I0717 01:20:26.125986   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:20:26.126336   42204 main.go:141] libmachine: (multinode-025900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d8:11", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:14 +0000 UTC Type:0 Mac:52:54:00:20:d8:11 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-025900 Clientid:01:52:54:00:20:d8:11}
	I0717 01:20:26.126375   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined IP address 192.168.39.81 and MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:20:26.126523   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHPort
	I0717 01:20:26.126710   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHKeyPath
	I0717 01:20:26.126874   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHKeyPath
	I0717 01:20:26.127031   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHUsername
	I0717 01:20:26.127150   42204 main.go:141] libmachine: Using SSH client type: native
	I0717 01:20:26.127294   42204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I0717 01:20:26.127303   42204 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:20:26.239432   42204 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721179226.223766527
	
	I0717 01:20:26.239456   42204 fix.go:216] guest clock: 1721179226.223766527
	I0717 01:20:26.239466   42204 fix.go:229] Guest: 2024-07-17 01:20:26.223766527 +0000 UTC Remote: 2024-07-17 01:20:26.123240701 +0000 UTC m=+91.832936562 (delta=100.525826ms)
	I0717 01:20:26.239520   42204 fix.go:200] guest clock delta is within tolerance: 100.525826ms
	I0717 01:20:26.239536   42204 start.go:83] releasing machines lock for "multinode-025900", held for 1m31.822754441s
	I0717 01:20:26.239576   42204 main.go:141] libmachine: (multinode-025900) Calling .DriverName
	I0717 01:20:26.239817   42204 main.go:141] libmachine: (multinode-025900) Calling .GetIP
	I0717 01:20:26.242398   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:20:26.242768   42204 main.go:141] libmachine: (multinode-025900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d8:11", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:14 +0000 UTC Type:0 Mac:52:54:00:20:d8:11 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-025900 Clientid:01:52:54:00:20:d8:11}
	I0717 01:20:26.242787   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined IP address 192.168.39.81 and MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:20:26.242932   42204 main.go:141] libmachine: (multinode-025900) Calling .DriverName
	I0717 01:20:26.243429   42204 main.go:141] libmachine: (multinode-025900) Calling .DriverName
	I0717 01:20:26.243595   42204 main.go:141] libmachine: (multinode-025900) Calling .DriverName
	I0717 01:20:26.243682   42204 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:20:26.243718   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHHostname
	I0717 01:20:26.243833   42204 ssh_runner.go:195] Run: cat /version.json
	I0717 01:20:26.243851   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHHostname
	I0717 01:20:26.246315   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:20:26.246594   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:20:26.246677   42204 main.go:141] libmachine: (multinode-025900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d8:11", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:14 +0000 UTC Type:0 Mac:52:54:00:20:d8:11 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-025900 Clientid:01:52:54:00:20:d8:11}
	I0717 01:20:26.246705   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined IP address 192.168.39.81 and MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:20:26.246811   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHPort
	I0717 01:20:26.246967   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHKeyPath
	I0717 01:20:26.247100   42204 main.go:141] libmachine: (multinode-025900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d8:11", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:14 +0000 UTC Type:0 Mac:52:54:00:20:d8:11 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-025900 Clientid:01:52:54:00:20:d8:11}
	I0717 01:20:26.247121   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHUsername
	I0717 01:20:26.247124   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined IP address 192.168.39.81 and MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:20:26.247248   42204 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/multinode-025900/id_rsa Username:docker}
	I0717 01:20:26.247329   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHPort
	I0717 01:20:26.247485   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHKeyPath
	I0717 01:20:26.247637   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHUsername
	I0717 01:20:26.247775   42204 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/multinode-025900/id_rsa Username:docker}
	I0717 01:20:26.327482   42204 command_runner.go:130] > {"iso_version": "v1.33.1-1721146474-19264", "kicbase_version": "v0.0.44-1721064868-19249", "minikube_version": "v1.33.1", "commit": "6e0d7ef26437c947028f356d4449a323918e966e"}
	I0717 01:20:26.350274   42204 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0717 01:20:26.351235   42204 ssh_runner.go:195] Run: systemctl --version
	I0717 01:20:26.357005   42204 command_runner.go:130] > systemd 252 (252)
	I0717 01:20:26.357063   42204 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0717 01:20:26.357268   42204 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:20:26.521979   42204 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 01:20:26.528454   42204 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0717 01:20:26.528524   42204 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:20:26.528592   42204 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:20:26.538169   42204 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 01:20:26.538194   42204 start.go:495] detecting cgroup driver to use...
	I0717 01:20:26.538252   42204 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:20:26.554046   42204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:20:26.568329   42204 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:20:26.568380   42204 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:20:26.583108   42204 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:20:26.597131   42204 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:20:26.747821   42204 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:20:26.890484   42204 docker.go:233] disabling docker service ...
	I0717 01:20:26.890560   42204 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:20:26.908093   42204 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:20:26.922120   42204 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:20:27.063171   42204 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:20:27.203982   42204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:20:27.218501   42204 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:20:27.238127   42204 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0717 01:20:27.238717   42204 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 01:20:27.238780   42204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:20:27.249699   42204 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:20:27.249761   42204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:20:27.260861   42204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:20:27.271139   42204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:20:27.281781   42204 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:20:27.292674   42204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:20:27.303107   42204 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:20:27.314308   42204 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:20:27.324522   42204 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:20:27.333662   42204 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0717 01:20:27.333751   42204 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:20:27.342969   42204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:20:27.484020   42204 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:20:30.534979   42204 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.050927224s)
	I0717 01:20:30.535008   42204 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:20:30.535062   42204 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:20:30.539822   42204 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0717 01:20:30.539839   42204 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0717 01:20:30.539846   42204 command_runner.go:130] > Device: 0,22	Inode: 1341        Links: 1
	I0717 01:20:30.539853   42204 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 01:20:30.539858   42204 command_runner.go:130] > Access: 2024-07-17 01:20:30.413962085 +0000
	I0717 01:20:30.539892   42204 command_runner.go:130] > Modify: 2024-07-17 01:20:30.413962085 +0000
	I0717 01:20:30.539902   42204 command_runner.go:130] > Change: 2024-07-17 01:20:30.413962085 +0000
	I0717 01:20:30.539907   42204 command_runner.go:130] >  Birth: -
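
Editor's note: a sketch of the 60-second socket wait described above. The loop is only an illustration; the log itself records the announcement at start.go:542 and a single successful stat call.

# Illustrative only: poll for the CRI-O socket for up to 60s, then stat it.
for _ in $(seq 1 60); do
  [ -S /var/run/crio/crio.sock ] && break
  sleep 1
done
stat /var/run/crio/crio.sock
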
	I0717 01:20:30.540043   42204 start.go:563] Will wait 60s for crictl version
	I0717 01:20:30.540101   42204 ssh_runner.go:195] Run: which crictl
	I0717 01:20:30.543980   42204 command_runner.go:130] > /usr/bin/crictl
	I0717 01:20:30.544038   42204 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:20:30.591683   42204 command_runner.go:130] > Version:  0.1.0
	I0717 01:20:30.591706   42204 command_runner.go:130] > RuntimeName:  cri-o
	I0717 01:20:30.591713   42204 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0717 01:20:30.591720   42204 command_runner.go:130] > RuntimeApiVersion:  v1
	I0717 01:20:30.593536   42204 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:20:30.593598   42204 ssh_runner.go:195] Run: crio --version
	I0717 01:20:30.621749   42204 command_runner.go:130] > crio version 1.29.1
	I0717 01:20:30.621774   42204 command_runner.go:130] > Version:        1.29.1
	I0717 01:20:30.621787   42204 command_runner.go:130] > GitCommit:      unknown
	I0717 01:20:30.621794   42204 command_runner.go:130] > GitCommitDate:  unknown
	I0717 01:20:30.621800   42204 command_runner.go:130] > GitTreeState:   clean
	I0717 01:20:30.621812   42204 command_runner.go:130] > BuildDate:      2024-07-16T21:25:55Z
	I0717 01:20:30.621820   42204 command_runner.go:130] > GoVersion:      go1.21.6
	I0717 01:20:30.621825   42204 command_runner.go:130] > Compiler:       gc
	I0717 01:20:30.621832   42204 command_runner.go:130] > Platform:       linux/amd64
	I0717 01:20:30.621837   42204 command_runner.go:130] > Linkmode:       dynamic
	I0717 01:20:30.621845   42204 command_runner.go:130] > BuildTags:      
	I0717 01:20:30.621854   42204 command_runner.go:130] >   containers_image_ostree_stub
	I0717 01:20:30.621861   42204 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0717 01:20:30.621869   42204 command_runner.go:130] >   btrfs_noversion
	I0717 01:20:30.621877   42204 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0717 01:20:30.621884   42204 command_runner.go:130] >   libdm_no_deferred_remove
	I0717 01:20:30.621893   42204 command_runner.go:130] >   seccomp
	I0717 01:20:30.621901   42204 command_runner.go:130] > LDFlags:          unknown
	I0717 01:20:30.621910   42204 command_runner.go:130] > SeccompEnabled:   true
	I0717 01:20:30.621918   42204 command_runner.go:130] > AppArmorEnabled:  false
	I0717 01:20:30.622970   42204 ssh_runner.go:195] Run: crio --version
	I0717 01:20:30.651933   42204 command_runner.go:130] > crio version 1.29.1
	I0717 01:20:30.651955   42204 command_runner.go:130] > Version:        1.29.1
	I0717 01:20:30.651960   42204 command_runner.go:130] > GitCommit:      unknown
	I0717 01:20:30.651964   42204 command_runner.go:130] > GitCommitDate:  unknown
	I0717 01:20:30.651968   42204 command_runner.go:130] > GitTreeState:   clean
	I0717 01:20:30.651974   42204 command_runner.go:130] > BuildDate:      2024-07-16T21:25:55Z
	I0717 01:20:30.651978   42204 command_runner.go:130] > GoVersion:      go1.21.6
	I0717 01:20:30.651982   42204 command_runner.go:130] > Compiler:       gc
	I0717 01:20:30.651986   42204 command_runner.go:130] > Platform:       linux/amd64
	I0717 01:20:30.651990   42204 command_runner.go:130] > Linkmode:       dynamic
	I0717 01:20:30.651994   42204 command_runner.go:130] > BuildTags:      
	I0717 01:20:30.651998   42204 command_runner.go:130] >   containers_image_ostree_stub
	I0717 01:20:30.652002   42204 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0717 01:20:30.652005   42204 command_runner.go:130] >   btrfs_noversion
	I0717 01:20:30.652009   42204 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0717 01:20:30.652013   42204 command_runner.go:130] >   libdm_no_deferred_remove
	I0717 01:20:30.652021   42204 command_runner.go:130] >   seccomp
	I0717 01:20:30.652025   42204 command_runner.go:130] > LDFlags:          unknown
	I0717 01:20:30.652037   42204 command_runner.go:130] > SeccompEnabled:   true
	I0717 01:20:30.652042   42204 command_runner.go:130] > AppArmorEnabled:  false
	I0717 01:20:30.655384   42204 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 01:20:30.656700   42204 main.go:141] libmachine: (multinode-025900) Calling .GetIP
	I0717 01:20:30.659120   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:20:30.659523   42204 main.go:141] libmachine: (multinode-025900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d8:11", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:14 +0000 UTC Type:0 Mac:52:54:00:20:d8:11 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-025900 Clientid:01:52:54:00:20:d8:11}
	I0717 01:20:30.659557   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined IP address 192.168.39.81 and MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:20:30.659739   42204 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 01:20:30.664199   42204 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0717 01:20:30.664283   42204 kubeadm.go:883] updating cluster {Name:multinode-025900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.2 ClusterName:multinode-025900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.68 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:20:30.664411   42204 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:20:30.664449   42204 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:20:30.712650   42204 command_runner.go:130] > {
	I0717 01:20:30.712676   42204 command_runner.go:130] >   "images": [
	I0717 01:20:30.712681   42204 command_runner.go:130] >     {
	I0717 01:20:30.712689   42204 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0717 01:20:30.712694   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.712700   42204 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0717 01:20:30.712704   42204 command_runner.go:130] >       ],
	I0717 01:20:30.712708   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.712716   42204 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0717 01:20:30.712725   42204 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0717 01:20:30.712729   42204 command_runner.go:130] >       ],
	I0717 01:20:30.712733   42204 command_runner.go:130] >       "size": "65908273",
	I0717 01:20:30.712742   42204 command_runner.go:130] >       "uid": null,
	I0717 01:20:30.712746   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.712754   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.712758   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.712762   42204 command_runner.go:130] >     },
	I0717 01:20:30.712764   42204 command_runner.go:130] >     {
	I0717 01:20:30.712770   42204 command_runner.go:130] >       "id": "a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda",
	I0717 01:20:30.712774   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.712779   42204 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-f6ad1f6e"
	I0717 01:20:30.712783   42204 command_runner.go:130] >       ],
	I0717 01:20:30.712787   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.712797   42204 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381",
	I0717 01:20:30.712803   42204 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:d61a2b3d0a49f21f2556f20ae629282e5b4076940972ac659d8cda1cdc6f9a20"
	I0717 01:20:30.712807   42204 command_runner.go:130] >       ],
	I0717 01:20:30.712811   42204 command_runner.go:130] >       "size": "87166004",
	I0717 01:20:30.712817   42204 command_runner.go:130] >       "uid": null,
	I0717 01:20:30.712825   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.712832   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.712836   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.712840   42204 command_runner.go:130] >     },
	I0717 01:20:30.712843   42204 command_runner.go:130] >     {
	I0717 01:20:30.712849   42204 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0717 01:20:30.712854   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.712859   42204 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0717 01:20:30.712864   42204 command_runner.go:130] >       ],
	I0717 01:20:30.712868   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.712878   42204 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0717 01:20:30.712885   42204 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0717 01:20:30.712890   42204 command_runner.go:130] >       ],
	I0717 01:20:30.712895   42204 command_runner.go:130] >       "size": "1363676",
	I0717 01:20:30.712899   42204 command_runner.go:130] >       "uid": null,
	I0717 01:20:30.712902   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.712906   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.712911   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.712914   42204 command_runner.go:130] >     },
	I0717 01:20:30.712917   42204 command_runner.go:130] >     {
	I0717 01:20:30.712923   42204 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0717 01:20:30.712929   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.712934   42204 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0717 01:20:30.712939   42204 command_runner.go:130] >       ],
	I0717 01:20:30.712943   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.712952   42204 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0717 01:20:30.712964   42204 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0717 01:20:30.712970   42204 command_runner.go:130] >       ],
	I0717 01:20:30.712974   42204 command_runner.go:130] >       "size": "31470524",
	I0717 01:20:30.712980   42204 command_runner.go:130] >       "uid": null,
	I0717 01:20:30.712985   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.712997   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.713001   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.713007   42204 command_runner.go:130] >     },
	I0717 01:20:30.713018   42204 command_runner.go:130] >     {
	I0717 01:20:30.713025   42204 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0717 01:20:30.713031   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.713037   42204 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0717 01:20:30.713043   42204 command_runner.go:130] >       ],
	I0717 01:20:30.713047   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.713055   42204 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0717 01:20:30.713065   42204 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0717 01:20:30.713070   42204 command_runner.go:130] >       ],
	I0717 01:20:30.713074   42204 command_runner.go:130] >       "size": "61245718",
	I0717 01:20:30.713078   42204 command_runner.go:130] >       "uid": null,
	I0717 01:20:30.713084   42204 command_runner.go:130] >       "username": "nonroot",
	I0717 01:20:30.713088   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.713094   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.713097   42204 command_runner.go:130] >     },
	I0717 01:20:30.713100   42204 command_runner.go:130] >     {
	I0717 01:20:30.713106   42204 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0717 01:20:30.713111   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.713115   42204 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0717 01:20:30.713121   42204 command_runner.go:130] >       ],
	I0717 01:20:30.713125   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.713134   42204 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0717 01:20:30.713143   42204 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0717 01:20:30.713149   42204 command_runner.go:130] >       ],
	I0717 01:20:30.713153   42204 command_runner.go:130] >       "size": "150779692",
	I0717 01:20:30.713159   42204 command_runner.go:130] >       "uid": {
	I0717 01:20:30.713163   42204 command_runner.go:130] >         "value": "0"
	I0717 01:20:30.713169   42204 command_runner.go:130] >       },
	I0717 01:20:30.713173   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.713179   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.713183   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.713188   42204 command_runner.go:130] >     },
	I0717 01:20:30.713191   42204 command_runner.go:130] >     {
	I0717 01:20:30.713198   42204 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0717 01:20:30.713204   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.713209   42204 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0717 01:20:30.713215   42204 command_runner.go:130] >       ],
	I0717 01:20:30.713219   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.713229   42204 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0717 01:20:30.713238   42204 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0717 01:20:30.713244   42204 command_runner.go:130] >       ],
	I0717 01:20:30.713248   42204 command_runner.go:130] >       "size": "117609954",
	I0717 01:20:30.713253   42204 command_runner.go:130] >       "uid": {
	I0717 01:20:30.713257   42204 command_runner.go:130] >         "value": "0"
	I0717 01:20:30.713263   42204 command_runner.go:130] >       },
	I0717 01:20:30.713267   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.713271   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.713275   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.713279   42204 command_runner.go:130] >     },
	I0717 01:20:30.713284   42204 command_runner.go:130] >     {
	I0717 01:20:30.713290   42204 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0717 01:20:30.713296   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.713301   42204 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0717 01:20:30.713307   42204 command_runner.go:130] >       ],
	I0717 01:20:30.713312   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.713326   42204 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0717 01:20:30.713336   42204 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0717 01:20:30.713342   42204 command_runner.go:130] >       ],
	I0717 01:20:30.713346   42204 command_runner.go:130] >       "size": "112194888",
	I0717 01:20:30.713352   42204 command_runner.go:130] >       "uid": {
	I0717 01:20:30.713356   42204 command_runner.go:130] >         "value": "0"
	I0717 01:20:30.713362   42204 command_runner.go:130] >       },
	I0717 01:20:30.713366   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.713370   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.713373   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.713377   42204 command_runner.go:130] >     },
	I0717 01:20:30.713379   42204 command_runner.go:130] >     {
	I0717 01:20:30.713385   42204 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0717 01:20:30.713389   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.713393   42204 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0717 01:20:30.713397   42204 command_runner.go:130] >       ],
	I0717 01:20:30.713400   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.713419   42204 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0717 01:20:30.713427   42204 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0717 01:20:30.713431   42204 command_runner.go:130] >       ],
	I0717 01:20:30.713438   42204 command_runner.go:130] >       "size": "85953433",
	I0717 01:20:30.713442   42204 command_runner.go:130] >       "uid": null,
	I0717 01:20:30.713448   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.713452   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.713457   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.713461   42204 command_runner.go:130] >     },
	I0717 01:20:30.713466   42204 command_runner.go:130] >     {
	I0717 01:20:30.713472   42204 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0717 01:20:30.713478   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.713484   42204 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0717 01:20:30.713489   42204 command_runner.go:130] >       ],
	I0717 01:20:30.713493   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.713502   42204 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0717 01:20:30.713511   42204 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0717 01:20:30.713514   42204 command_runner.go:130] >       ],
	I0717 01:20:30.713521   42204 command_runner.go:130] >       "size": "63051080",
	I0717 01:20:30.713524   42204 command_runner.go:130] >       "uid": {
	I0717 01:20:30.713530   42204 command_runner.go:130] >         "value": "0"
	I0717 01:20:30.713533   42204 command_runner.go:130] >       },
	I0717 01:20:30.713540   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.713544   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.713550   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.713553   42204 command_runner.go:130] >     },
	I0717 01:20:30.713559   42204 command_runner.go:130] >     {
	I0717 01:20:30.713565   42204 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0717 01:20:30.713571   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.713576   42204 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0717 01:20:30.713581   42204 command_runner.go:130] >       ],
	I0717 01:20:30.713585   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.713593   42204 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0717 01:20:30.713600   42204 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0717 01:20:30.713605   42204 command_runner.go:130] >       ],
	I0717 01:20:30.713610   42204 command_runner.go:130] >       "size": "750414",
	I0717 01:20:30.713615   42204 command_runner.go:130] >       "uid": {
	I0717 01:20:30.713619   42204 command_runner.go:130] >         "value": "65535"
	I0717 01:20:30.713625   42204 command_runner.go:130] >       },
	I0717 01:20:30.713629   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.713635   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.713639   42204 command_runner.go:130] >       "pinned": true
	I0717 01:20:30.713644   42204 command_runner.go:130] >     }
	I0717 01:20:30.713648   42204 command_runner.go:130] >   ]
	I0717 01:20:30.713653   42204 command_runner.go:130] > }
	I0717 01:20:30.713833   42204 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:20:30.713844   42204 crio.go:433] Images already preloaded, skipping extraction
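
Editor's note: the `crictl images --output json` dump above is what the preload check (crio.go:514) inspects before deciding to skip extraction. A hypothetical shell equivalent of that check is sketched below; jq being available on the guest is an assumption, and the expected tags are simply the images listed in the dump.

# Hypothetical re-check of the preload decision: print any expected tag
# that the runtime does not report (jq on the guest is an assumption).
expected="registry.k8s.io/kube-apiserver:v1.30.2 registry.k8s.io/kube-controller-manager:v1.30.2 registry.k8s.io/kube-scheduler:v1.30.2 registry.k8s.io/kube-proxy:v1.30.2 registry.k8s.io/coredns/coredns:v1.11.1 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/pause:3.9"
have=$(sudo crictl images --output json | jq -r '.images[].repoTags[]')
for img in $expected; do
  grep -qx "$img" <<<"$have" || echo "missing: $img"
done
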
	I0717 01:20:30.713906   42204 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:20:30.748555   42204 command_runner.go:130] > {
	I0717 01:20:30.748572   42204 command_runner.go:130] >   "images": [
	I0717 01:20:30.748576   42204 command_runner.go:130] >     {
	I0717 01:20:30.748584   42204 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0717 01:20:30.748588   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.748594   42204 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0717 01:20:30.748597   42204 command_runner.go:130] >       ],
	I0717 01:20:30.748601   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.748616   42204 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0717 01:20:30.748626   42204 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0717 01:20:30.748634   42204 command_runner.go:130] >       ],
	I0717 01:20:30.748641   42204 command_runner.go:130] >       "size": "65908273",
	I0717 01:20:30.748651   42204 command_runner.go:130] >       "uid": null,
	I0717 01:20:30.748657   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.748664   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.748671   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.748675   42204 command_runner.go:130] >     },
	I0717 01:20:30.748683   42204 command_runner.go:130] >     {
	I0717 01:20:30.748692   42204 command_runner.go:130] >       "id": "a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda",
	I0717 01:20:30.748699   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.748708   42204 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-f6ad1f6e"
	I0717 01:20:30.748719   42204 command_runner.go:130] >       ],
	I0717 01:20:30.748725   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.748734   42204 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381",
	I0717 01:20:30.748741   42204 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:d61a2b3d0a49f21f2556f20ae629282e5b4076940972ac659d8cda1cdc6f9a20"
	I0717 01:20:30.748746   42204 command_runner.go:130] >       ],
	I0717 01:20:30.748750   42204 command_runner.go:130] >       "size": "87166004",
	I0717 01:20:30.748754   42204 command_runner.go:130] >       "uid": null,
	I0717 01:20:30.748768   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.748774   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.748778   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.748783   42204 command_runner.go:130] >     },
	I0717 01:20:30.748788   42204 command_runner.go:130] >     {
	I0717 01:20:30.748794   42204 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0717 01:20:30.748799   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.748804   42204 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0717 01:20:30.748810   42204 command_runner.go:130] >       ],
	I0717 01:20:30.748814   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.748821   42204 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0717 01:20:30.748830   42204 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0717 01:20:30.748835   42204 command_runner.go:130] >       ],
	I0717 01:20:30.748839   42204 command_runner.go:130] >       "size": "1363676",
	I0717 01:20:30.748845   42204 command_runner.go:130] >       "uid": null,
	I0717 01:20:30.748849   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.748868   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.748874   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.748878   42204 command_runner.go:130] >     },
	I0717 01:20:30.748883   42204 command_runner.go:130] >     {
	I0717 01:20:30.748889   42204 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0717 01:20:30.748895   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.748900   42204 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0717 01:20:30.748905   42204 command_runner.go:130] >       ],
	I0717 01:20:30.748909   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.748918   42204 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0717 01:20:30.748930   42204 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0717 01:20:30.748936   42204 command_runner.go:130] >       ],
	I0717 01:20:30.748940   42204 command_runner.go:130] >       "size": "31470524",
	I0717 01:20:30.748945   42204 command_runner.go:130] >       "uid": null,
	I0717 01:20:30.748951   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.748955   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.748961   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.748964   42204 command_runner.go:130] >     },
	I0717 01:20:30.748970   42204 command_runner.go:130] >     {
	I0717 01:20:30.748976   42204 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0717 01:20:30.748982   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.748987   42204 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0717 01:20:30.748992   42204 command_runner.go:130] >       ],
	I0717 01:20:30.748996   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.749003   42204 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0717 01:20:30.749011   42204 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0717 01:20:30.749015   42204 command_runner.go:130] >       ],
	I0717 01:20:30.749022   42204 command_runner.go:130] >       "size": "61245718",
	I0717 01:20:30.749026   42204 command_runner.go:130] >       "uid": null,
	I0717 01:20:30.749030   42204 command_runner.go:130] >       "username": "nonroot",
	I0717 01:20:30.749034   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.749041   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.749044   42204 command_runner.go:130] >     },
	I0717 01:20:30.749048   42204 command_runner.go:130] >     {
	I0717 01:20:30.749054   42204 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0717 01:20:30.749058   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.749063   42204 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0717 01:20:30.749068   42204 command_runner.go:130] >       ],
	I0717 01:20:30.749072   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.749081   42204 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0717 01:20:30.749090   42204 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0717 01:20:30.749096   42204 command_runner.go:130] >       ],
	I0717 01:20:30.749101   42204 command_runner.go:130] >       "size": "150779692",
	I0717 01:20:30.749106   42204 command_runner.go:130] >       "uid": {
	I0717 01:20:30.749111   42204 command_runner.go:130] >         "value": "0"
	I0717 01:20:30.749119   42204 command_runner.go:130] >       },
	I0717 01:20:30.749123   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.749129   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.749133   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.749139   42204 command_runner.go:130] >     },
	I0717 01:20:30.749143   42204 command_runner.go:130] >     {
	I0717 01:20:30.749151   42204 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0717 01:20:30.749157   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.749162   42204 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0717 01:20:30.749167   42204 command_runner.go:130] >       ],
	I0717 01:20:30.749171   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.749180   42204 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0717 01:20:30.749189   42204 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0717 01:20:30.749195   42204 command_runner.go:130] >       ],
	I0717 01:20:30.749199   42204 command_runner.go:130] >       "size": "117609954",
	I0717 01:20:30.749206   42204 command_runner.go:130] >       "uid": {
	I0717 01:20:30.749210   42204 command_runner.go:130] >         "value": "0"
	I0717 01:20:30.749216   42204 command_runner.go:130] >       },
	I0717 01:20:30.749220   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.749225   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.749229   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.749235   42204 command_runner.go:130] >     },
	I0717 01:20:30.749238   42204 command_runner.go:130] >     {
	I0717 01:20:30.749246   42204 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0717 01:20:30.749251   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.749256   42204 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0717 01:20:30.749267   42204 command_runner.go:130] >       ],
	I0717 01:20:30.749273   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.749289   42204 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0717 01:20:30.749299   42204 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0717 01:20:30.749302   42204 command_runner.go:130] >       ],
	I0717 01:20:30.749306   42204 command_runner.go:130] >       "size": "112194888",
	I0717 01:20:30.749312   42204 command_runner.go:130] >       "uid": {
	I0717 01:20:30.749316   42204 command_runner.go:130] >         "value": "0"
	I0717 01:20:30.749322   42204 command_runner.go:130] >       },
	I0717 01:20:30.749326   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.749332   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.749336   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.749342   42204 command_runner.go:130] >     },
	I0717 01:20:30.749345   42204 command_runner.go:130] >     {
	I0717 01:20:30.749356   42204 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0717 01:20:30.749360   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.749368   42204 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0717 01:20:30.749371   42204 command_runner.go:130] >       ],
	I0717 01:20:30.749377   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.749385   42204 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0717 01:20:30.749396   42204 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0717 01:20:30.749402   42204 command_runner.go:130] >       ],
	I0717 01:20:30.749406   42204 command_runner.go:130] >       "size": "85953433",
	I0717 01:20:30.749412   42204 command_runner.go:130] >       "uid": null,
	I0717 01:20:30.749416   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.749422   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.749426   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.749432   42204 command_runner.go:130] >     },
	I0717 01:20:30.749435   42204 command_runner.go:130] >     {
	I0717 01:20:30.749442   42204 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0717 01:20:30.749448   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.749453   42204 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0717 01:20:30.749458   42204 command_runner.go:130] >       ],
	I0717 01:20:30.749462   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.749472   42204 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0717 01:20:30.749481   42204 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0717 01:20:30.749487   42204 command_runner.go:130] >       ],
	I0717 01:20:30.749491   42204 command_runner.go:130] >       "size": "63051080",
	I0717 01:20:30.749497   42204 command_runner.go:130] >       "uid": {
	I0717 01:20:30.749501   42204 command_runner.go:130] >         "value": "0"
	I0717 01:20:30.749506   42204 command_runner.go:130] >       },
	I0717 01:20:30.749510   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.749515   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.749520   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.749526   42204 command_runner.go:130] >     },
	I0717 01:20:30.749530   42204 command_runner.go:130] >     {
	I0717 01:20:30.749538   42204 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0717 01:20:30.749544   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.749549   42204 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0717 01:20:30.749554   42204 command_runner.go:130] >       ],
	I0717 01:20:30.749559   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.749568   42204 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0717 01:20:30.749576   42204 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0717 01:20:30.749582   42204 command_runner.go:130] >       ],
	I0717 01:20:30.749586   42204 command_runner.go:130] >       "size": "750414",
	I0717 01:20:30.749592   42204 command_runner.go:130] >       "uid": {
	I0717 01:20:30.749596   42204 command_runner.go:130] >         "value": "65535"
	I0717 01:20:30.749602   42204 command_runner.go:130] >       },
	I0717 01:20:30.749606   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.749612   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.749616   42204 command_runner.go:130] >       "pinned": true
	I0717 01:20:30.749622   42204 command_runner.go:130] >     }
	I0717 01:20:30.749625   42204 command_runner.go:130] >   ]
	I0717 01:20:30.749630   42204 command_runner.go:130] > }
	I0717 01:20:30.749755   42204 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:20:30.749772   42204 cache_images.go:84] Images are preloaded, skipping loading
	I0717 01:20:30.749780   42204 kubeadm.go:934] updating node { 192.168.39.81 8443 v1.30.2 crio true true} ...
	I0717 01:20:30.749892   42204 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-025900 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-025900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
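
Editor's note: the kubelet unit content above is what minikube renders for this node. A sketch of installing an equivalent systemd drop-in by hand follows; the drop-in path is an assumption, while the unit body and flags are copied from the log.

sudo mkdir -p /etc/systemd/system/kubelet.service.d
sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-025900 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.81

[Install]
EOF
sudo systemctl daemon-reload
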
	I0717 01:20:30.749959   42204 ssh_runner.go:195] Run: crio config
	I0717 01:20:30.782286   42204 command_runner.go:130] ! time="2024-07-17 01:20:30.766850106Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0717 01:20:30.788430   42204 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0717 01:20:30.798705   42204 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0717 01:20:30.798730   42204 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0717 01:20:30.798742   42204 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0717 01:20:30.798747   42204 command_runner.go:130] > #
	I0717 01:20:30.798754   42204 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0717 01:20:30.798760   42204 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0717 01:20:30.798766   42204 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0717 01:20:30.798773   42204 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0717 01:20:30.798777   42204 command_runner.go:130] > # reload'.
	I0717 01:20:30.798783   42204 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0717 01:20:30.798789   42204 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0717 01:20:30.798798   42204 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0717 01:20:30.798803   42204 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0717 01:20:30.798806   42204 command_runner.go:130] > [crio]
	I0717 01:20:30.798815   42204 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0717 01:20:30.798822   42204 command_runner.go:130] > # containers images, in this directory.
	I0717 01:20:30.798827   42204 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0717 01:20:30.798835   42204 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0717 01:20:30.798839   42204 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0717 01:20:30.798847   42204 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0717 01:20:30.798851   42204 command_runner.go:130] > # imagestore = ""
	I0717 01:20:30.798858   42204 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0717 01:20:30.798864   42204 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0717 01:20:30.798871   42204 command_runner.go:130] > storage_driver = "overlay"
	I0717 01:20:30.798876   42204 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0717 01:20:30.798884   42204 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0717 01:20:30.798898   42204 command_runner.go:130] > storage_option = [
	I0717 01:20:30.798904   42204 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0717 01:20:30.798907   42204 command_runner.go:130] > ]
	I0717 01:20:30.798916   42204 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0717 01:20:30.798922   42204 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0717 01:20:30.798928   42204 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0717 01:20:30.798933   42204 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0717 01:20:30.798941   42204 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0717 01:20:30.798948   42204 command_runner.go:130] > # always happen on a node reboot
	I0717 01:20:30.798952   42204 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0717 01:20:30.798963   42204 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0717 01:20:30.798970   42204 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0717 01:20:30.798975   42204 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0717 01:20:30.798981   42204 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0717 01:20:30.798988   42204 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0717 01:20:30.799001   42204 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0717 01:20:30.799008   42204 command_runner.go:130] > # internal_wipe = true
	I0717 01:20:30.799015   42204 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0717 01:20:30.799022   42204 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0717 01:20:30.799027   42204 command_runner.go:130] > # internal_repair = false
	I0717 01:20:30.799034   42204 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0717 01:20:30.799045   42204 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0717 01:20:30.799053   42204 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0717 01:20:30.799058   42204 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0717 01:20:30.799065   42204 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0717 01:20:30.799069   42204 command_runner.go:130] > [crio.api]
	I0717 01:20:30.799075   42204 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0717 01:20:30.799084   42204 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0717 01:20:30.799091   42204 command_runner.go:130] > # IP address on which the stream server will listen.
	I0717 01:20:30.799095   42204 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0717 01:20:30.799104   42204 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0717 01:20:30.799111   42204 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0717 01:20:30.799117   42204 command_runner.go:130] > # stream_port = "0"
	I0717 01:20:30.799122   42204 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0717 01:20:30.799128   42204 command_runner.go:130] > # stream_enable_tls = false
	I0717 01:20:30.799134   42204 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0717 01:20:30.799140   42204 command_runner.go:130] > # stream_idle_timeout = ""
	I0717 01:20:30.799149   42204 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0717 01:20:30.799157   42204 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0717 01:20:30.799161   42204 command_runner.go:130] > # minutes.
	I0717 01:20:30.799165   42204 command_runner.go:130] > # stream_tls_cert = ""
	I0717 01:20:30.799171   42204 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0717 01:20:30.799179   42204 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0717 01:20:30.799186   42204 command_runner.go:130] > # stream_tls_key = ""
	I0717 01:20:30.799191   42204 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0717 01:20:30.799199   42204 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0717 01:20:30.799213   42204 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0717 01:20:30.799220   42204 command_runner.go:130] > # stream_tls_ca = ""
	I0717 01:20:30.799227   42204 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0717 01:20:30.799233   42204 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0717 01:20:30.799240   42204 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0717 01:20:30.799247   42204 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0717 01:20:30.799253   42204 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0717 01:20:30.799260   42204 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0717 01:20:30.799264   42204 command_runner.go:130] > [crio.runtime]
	I0717 01:20:30.799272   42204 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0717 01:20:30.799279   42204 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0717 01:20:30.799283   42204 command_runner.go:130] > # "nofile=1024:2048"
	I0717 01:20:30.799289   42204 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0717 01:20:30.799295   42204 command_runner.go:130] > # default_ulimits = [
	I0717 01:20:30.799299   42204 command_runner.go:130] > # ]
	I0717 01:20:30.799307   42204 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0717 01:20:30.799313   42204 command_runner.go:130] > # no_pivot = false
	I0717 01:20:30.799318   42204 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0717 01:20:30.799324   42204 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0717 01:20:30.799330   42204 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0717 01:20:30.799336   42204 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0717 01:20:30.799343   42204 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0717 01:20:30.799350   42204 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 01:20:30.799357   42204 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0717 01:20:30.799361   42204 command_runner.go:130] > # Cgroup setting for conmon
	I0717 01:20:30.799370   42204 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0717 01:20:30.799377   42204 command_runner.go:130] > conmon_cgroup = "pod"
	I0717 01:20:30.799382   42204 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0717 01:20:30.799389   42204 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0717 01:20:30.799397   42204 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 01:20:30.799403   42204 command_runner.go:130] > conmon_env = [
	I0717 01:20:30.799408   42204 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0717 01:20:30.799411   42204 command_runner.go:130] > ]
	I0717 01:20:30.799418   42204 command_runner.go:130] > # Additional environment variables to set for all the
	I0717 01:20:30.799423   42204 command_runner.go:130] > # containers. These are overridden if set in the
	I0717 01:20:30.799431   42204 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0717 01:20:30.799435   42204 command_runner.go:130] > # default_env = [
	I0717 01:20:30.799440   42204 command_runner.go:130] > # ]
	I0717 01:20:30.799445   42204 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0717 01:20:30.799454   42204 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0717 01:20:30.799460   42204 command_runner.go:130] > # selinux = false
	I0717 01:20:30.799466   42204 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0717 01:20:30.799473   42204 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0717 01:20:30.799481   42204 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0717 01:20:30.799485   42204 command_runner.go:130] > # seccomp_profile = ""
	I0717 01:20:30.799491   42204 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0717 01:20:30.799496   42204 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0717 01:20:30.799504   42204 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0717 01:20:30.799509   42204 command_runner.go:130] > # which might increase security.
	I0717 01:20:30.799517   42204 command_runner.go:130] > # This option is currently deprecated,
	I0717 01:20:30.799525   42204 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0717 01:20:30.799531   42204 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0717 01:20:30.799537   42204 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0717 01:20:30.799546   42204 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0717 01:20:30.799554   42204 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0717 01:20:30.799563   42204 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0717 01:20:30.799570   42204 command_runner.go:130] > # This option supports live configuration reload.
	I0717 01:20:30.799575   42204 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0717 01:20:30.799582   42204 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0717 01:20:30.799589   42204 command_runner.go:130] > # the cgroup blockio controller.
	I0717 01:20:30.799593   42204 command_runner.go:130] > # blockio_config_file = ""
	I0717 01:20:30.799601   42204 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0717 01:20:30.799606   42204 command_runner.go:130] > # blockio parameters.
	I0717 01:20:30.799610   42204 command_runner.go:130] > # blockio_reload = false
	I0717 01:20:30.799618   42204 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0717 01:20:30.799624   42204 command_runner.go:130] > # irqbalance daemon.
	I0717 01:20:30.799629   42204 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0717 01:20:30.799642   42204 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0717 01:20:30.799651   42204 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0717 01:20:30.799657   42204 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0717 01:20:30.799665   42204 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0717 01:20:30.799672   42204 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0717 01:20:30.799679   42204 command_runner.go:130] > # This option supports live configuration reload.
	I0717 01:20:30.799684   42204 command_runner.go:130] > # rdt_config_file = ""
	I0717 01:20:30.799691   42204 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0717 01:20:30.799695   42204 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0717 01:20:30.799716   42204 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0717 01:20:30.799724   42204 command_runner.go:130] > # separate_pull_cgroup = ""
	I0717 01:20:30.799730   42204 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0717 01:20:30.799736   42204 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0717 01:20:30.799741   42204 command_runner.go:130] > # will be added.
	I0717 01:20:30.799747   42204 command_runner.go:130] > # default_capabilities = [
	I0717 01:20:30.799753   42204 command_runner.go:130] > # 	"CHOWN",
	I0717 01:20:30.799757   42204 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0717 01:20:30.799763   42204 command_runner.go:130] > # 	"FSETID",
	I0717 01:20:30.799767   42204 command_runner.go:130] > # 	"FOWNER",
	I0717 01:20:30.799773   42204 command_runner.go:130] > # 	"SETGID",
	I0717 01:20:30.799777   42204 command_runner.go:130] > # 	"SETUID",
	I0717 01:20:30.799784   42204 command_runner.go:130] > # 	"SETPCAP",
	I0717 01:20:30.799788   42204 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0717 01:20:30.799793   42204 command_runner.go:130] > # 	"KILL",
	I0717 01:20:30.799797   42204 command_runner.go:130] > # ]
	I0717 01:20:30.799806   42204 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0717 01:20:30.799814   42204 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0717 01:20:30.799819   42204 command_runner.go:130] > # add_inheritable_capabilities = false
	I0717 01:20:30.799824   42204 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0717 01:20:30.799832   42204 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 01:20:30.799838   42204 command_runner.go:130] > default_sysctls = [
	I0717 01:20:30.799843   42204 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0717 01:20:30.799848   42204 command_runner.go:130] > ]
	I0717 01:20:30.799853   42204 command_runner.go:130] > # List of devices on the host that a
	I0717 01:20:30.799861   42204 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0717 01:20:30.799866   42204 command_runner.go:130] > # allowed_devices = [
	I0717 01:20:30.799870   42204 command_runner.go:130] > # 	"/dev/fuse",
	I0717 01:20:30.799875   42204 command_runner.go:130] > # ]
	I0717 01:20:30.799880   42204 command_runner.go:130] > # List of additional devices, specified as
	I0717 01:20:30.799889   42204 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0717 01:20:30.799895   42204 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0717 01:20:30.799903   42204 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 01:20:30.799909   42204 command_runner.go:130] > # additional_devices = [
	I0717 01:20:30.799912   42204 command_runner.go:130] > # ]
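For illustration only, a minimal sketch of what an uncommented additional_devices entry would look like in the host:container:permissions format described above (the /dev/fuse path is reused from the allowed_devices example; it is not set in this run):

	additional_devices = [
		"/dev/fuse:/dev/fuse:rwm",
	]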
	I0717 01:20:30.799919   42204 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0717 01:20:30.799923   42204 command_runner.go:130] > # cdi_spec_dirs = [
	I0717 01:20:30.799928   42204 command_runner.go:130] > # 	"/etc/cdi",
	I0717 01:20:30.799932   42204 command_runner.go:130] > # 	"/var/run/cdi",
	I0717 01:20:30.799937   42204 command_runner.go:130] > # ]
	I0717 01:20:30.799943   42204 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0717 01:20:30.799951   42204 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0717 01:20:30.799958   42204 command_runner.go:130] > # Defaults to false.
	I0717 01:20:30.799963   42204 command_runner.go:130] > # device_ownership_from_security_context = false
	I0717 01:20:30.799971   42204 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0717 01:20:30.799979   42204 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0717 01:20:30.799984   42204 command_runner.go:130] > # hooks_dir = [
	I0717 01:20:30.799990   42204 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0717 01:20:30.799998   42204 command_runner.go:130] > # ]
	I0717 01:20:30.800005   42204 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0717 01:20:30.800011   42204 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0717 01:20:30.800019   42204 command_runner.go:130] > # its default mounts from the following two files:
	I0717 01:20:30.800022   42204 command_runner.go:130] > #
	I0717 01:20:30.800027   42204 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0717 01:20:30.800034   42204 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0717 01:20:30.800041   42204 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0717 01:20:30.800044   42204 command_runner.go:130] > #
	I0717 01:20:30.800050   42204 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0717 01:20:30.800058   42204 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0717 01:20:30.800064   42204 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0717 01:20:30.800071   42204 command_runner.go:130] > #      only add mounts it finds in this file.
	I0717 01:20:30.800074   42204 command_runner.go:130] > #
	I0717 01:20:30.800078   42204 command_runner.go:130] > # default_mounts_file = ""
	I0717 01:20:30.800085   42204 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0717 01:20:30.800091   42204 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0717 01:20:30.800097   42204 command_runner.go:130] > pids_limit = 1024
	I0717 01:20:30.800103   42204 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0717 01:20:30.800110   42204 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0717 01:20:30.800118   42204 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0717 01:20:30.800127   42204 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0717 01:20:30.800132   42204 command_runner.go:130] > # log_size_max = -1
	I0717 01:20:30.800139   42204 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0717 01:20:30.800148   42204 command_runner.go:130] > # log_to_journald = false
	I0717 01:20:30.800156   42204 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0717 01:20:30.800161   42204 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0717 01:20:30.800168   42204 command_runner.go:130] > # Path to directory for container attach sockets.
	I0717 01:20:30.800173   42204 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0717 01:20:30.800180   42204 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0717 01:20:30.800186   42204 command_runner.go:130] > # bind_mount_prefix = ""
	I0717 01:20:30.800191   42204 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0717 01:20:30.800197   42204 command_runner.go:130] > # read_only = false
	I0717 01:20:30.800203   42204 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0717 01:20:30.800211   42204 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0717 01:20:30.800218   42204 command_runner.go:130] > # live configuration reload.
	I0717 01:20:30.800222   42204 command_runner.go:130] > # log_level = "info"
	I0717 01:20:30.800229   42204 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0717 01:20:30.800234   42204 command_runner.go:130] > # This option supports live configuration reload.
	I0717 01:20:30.800240   42204 command_runner.go:130] > # log_filter = ""
	I0717 01:20:30.800245   42204 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0717 01:20:30.800254   42204 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0717 01:20:30.800260   42204 command_runner.go:130] > # separated by comma.
	I0717 01:20:30.800268   42204 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 01:20:30.800274   42204 command_runner.go:130] > # uid_mappings = ""
	I0717 01:20:30.800280   42204 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0717 01:20:30.800287   42204 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0717 01:20:30.800293   42204 command_runner.go:130] > # separated by comma.
	I0717 01:20:30.800300   42204 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 01:20:30.800306   42204 command_runner.go:130] > # gid_mappings = ""
	I0717 01:20:30.800312   42204 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0717 01:20:30.800320   42204 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 01:20:30.800326   42204 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 01:20:30.800333   42204 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 01:20:30.800339   42204 command_runner.go:130] > # minimum_mappable_uid = -1
	I0717 01:20:30.800345   42204 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0717 01:20:30.800353   42204 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 01:20:30.800362   42204 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 01:20:30.800371   42204 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 01:20:30.800379   42204 command_runner.go:130] > # minimum_mappable_gid = -1
	I0717 01:20:30.800387   42204 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0717 01:20:30.800393   42204 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0717 01:20:30.800401   42204 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0717 01:20:30.800407   42204 command_runner.go:130] > # ctr_stop_timeout = 30
	I0717 01:20:30.800412   42204 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0717 01:20:30.800418   42204 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0717 01:20:30.800424   42204 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0717 01:20:30.800429   42204 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0717 01:20:30.800435   42204 command_runner.go:130] > drop_infra_ctr = false
	I0717 01:20:30.800440   42204 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0717 01:20:30.800448   42204 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0717 01:20:30.800456   42204 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0717 01:20:30.800462   42204 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0717 01:20:30.800469   42204 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0717 01:20:30.800477   42204 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0717 01:20:30.800484   42204 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0717 01:20:30.800492   42204 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0717 01:20:30.800495   42204 command_runner.go:130] > # shared_cpuset = ""
	I0717 01:20:30.800501   42204 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0717 01:20:30.800508   42204 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0717 01:20:30.800512   42204 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0717 01:20:30.800521   42204 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0717 01:20:30.800525   42204 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0717 01:20:30.800532   42204 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0717 01:20:30.800538   42204 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0717 01:20:30.800545   42204 command_runner.go:130] > # enable_criu_support = false
	I0717 01:20:30.800550   42204 command_runner.go:130] > # Enable/disable the generation of the container,
	I0717 01:20:30.800558   42204 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0717 01:20:30.800562   42204 command_runner.go:130] > # enable_pod_events = false
	I0717 01:20:30.800568   42204 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0717 01:20:30.800581   42204 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0717 01:20:30.800585   42204 command_runner.go:130] > # default_runtime = "runc"
	I0717 01:20:30.800590   42204 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0717 01:20:30.800598   42204 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0717 01:20:30.800608   42204 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0717 01:20:30.800622   42204 command_runner.go:130] > # creation as a file is not desired either.
	I0717 01:20:30.800631   42204 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0717 01:20:30.800638   42204 command_runner.go:130] > # the hostname is being managed dynamically.
	I0717 01:20:30.800643   42204 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0717 01:20:30.800648   42204 command_runner.go:130] > # ]
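As a hedged sketch (not part of this run's config), the option above populated with the /etc/hostname example from the comment:

	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]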
	I0717 01:20:30.800654   42204 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0717 01:20:30.800660   42204 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0717 01:20:30.800667   42204 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0717 01:20:30.800673   42204 command_runner.go:130] > # Each entry in the table should follow the format:
	I0717 01:20:30.800678   42204 command_runner.go:130] > #
	I0717 01:20:30.800682   42204 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0717 01:20:30.800689   42204 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0717 01:20:30.800707   42204 command_runner.go:130] > # runtime_type = "oci"
	I0717 01:20:30.800713   42204 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0717 01:20:30.800718   42204 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0717 01:20:30.800724   42204 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0717 01:20:30.800729   42204 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0717 01:20:30.800733   42204 command_runner.go:130] > # monitor_env = []
	I0717 01:20:30.800737   42204 command_runner.go:130] > # privileged_without_host_devices = false
	I0717 01:20:30.800741   42204 command_runner.go:130] > # allowed_annotations = []
	I0717 01:20:30.800748   42204 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0717 01:20:30.800753   42204 command_runner.go:130] > # Where:
	I0717 01:20:30.800758   42204 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0717 01:20:30.800766   42204 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0717 01:20:30.800772   42204 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0717 01:20:30.800779   42204 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0717 01:20:30.800783   42204 command_runner.go:130] > #   in $PATH.
	I0717 01:20:30.800789   42204 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0717 01:20:30.800796   42204 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0717 01:20:30.800802   42204 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0717 01:20:30.800807   42204 command_runner.go:130] > #   state.
	I0717 01:20:30.800813   42204 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0717 01:20:30.800821   42204 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0717 01:20:30.800827   42204 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0717 01:20:30.800835   42204 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0717 01:20:30.800840   42204 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0717 01:20:30.800849   42204 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0717 01:20:30.800855   42204 command_runner.go:130] > #   The currently recognized values are:
	I0717 01:20:30.800864   42204 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0717 01:20:30.800870   42204 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0717 01:20:30.800878   42204 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0717 01:20:30.800884   42204 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0717 01:20:30.800893   42204 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0717 01:20:30.800901   42204 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0717 01:20:30.800908   42204 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0717 01:20:30.800916   42204 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0717 01:20:30.800922   42204 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0717 01:20:30.800931   42204 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0717 01:20:30.800935   42204 command_runner.go:130] > #   deprecated option "conmon".
	I0717 01:20:30.800942   42204 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0717 01:20:30.800949   42204 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0717 01:20:30.800956   42204 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0717 01:20:30.800962   42204 command_runner.go:130] > #   should be moved to the container's cgroup
	I0717 01:20:30.800969   42204 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0717 01:20:30.800975   42204 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0717 01:20:30.800982   42204 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0717 01:20:30.800989   42204 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0717 01:20:30.800992   42204 command_runner.go:130] > #
	I0717 01:20:30.800999   42204 command_runner.go:130] > # Using the seccomp notifier feature:
	I0717 01:20:30.801002   42204 command_runner.go:130] > #
	I0717 01:20:30.801007   42204 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0717 01:20:30.801014   42204 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0717 01:20:30.801017   42204 command_runner.go:130] > #
	I0717 01:20:30.801023   42204 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0717 01:20:30.801030   42204 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0717 01:20:30.801033   42204 command_runner.go:130] > #
	I0717 01:20:30.801039   42204 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0717 01:20:30.801043   42204 command_runner.go:130] > # feature.
	I0717 01:20:30.801046   42204 command_runner.go:130] > #
	I0717 01:20:30.801054   42204 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0717 01:20:30.801060   42204 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0717 01:20:30.801066   42204 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0717 01:20:30.801074   42204 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0717 01:20:30.801081   42204 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0717 01:20:30.801084   42204 command_runner.go:130] > #
	I0717 01:20:30.801090   42204 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0717 01:20:30.801097   42204 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0717 01:20:30.801100   42204 command_runner.go:130] > #
	I0717 01:20:30.801106   42204 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0717 01:20:30.801112   42204 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0717 01:20:30.801115   42204 command_runner.go:130] > #
	I0717 01:20:30.801121   42204 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0717 01:20:30.801130   42204 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0717 01:20:30.801134   42204 command_runner.go:130] > # limitation.
	I0717 01:20:30.801141   42204 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0717 01:20:30.801145   42204 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0717 01:20:30.801152   42204 command_runner.go:130] > runtime_type = "oci"
	I0717 01:20:30.801156   42204 command_runner.go:130] > runtime_root = "/run/runc"
	I0717 01:20:30.801160   42204 command_runner.go:130] > runtime_config_path = ""
	I0717 01:20:30.801167   42204 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0717 01:20:30.801171   42204 command_runner.go:130] > monitor_cgroup = "pod"
	I0717 01:20:30.801174   42204 command_runner.go:130] > monitor_exec_cgroup = ""
	I0717 01:20:30.801178   42204 command_runner.go:130] > monitor_env = [
	I0717 01:20:30.801184   42204 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0717 01:20:30.801189   42204 command_runner.go:130] > ]
	I0717 01:20:30.801194   42204 command_runner.go:130] > privileged_without_host_devices = false
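A hypothetical second runtime handler, sketched per the [crio.runtime.runtimes.runtime-handler] format documented above; the crun paths and the allowed_annotations choice are assumptions, not values from this run:

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	# Allow pods using this handler to opt into the seccomp notifier feature described above.
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]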
	I0717 01:20:30.801200   42204 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0717 01:20:30.801207   42204 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0717 01:20:30.801213   42204 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0717 01:20:30.801222   42204 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0717 01:20:30.801229   42204 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0717 01:20:30.801236   42204 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0717 01:20:30.801245   42204 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0717 01:20:30.801254   42204 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0717 01:20:30.801260   42204 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0717 01:20:30.801266   42204 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0717 01:20:30.801269   42204 command_runner.go:130] > # Example:
	I0717 01:20:30.801273   42204 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0717 01:20:30.801278   42204 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0717 01:20:30.801282   42204 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0717 01:20:30.801289   42204 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0717 01:20:30.801292   42204 command_runner.go:130] > # cpuset = 0
	I0717 01:20:30.801296   42204 command_runner.go:130] > # cpushares = "0-1"
	I0717 01:20:30.801299   42204 command_runner.go:130] > # Where:
	I0717 01:20:30.801303   42204 command_runner.go:130] > # The workload name is workload-type.
	I0717 01:20:30.801309   42204 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0717 01:20:30.801314   42204 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0717 01:20:30.801319   42204 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0717 01:20:30.801329   42204 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0717 01:20:30.801335   42204 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
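Tying the workload pieces together, a minimal sketch of one complete, hypothetical workload entry; the name, annotation values, and resource defaults are assumptions, and the exact value types should be checked against the CRI-O documentation:

	[crio.runtime.workloads.low-priority]
	activation_annotation = "io.crio/low-priority"
	annotation_prefix = "io.crio.low-priority"
	[crio.runtime.workloads.low-priority.resources]
	cpushares = 512
	cpuset = "0"

A pod opts in by carrying the activation annotation (key only), and per-container overrides use the annotation_prefix form shown in the comment above.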
	I0717 01:20:30.801340   42204 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0717 01:20:30.801348   42204 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0717 01:20:30.801353   42204 command_runner.go:130] > # Default value is set to true
	I0717 01:20:30.801359   42204 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0717 01:20:30.801365   42204 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0717 01:20:30.801370   42204 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0717 01:20:30.801374   42204 command_runner.go:130] > # Default value is set to 'false'
	I0717 01:20:30.801379   42204 command_runner.go:130] > # disable_hostport_mapping = false
	I0717 01:20:30.801384   42204 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0717 01:20:30.801389   42204 command_runner.go:130] > #
	I0717 01:20:30.801395   42204 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0717 01:20:30.801403   42204 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0717 01:20:30.801409   42204 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0717 01:20:30.801417   42204 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0717 01:20:30.801422   42204 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0717 01:20:30.801428   42204 command_runner.go:130] > [crio.image]
	I0717 01:20:30.801434   42204 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0717 01:20:30.801440   42204 command_runner.go:130] > # default_transport = "docker://"
	I0717 01:20:30.801446   42204 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0717 01:20:30.801454   42204 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0717 01:20:30.801461   42204 command_runner.go:130] > # global_auth_file = ""
	I0717 01:20:30.801466   42204 command_runner.go:130] > # The image used to instantiate infra containers.
	I0717 01:20:30.801473   42204 command_runner.go:130] > # This option supports live configuration reload.
	I0717 01:20:30.801477   42204 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0717 01:20:30.801485   42204 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0717 01:20:30.801493   42204 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0717 01:20:30.801499   42204 command_runner.go:130] > # This option supports live configuration reload.
	I0717 01:20:30.801505   42204 command_runner.go:130] > # pause_image_auth_file = ""
	I0717 01:20:30.801513   42204 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0717 01:20:30.801520   42204 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0717 01:20:30.801526   42204 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0717 01:20:30.801533   42204 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0717 01:20:30.801540   42204 command_runner.go:130] > # pause_command = "/pause"
	I0717 01:20:30.801546   42204 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0717 01:20:30.801554   42204 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0717 01:20:30.801562   42204 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0717 01:20:30.801571   42204 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0717 01:20:30.801579   42204 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0717 01:20:30.801585   42204 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0717 01:20:30.801591   42204 command_runner.go:130] > # pinned_images = [
	I0717 01:20:30.801594   42204 command_runner.go:130] > # ]
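For reference, a hedged sketch of pinned_images showing the three pattern styles the comment describes, exact, glob (trailing *), and keyword (wildcards on both ends); the image names are placeholders and nothing is pinned in this run:

	pinned_images = [
		"registry.k8s.io/pause:3.9",
		"registry.k8s.io/kube-*",
		"*coredns*",
	]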
	I0717 01:20:30.801602   42204 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0717 01:20:30.801609   42204 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0717 01:20:30.801617   42204 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0717 01:20:30.801626   42204 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0717 01:20:30.801633   42204 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0717 01:20:30.801637   42204 command_runner.go:130] > # signature_policy = ""
	I0717 01:20:30.801644   42204 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0717 01:20:30.801651   42204 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0717 01:20:30.801659   42204 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0717 01:20:30.801665   42204 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0717 01:20:30.801671   42204 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0717 01:20:30.801677   42204 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0717 01:20:30.801683   42204 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0717 01:20:30.801691   42204 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0717 01:20:30.801696   42204 command_runner.go:130] > # changing them here.
	I0717 01:20:30.801700   42204 command_runner.go:130] > # insecure_registries = [
	I0717 01:20:30.801705   42204 command_runner.go:130] > # ]
	I0717 01:20:30.801710   42204 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0717 01:20:30.801717   42204 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0717 01:20:30.801721   42204 command_runner.go:130] > # image_volumes = "mkdir"
	I0717 01:20:30.801728   42204 command_runner.go:130] > # Temporary directory to use for storing big files
	I0717 01:20:30.801732   42204 command_runner.go:130] > # big_files_temporary_dir = ""
	I0717 01:20:30.801743   42204 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0717 01:20:30.801749   42204 command_runner.go:130] > # CNI plugins.
	I0717 01:20:30.801753   42204 command_runner.go:130] > [crio.network]
	I0717 01:20:30.801759   42204 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0717 01:20:30.801767   42204 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0717 01:20:30.801773   42204 command_runner.go:130] > # cni_default_network = ""
	I0717 01:20:30.801779   42204 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0717 01:20:30.801785   42204 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0717 01:20:30.801791   42204 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0717 01:20:30.801796   42204 command_runner.go:130] > # plugin_dirs = [
	I0717 01:20:30.801800   42204 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0717 01:20:30.801805   42204 command_runner.go:130] > # ]
	I0717 01:20:30.801811   42204 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0717 01:20:30.801817   42204 command_runner.go:130] > [crio.metrics]
	I0717 01:20:30.801822   42204 command_runner.go:130] > # Globally enable or disable metrics support.
	I0717 01:20:30.801828   42204 command_runner.go:130] > enable_metrics = true
	I0717 01:20:30.801832   42204 command_runner.go:130] > # Specify enabled metrics collectors.
	I0717 01:20:30.801838   42204 command_runner.go:130] > # Per default all metrics are enabled.
	I0717 01:20:30.801844   42204 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0717 01:20:30.801852   42204 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0717 01:20:30.801860   42204 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0717 01:20:30.801864   42204 command_runner.go:130] > # metrics_collectors = [
	I0717 01:20:30.801869   42204 command_runner.go:130] > # 	"operations",
	I0717 01:20:30.801874   42204 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0717 01:20:30.801880   42204 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0717 01:20:30.801884   42204 command_runner.go:130] > # 	"operations_errors",
	I0717 01:20:30.801890   42204 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0717 01:20:30.801894   42204 command_runner.go:130] > # 	"image_pulls_by_name",
	I0717 01:20:30.801901   42204 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0717 01:20:30.801905   42204 command_runner.go:130] > # 	"image_pulls_failures",
	I0717 01:20:30.801912   42204 command_runner.go:130] > # 	"image_pulls_successes",
	I0717 01:20:30.801915   42204 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0717 01:20:30.801919   42204 command_runner.go:130] > # 	"image_layer_reuse",
	I0717 01:20:30.801926   42204 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0717 01:20:30.801930   42204 command_runner.go:130] > # 	"containers_oom_total",
	I0717 01:20:30.801935   42204 command_runner.go:130] > # 	"containers_oom",
	I0717 01:20:30.801939   42204 command_runner.go:130] > # 	"processes_defunct",
	I0717 01:20:30.801944   42204 command_runner.go:130] > # 	"operations_total",
	I0717 01:20:30.801949   42204 command_runner.go:130] > # 	"operations_latency_seconds",
	I0717 01:20:30.801955   42204 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0717 01:20:30.801960   42204 command_runner.go:130] > # 	"operations_errors_total",
	I0717 01:20:30.801966   42204 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0717 01:20:30.801971   42204 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0717 01:20:30.801977   42204 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0717 01:20:30.801981   42204 command_runner.go:130] > # 	"image_pulls_success_total",
	I0717 01:20:30.801990   42204 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0717 01:20:30.801998   42204 command_runner.go:130] > # 	"containers_oom_count_total",
	I0717 01:20:30.802003   42204 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0717 01:20:30.802007   42204 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0717 01:20:30.802010   42204 command_runner.go:130] > # ]
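A minimal sketch of an explicit metrics_collectors list, restricting collection to a few of the collectors enumerated above (the selection is illustrative only; this run leaves the option commented out, so all collectors stay enabled):

	metrics_collectors = [
		"operations",
		"image_pulls_failures",
		"containers_oom_total",
	]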
	I0717 01:20:30.802015   42204 command_runner.go:130] > # The port on which the metrics server will listen.
	I0717 01:20:30.802021   42204 command_runner.go:130] > # metrics_port = 9090
	I0717 01:20:30.802026   42204 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0717 01:20:30.802029   42204 command_runner.go:130] > # metrics_socket = ""
	I0717 01:20:30.802034   42204 command_runner.go:130] > # The certificate for the secure metrics server.
	I0717 01:20:30.802041   42204 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0717 01:20:30.802046   42204 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0717 01:20:30.802053   42204 command_runner.go:130] > # certificate on any modification event.
	I0717 01:20:30.802057   42204 command_runner.go:130] > # metrics_cert = ""
	I0717 01:20:30.802063   42204 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0717 01:20:30.802068   42204 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0717 01:20:30.802073   42204 command_runner.go:130] > # metrics_key = ""
	I0717 01:20:30.802079   42204 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0717 01:20:30.802084   42204 command_runner.go:130] > [crio.tracing]
	I0717 01:20:30.802089   42204 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0717 01:20:30.802095   42204 command_runner.go:130] > # enable_tracing = false
	I0717 01:20:30.802100   42204 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0717 01:20:30.802104   42204 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0717 01:20:30.802113   42204 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0717 01:20:30.802118   42204 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0717 01:20:30.802122   42204 command_runner.go:130] > # CRI-O NRI configuration.
	I0717 01:20:30.802128   42204 command_runner.go:130] > [crio.nri]
	I0717 01:20:30.802132   42204 command_runner.go:130] > # Globally enable or disable NRI.
	I0717 01:20:30.802138   42204 command_runner.go:130] > # enable_nri = false
	I0717 01:20:30.802142   42204 command_runner.go:130] > # NRI socket to listen on.
	I0717 01:20:30.802149   42204 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0717 01:20:30.802153   42204 command_runner.go:130] > # NRI plugin directory to use.
	I0717 01:20:30.802158   42204 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0717 01:20:30.802165   42204 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0717 01:20:30.802170   42204 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0717 01:20:30.802178   42204 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0717 01:20:30.802185   42204 command_runner.go:130] > # nri_disable_connections = false
	I0717 01:20:30.802190   42204 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0717 01:20:30.802196   42204 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0717 01:20:30.802201   42204 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0717 01:20:30.802208   42204 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0717 01:20:30.802214   42204 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0717 01:20:30.802220   42204 command_runner.go:130] > [crio.stats]
	I0717 01:20:30.802227   42204 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0717 01:20:30.802235   42204 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0717 01:20:30.802239   42204 command_runner.go:130] > # stats_collection_period = 0
	I0717 01:20:30.802331   42204 cni.go:84] Creating CNI manager for ""
	I0717 01:20:30.802341   42204 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0717 01:20:30.802349   42204 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:20:30.802374   42204 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.81 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-025900 NodeName:multinode-025900 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:20:30.802499   42204 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-025900"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.81
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.81"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:20:30.802582   42204 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 01:20:30.813056   42204 command_runner.go:130] > kubeadm
	I0717 01:20:30.813076   42204 command_runner.go:130] > kubectl
	I0717 01:20:30.813082   42204 command_runner.go:130] > kubelet
	I0717 01:20:30.813194   42204 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:20:30.813243   42204 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:20:30.823013   42204 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0717 01:20:30.840213   42204 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:20:30.857351   42204 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0717 01:20:30.874806   42204 ssh_runner.go:195] Run: grep 192.168.39.81	control-plane.minikube.internal$ /etc/hosts
	I0717 01:20:30.878627   42204 command_runner.go:130] > 192.168.39.81	control-plane.minikube.internal
	I0717 01:20:30.878795   42204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:20:31.020748   42204 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:20:31.036441   42204 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/multinode-025900 for IP: 192.168.39.81
	I0717 01:20:31.036468   42204 certs.go:194] generating shared ca certs ...
	I0717 01:20:31.036489   42204 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:20:31.036655   42204 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:20:31.036695   42204 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:20:31.036707   42204 certs.go:256] generating profile certs ...
	I0717 01:20:31.036797   42204 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/multinode-025900/client.key
	I0717 01:20:31.036861   42204 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/multinode-025900/apiserver.key.8d5dc9e3
	I0717 01:20:31.036894   42204 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/multinode-025900/proxy-client.key
	I0717 01:20:31.036904   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 01:20:31.036917   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 01:20:31.036930   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 01:20:31.036950   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 01:20:31.036962   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/multinode-025900/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 01:20:31.036979   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/multinode-025900/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 01:20:31.036997   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/multinode-025900/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 01:20:31.037014   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/multinode-025900/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 01:20:31.037087   42204 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:20:31.037117   42204 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:20:31.037126   42204 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:20:31.037147   42204 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:20:31.037168   42204 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:20:31.037190   42204 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:20:31.037224   42204 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:20:31.037248   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:20:31.037262   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem -> /usr/share/ca-certificates/11259.pem
	I0717 01:20:31.037273   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> /usr/share/ca-certificates/112592.pem
	I0717 01:20:31.037775   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:20:31.064090   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:20:31.090542   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:20:31.116251   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:20:31.140711   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/multinode-025900/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 01:20:31.164856   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/multinode-025900/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:20:31.188861   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/multinode-025900/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:20:31.213583   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/multinode-025900/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 01:20:31.238396   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:20:31.261949   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:20:31.285860   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:20:31.310140   42204 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:20:31.327300   42204 ssh_runner.go:195] Run: openssl version
	I0717 01:20:31.333664   42204 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0717 01:20:31.333780   42204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:20:31.345376   42204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:20:31.349784   42204 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:20:31.349929   42204 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:20:31.349976   42204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:20:31.355819   42204 command_runner.go:130] > b5213941
	I0717 01:20:31.355991   42204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:20:31.365918   42204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:20:31.377802   42204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:20:31.382529   42204 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:20:31.382576   42204 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:20:31.382637   42204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:20:31.388480   42204 command_runner.go:130] > 51391683
	I0717 01:20:31.388766   42204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:20:31.398637   42204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:20:31.409979   42204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:20:31.414470   42204 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:20:31.414539   42204 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:20:31.414605   42204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:20:31.420263   42204 command_runner.go:130] > 3ec20f2e
	I0717 01:20:31.420436   42204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
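The hash-and-symlink commands above follow the standard OpenSSL CA directory layout, where each trusted certificate is reachable as /etc/ssl/certs/<subject-hash>.0. Below is a rough standalone sketch of the same idea, shelling out to openssl just as the log does; the certificate path and the assumption that the program runs as root (like the sudo commands above) are illustrative, not taken from minikube's code.

// install_ca.go - hedged sketch: compute the OpenSSL subject hash of a CA
// certificate and create the <hash>.0 symlink in /etc/ssl/certs, mirroring
// the "openssl x509 -hash -noout" + "ln -fs" steps in the log above.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // assumed location, as in the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any stale symlink; needs root, like the sudo invocations above.
	_ = os.Remove(link)
	if err := os.Symlink(pemPath, link); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("linked %s -> %s\n", link, pemPath)
}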
	I0717 01:20:31.430096   42204 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:20:31.434880   42204 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:20:31.434901   42204 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0717 01:20:31.434908   42204 command_runner.go:130] > Device: 253,1	Inode: 8386581     Links: 1
	I0717 01:20:31.434914   42204 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 01:20:31.434923   42204 command_runner.go:130] > Access: 2024-07-17 01:13:30.823315093 +0000
	I0717 01:20:31.434931   42204 command_runner.go:130] > Modify: 2024-07-17 01:13:30.823315093 +0000
	I0717 01:20:31.434938   42204 command_runner.go:130] > Change: 2024-07-17 01:13:30.823315093 +0000
	I0717 01:20:31.434950   42204 command_runner.go:130] >  Birth: 2024-07-17 01:13:30.823315093 +0000
	I0717 01:20:31.435016   42204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:20:31.440658   42204 command_runner.go:130] > Certificate will not expire
	I0717 01:20:31.440723   42204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:20:31.446130   42204 command_runner.go:130] > Certificate will not expire
	I0717 01:20:31.446335   42204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:20:31.452921   42204 command_runner.go:130] > Certificate will not expire
	I0717 01:20:31.453068   42204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:20:31.458989   42204 command_runner.go:130] > Certificate will not expire
	I0717 01:20:31.459055   42204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:20:31.464853   42204 command_runner.go:130] > Certificate will not expire
	I0717 01:20:31.464897   42204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 01:20:31.470386   42204 command_runner.go:130] > Certificate will not expire
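Each "openssl x509 -checkend 86400" call above asks whether a certificate expires within the next 24 hours. An equivalent check can be written against Go's standard crypto/x509 package; the sketch below takes a certificate path as its first argument and is an illustration only, not minikube's implementation.

// checkend.go - report whether a PEM certificate expires within 24 hours,
// mirroring "openssl x509 -noout -in <cert> -checkend 86400".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: checkend <cert.pem>")
	}
	data, err := os.ReadFile(os.Args[1]) // e.g. apiserver-kubelet-client.crt
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}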
	I0717 01:20:31.470512   42204 kubeadm.go:392] StartCluster: {Name:multinode-025900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:multinode-025900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.68 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:20:31.470639   42204 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:20:31.470689   42204 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:20:31.505756   42204 command_runner.go:130] > 09725ffcca26633bf33d602cbf5f624ea598f95ce65d67180f6b62ccb6d063b8
	I0717 01:20:31.505789   42204 command_runner.go:130] > 3ebdfeb8ae5c7b85a35e497aa249c67f9cc5dc9efaf0e5f5826e727e739cd18d
	I0717 01:20:31.505809   42204 command_runner.go:130] > fee30035a5397ff83bf2c0b6a33399f0e7da88a1229f0a8fe067dbfb6a07b779
	I0717 01:20:31.505816   42204 command_runner.go:130] > 02159611beb77b72f88f037018edb8c4dcba98bc873042b2348f49a974b4253e
	I0717 01:20:31.505821   42204 command_runner.go:130] > f7539247491f8361cb2a800e0adcc8d311eed5f77dc65eddeff7ffc89beffb31
	I0717 01:20:31.505826   42204 command_runner.go:130] > 1dac5d8c8d8c1661aaa39b63c3c2d92f556e2d0280a8cf364f7a3beff50a95e2
	I0717 01:20:31.505831   42204 command_runner.go:130] > 1925767a2697d3fb947baaba421e367c09374b379e84ddb0b39bda48f06a2a71
	I0717 01:20:31.505837   42204 command_runner.go:130] > d35d12e08dd5f99d8307485dda69263c87c063b3f3f6879c55b60bc5db183994
	I0717 01:20:31.507116   42204 cri.go:89] found id: "09725ffcca26633bf33d602cbf5f624ea598f95ce65d67180f6b62ccb6d063b8"
	I0717 01:20:31.507129   42204 cri.go:89] found id: "3ebdfeb8ae5c7b85a35e497aa249c67f9cc5dc9efaf0e5f5826e727e739cd18d"
	I0717 01:20:31.507133   42204 cri.go:89] found id: "fee30035a5397ff83bf2c0b6a33399f0e7da88a1229f0a8fe067dbfb6a07b779"
	I0717 01:20:31.507136   42204 cri.go:89] found id: "02159611beb77b72f88f037018edb8c4dcba98bc873042b2348f49a974b4253e"
	I0717 01:20:31.507138   42204 cri.go:89] found id: "f7539247491f8361cb2a800e0adcc8d311eed5f77dc65eddeff7ffc89beffb31"
	I0717 01:20:31.507141   42204 cri.go:89] found id: "1dac5d8c8d8c1661aaa39b63c3c2d92f556e2d0280a8cf364f7a3beff50a95e2"
	I0717 01:20:31.507143   42204 cri.go:89] found id: "1925767a2697d3fb947baaba421e367c09374b379e84ddb0b39bda48f06a2a71"
	I0717 01:20:31.507146   42204 cri.go:89] found id: "d35d12e08dd5f99d8307485dda69263c87c063b3f3f6879c55b60bc5db183994"
	I0717 01:20:31.507148   42204 cri.go:89] found id: ""
	I0717 01:20:31.507200   42204 ssh_runner.go:195] Run: sudo runc list -f json
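Before restarting the cluster, minikube enumerates the existing kube-system containers via crictl (the cri.go lines above) so it can stop and recreate them. A simplified standalone sketch of that step is shown below; it shells out to the same crictl command seen in the log and is an illustration, not the actual cri.go implementation.

// list_kube_system.go - list container IDs in the kube-system namespace,
// one per line, by running the same crictl query as the log above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatal(err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
	for _, id := range ids {
		fmt.Println(id)
	}
}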
	
	
	==> CRI-O <==
	Jul 17 01:22:18 multinode-025900 crio[2861]: time="2024-07-17 01:22:18.920273702Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721179338920248094,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e087286f-7128-48ac-aef9-6eb7273b9c7e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:22:18 multinode-025900 crio[2861]: time="2024-07-17 01:22:18.920920499Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c85f402-7a5b-4617-a62a-52b95fddfd16 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:22:18 multinode-025900 crio[2861]: time="2024-07-17 01:22:18.921036316Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c85f402-7a5b-4617-a62a-52b95fddfd16 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:22:18 multinode-025900 crio[2861]: time="2024-07-17 01:22:18.921390594Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4b91a7f92bd2fab7abc5dd20ef35cc488727485a19ef84870f125c6900538b1,PodSandboxId:8f4b5614ab6e635cea43b197b327cbb9231cb3d101d191e13116bbc0ee1d1114,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721179271593826847,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mn98f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e227e80-de5e-4cc4-9c10-c4072dfb0ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 342c1059,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:452c33ce626336d2cbcd0200ceeb7fb464edcbbe4d98a471a5ada1e15c5dbf7e,PodSandboxId:6331990a218a184e85a8322811b5e7d10082cecb13938a7db44012a4914969a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_RUNNING,CreatedAt:1721179238016431494,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-97pxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf14e761-3074-4396-9730-f5dd63d79c1c,},Annotations:map[string]string{io.kubernetes.container.hash: 787a85a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1178a3606a57c649c0f34b3a539ae72cfdaced76f1baca5b6b08b8493427335c,PodSandboxId:6fad3987071e446c4850c541a9edbf67d3b7b00008e4a2393bc38a3bcf229748,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721179237928657323,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4xjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03f7291c-a0c7-42ce-b786-dc71e57b7792,},Annotations:map[string]string{io.kubernetes.container.hash: b78da48,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f7408f53ed54301d1ac909f70778685b3042b8f99f9ca891315130401239235,PodSandboxId:e4eff30722fe1c57b83b34507c6725ffd1c2065be392ce3e7e38d526ca3640ae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721179237851863073,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4qbwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0993395b-fc50-4564-b36e-83cc2a2113cf,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4d3642c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0b97863bc065363089a16f8ab329bc942107d351093a8a1cd48a377dbf1cf32,PodSandboxId:afd20e90b3502dd39dff3b56b6c4978650b72b92ab61a833b4806c4ccf616c4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721179237857532686,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df859607-80ac-43ae-a91c-d10ef995b6dc,},Annotations:map[string]string{io.kub
ernetes.container.hash: 27c35851,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a775f970094737d7af3e03852693d060924c7798b3121a5be1210cb187117137,PodSandboxId:12d81e4cf6ee75f7c3f501f74a6aeb46b9b4d78e3c598c79ac6a86f994817792,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721179234022727843,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f23774afdb04c8dd10644dbbae2e078b,},Annotations:map[string]string{io.kubernetes.container.hash: 49104185,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cc3d795f24816fab224839ea0e4f603d36a19e8f3b7b4a6bff27f7f2e32ee5,PodSandboxId:c35c8141a83adc5ba5ec15c3bc351ca26767958334210cbe036ad5bcae2e16f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721179234005390458,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb3d56003a29e1eb05a5107768912f8e,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a60bf9ba16ac61e0757355fbfca3e6bf4087417422c5e90eba9a6093060e0be0,PodSandboxId:7b21db87518f2d2c0acf5704263823bfc094b15d3020377262433e10bffe4642,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721179233972811787,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a800b9f41be042b50f12b352cf5787b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2af9c12a,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fefc78079ae534d983463192be87a831caf618e85aaa27deed9ecb982be3216,PodSandboxId:7b3b2d201398feb1445a2b65456e9754b2e66a4a4d0f666c7f08033312985f8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721179233945542834,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c9c7b99a7270f653fe2931cf5abd6c3,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:502dbe19e4152058e60707c6dfc875ca3187f80daa847ab208774f39d5df7037,PodSandboxId:148c1844263fa4291996fd06546ed76a4dc8bcd305ddec01f7a1e0ec8e8ef0d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721178909765522325,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mn98f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e227e80-de5e-4cc4-9c10-c4072dfb0ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 342c1059,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09725ffcca26633bf33d602cbf5f624ea598f95ce65d67180f6b62ccb6d063b8,PodSandboxId:08cf884b4c94ddd4e0ecf91bc231859f112239948e12c9c2c53f1a8afb0641dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721178850624937144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4xjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03f7291c-a0c7-42ce-b786-dc71e57b7792,},Annotations:map[string]string{io.kubernetes.container.hash: b78da48,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerP
ort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ebdfeb8ae5c7b85a35e497aa249c67f9cc5dc9efaf0e5f5826e727e739cd18d,PodSandboxId:45e8a55750e51f0a82a8688a98011d3bdad1a577d85b20f77c08255c77bb3080,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721178850558973294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: df859607-80ac-43ae-a91c-d10ef995b6dc,},Annotations:map[string]string{io.kubernetes.container.hash: 27c35851,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fee30035a5397ff83bf2c0b6a33399f0e7da88a1229f0a8fe067dbfb6a07b779,PodSandboxId:f4485605cb6b057913de77d2598ad7c1132d494b61d7be36429abb860f022e7e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_EXITED,CreatedAt:1721178838531620319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-97pxj,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: cf14e761-3074-4396-9730-f5dd63d79c1c,},Annotations:map[string]string{io.kubernetes.container.hash: 787a85a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02159611beb77b72f88f037018edb8c4dcba98bc873042b2348f49a974b4253e,PodSandboxId:609ddc457d2600a8a307931c800e315694698abe9654ba6fa096f7b1a59117f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721178833677729246,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4qbwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 0993395b-fc50-4564-b36e-83cc2a2113cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4d3642c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dac5d8c8d8c1661aaa39b63c3c2d92f556e2d0280a8cf364f7a3beff50a95e2,PodSandboxId:26b1f925c432f0dcfbb5bac0724a8d35261cfb567f21218c298fa532f93e4170,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721178814665534609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025900,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 0c9c7b99a7270f653fe2931cf5abd6c3,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7539247491f8361cb2a800e0adcc8d311eed5f77dc65eddeff7ffc89beffb31,PodSandboxId:80a245704d7e983897bf4d03765b9bec645a9821e01e6b46e4a4ab394b8c93d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721178814668736209,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: a800b9f41be042b50f12b352cf5787b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2af9c12a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d35d12e08dd5f99d8307485dda69263c87c063b3f3f6879c55b60bc5db183994,PodSandboxId:be47f38f9c9b81c7f8f1d028facde10189ab6f32400892697aba65b0c0dba416,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721178814660148682,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb3d56
003a29e1eb05a5107768912f8e,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1925767a2697d3fb947baaba421e367c09374b379e84ddb0b39bda48f06a2a71,PodSandboxId:36406e5752bc67f659c92e9244c3dc19aa05828ba343dba359963fd666e8bcc3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721178814663751962,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f23774afdb04c8dd10644dbbae2e078b,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 49104185,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0c85f402-7a5b-4617-a62a-52b95fddfd16 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:22:18 multinode-025900 crio[2861]: time="2024-07-17 01:22:18.967285377Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bd961dd0-52b0-4452-962e-ee762f08ac0f name=/runtime.v1.RuntimeService/Version
	Jul 17 01:22:18 multinode-025900 crio[2861]: time="2024-07-17 01:22:18.967640145Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bd961dd0-52b0-4452-962e-ee762f08ac0f name=/runtime.v1.RuntimeService/Version
	Jul 17 01:22:18 multinode-025900 crio[2861]: time="2024-07-17 01:22:18.969118386Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f90fc27c-a021-45aa-8e91-470ece0741db name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:22:18 multinode-025900 crio[2861]: time="2024-07-17 01:22:18.969565788Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721179338969543281,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f90fc27c-a021-45aa-8e91-470ece0741db name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:22:18 multinode-025900 crio[2861]: time="2024-07-17 01:22:18.970159955Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=388aefc0-301d-41a2-a689-e57b4f03ab89 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:22:18 multinode-025900 crio[2861]: time="2024-07-17 01:22:18.970231055Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=388aefc0-301d-41a2-a689-e57b4f03ab89 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:22:18 multinode-025900 crio[2861]: time="2024-07-17 01:22:18.970558545Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4b91a7f92bd2fab7abc5dd20ef35cc488727485a19ef84870f125c6900538b1,PodSandboxId:8f4b5614ab6e635cea43b197b327cbb9231cb3d101d191e13116bbc0ee1d1114,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721179271593826847,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mn98f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e227e80-de5e-4cc4-9c10-c4072dfb0ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 342c1059,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:452c33ce626336d2cbcd0200ceeb7fb464edcbbe4d98a471a5ada1e15c5dbf7e,PodSandboxId:6331990a218a184e85a8322811b5e7d10082cecb13938a7db44012a4914969a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_RUNNING,CreatedAt:1721179238016431494,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-97pxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf14e761-3074-4396-9730-f5dd63d79c1c,},Annotations:map[string]string{io.kubernetes.container.hash: 787a85a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1178a3606a57c649c0f34b3a539ae72cfdaced76f1baca5b6b08b8493427335c,PodSandboxId:6fad3987071e446c4850c541a9edbf67d3b7b00008e4a2393bc38a3bcf229748,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721179237928657323,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4xjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03f7291c-a0c7-42ce-b786-dc71e57b7792,},Annotations:map[string]string{io.kubernetes.container.hash: b78da48,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f7408f53ed54301d1ac909f70778685b3042b8f99f9ca891315130401239235,PodSandboxId:e4eff30722fe1c57b83b34507c6725ffd1c2065be392ce3e7e38d526ca3640ae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721179237851863073,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4qbwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0993395b-fc50-4564-b36e-83cc2a2113cf,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4d3642c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0b97863bc065363089a16f8ab329bc942107d351093a8a1cd48a377dbf1cf32,PodSandboxId:afd20e90b3502dd39dff3b56b6c4978650b72b92ab61a833b4806c4ccf616c4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721179237857532686,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df859607-80ac-43ae-a91c-d10ef995b6dc,},Annotations:map[string]string{io.kub
ernetes.container.hash: 27c35851,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a775f970094737d7af3e03852693d060924c7798b3121a5be1210cb187117137,PodSandboxId:12d81e4cf6ee75f7c3f501f74a6aeb46b9b4d78e3c598c79ac6a86f994817792,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721179234022727843,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f23774afdb04c8dd10644dbbae2e078b,},Annotations:map[string]string{io.kubernetes.container.hash: 49104185,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cc3d795f24816fab224839ea0e4f603d36a19e8f3b7b4a6bff27f7f2e32ee5,PodSandboxId:c35c8141a83adc5ba5ec15c3bc351ca26767958334210cbe036ad5bcae2e16f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721179234005390458,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb3d56003a29e1eb05a5107768912f8e,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a60bf9ba16ac61e0757355fbfca3e6bf4087417422c5e90eba9a6093060e0be0,PodSandboxId:7b21db87518f2d2c0acf5704263823bfc094b15d3020377262433e10bffe4642,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721179233972811787,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a800b9f41be042b50f12b352cf5787b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2af9c12a,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fefc78079ae534d983463192be87a831caf618e85aaa27deed9ecb982be3216,PodSandboxId:7b3b2d201398feb1445a2b65456e9754b2e66a4a4d0f666c7f08033312985f8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721179233945542834,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c9c7b99a7270f653fe2931cf5abd6c3,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:502dbe19e4152058e60707c6dfc875ca3187f80daa847ab208774f39d5df7037,PodSandboxId:148c1844263fa4291996fd06546ed76a4dc8bcd305ddec01f7a1e0ec8e8ef0d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721178909765522325,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mn98f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e227e80-de5e-4cc4-9c10-c4072dfb0ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 342c1059,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09725ffcca26633bf33d602cbf5f624ea598f95ce65d67180f6b62ccb6d063b8,PodSandboxId:08cf884b4c94ddd4e0ecf91bc231859f112239948e12c9c2c53f1a8afb0641dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721178850624937144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4xjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03f7291c-a0c7-42ce-b786-dc71e57b7792,},Annotations:map[string]string{io.kubernetes.container.hash: b78da48,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerP
ort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ebdfeb8ae5c7b85a35e497aa249c67f9cc5dc9efaf0e5f5826e727e739cd18d,PodSandboxId:45e8a55750e51f0a82a8688a98011d3bdad1a577d85b20f77c08255c77bb3080,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721178850558973294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: df859607-80ac-43ae-a91c-d10ef995b6dc,},Annotations:map[string]string{io.kubernetes.container.hash: 27c35851,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fee30035a5397ff83bf2c0b6a33399f0e7da88a1229f0a8fe067dbfb6a07b779,PodSandboxId:f4485605cb6b057913de77d2598ad7c1132d494b61d7be36429abb860f022e7e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_EXITED,CreatedAt:1721178838531620319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-97pxj,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: cf14e761-3074-4396-9730-f5dd63d79c1c,},Annotations:map[string]string{io.kubernetes.container.hash: 787a85a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02159611beb77b72f88f037018edb8c4dcba98bc873042b2348f49a974b4253e,PodSandboxId:609ddc457d2600a8a307931c800e315694698abe9654ba6fa096f7b1a59117f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721178833677729246,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4qbwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 0993395b-fc50-4564-b36e-83cc2a2113cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4d3642c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dac5d8c8d8c1661aaa39b63c3c2d92f556e2d0280a8cf364f7a3beff50a95e2,PodSandboxId:26b1f925c432f0dcfbb5bac0724a8d35261cfb567f21218c298fa532f93e4170,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721178814665534609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025900,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 0c9c7b99a7270f653fe2931cf5abd6c3,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7539247491f8361cb2a800e0adcc8d311eed5f77dc65eddeff7ffc89beffb31,PodSandboxId:80a245704d7e983897bf4d03765b9bec645a9821e01e6b46e4a4ab394b8c93d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721178814668736209,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: a800b9f41be042b50f12b352cf5787b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2af9c12a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d35d12e08dd5f99d8307485dda69263c87c063b3f3f6879c55b60bc5db183994,PodSandboxId:be47f38f9c9b81c7f8f1d028facde10189ab6f32400892697aba65b0c0dba416,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721178814660148682,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb3d56
003a29e1eb05a5107768912f8e,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1925767a2697d3fb947baaba421e367c09374b379e84ddb0b39bda48f06a2a71,PodSandboxId:36406e5752bc67f659c92e9244c3dc19aa05828ba343dba359963fd666e8bcc3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721178814663751962,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f23774afdb04c8dd10644dbbae2e078b,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 49104185,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=388aefc0-301d-41a2-a689-e57b4f03ab89 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:22:19 multinode-025900 crio[2861]: time="2024-07-17 01:22:19.013838777Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ea9887d7-532d-4995-a52f-41d801d85914 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:22:19 multinode-025900 crio[2861]: time="2024-07-17 01:22:19.013942664Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ea9887d7-532d-4995-a52f-41d801d85914 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:22:19 multinode-025900 crio[2861]: time="2024-07-17 01:22:19.015177328Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c8fde49e-1d8c-4922-ac7b-559f470fcfcc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:22:19 multinode-025900 crio[2861]: time="2024-07-17 01:22:19.015632863Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721179339015608320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8fde49e-1d8c-4922-ac7b-559f470fcfcc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:22:19 multinode-025900 crio[2861]: time="2024-07-17 01:22:19.016221978Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=85673cf8-b4ec-45d5-b209-bedbee2b099c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:22:19 multinode-025900 crio[2861]: time="2024-07-17 01:22:19.016307190Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=85673cf8-b4ec-45d5-b209-bedbee2b099c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:22:19 multinode-025900 crio[2861]: time="2024-07-17 01:22:19.016644569Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4b91a7f92bd2fab7abc5dd20ef35cc488727485a19ef84870f125c6900538b1,PodSandboxId:8f4b5614ab6e635cea43b197b327cbb9231cb3d101d191e13116bbc0ee1d1114,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721179271593826847,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mn98f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e227e80-de5e-4cc4-9c10-c4072dfb0ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 342c1059,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:452c33ce626336d2cbcd0200ceeb7fb464edcbbe4d98a471a5ada1e15c5dbf7e,PodSandboxId:6331990a218a184e85a8322811b5e7d10082cecb13938a7db44012a4914969a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_RUNNING,CreatedAt:1721179238016431494,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-97pxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf14e761-3074-4396-9730-f5dd63d79c1c,},Annotations:map[string]string{io.kubernetes.container.hash: 787a85a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1178a3606a57c649c0f34b3a539ae72cfdaced76f1baca5b6b08b8493427335c,PodSandboxId:6fad3987071e446c4850c541a9edbf67d3b7b00008e4a2393bc38a3bcf229748,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721179237928657323,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4xjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03f7291c-a0c7-42ce-b786-dc71e57b7792,},Annotations:map[string]string{io.kubernetes.container.hash: b78da48,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f7408f53ed54301d1ac909f70778685b3042b8f99f9ca891315130401239235,PodSandboxId:e4eff30722fe1c57b83b34507c6725ffd1c2065be392ce3e7e38d526ca3640ae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721179237851863073,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4qbwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0993395b-fc50-4564-b36e-83cc2a2113cf,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4d3642c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0b97863bc065363089a16f8ab329bc942107d351093a8a1cd48a377dbf1cf32,PodSandboxId:afd20e90b3502dd39dff3b56b6c4978650b72b92ab61a833b4806c4ccf616c4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721179237857532686,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df859607-80ac-43ae-a91c-d10ef995b6dc,},Annotations:map[string]string{io.kub
ernetes.container.hash: 27c35851,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a775f970094737d7af3e03852693d060924c7798b3121a5be1210cb187117137,PodSandboxId:12d81e4cf6ee75f7c3f501f74a6aeb46b9b4d78e3c598c79ac6a86f994817792,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721179234022727843,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f23774afdb04c8dd10644dbbae2e078b,},Annotations:map[string]string{io.kubernetes.container.hash: 49104185,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cc3d795f24816fab224839ea0e4f603d36a19e8f3b7b4a6bff27f7f2e32ee5,PodSandboxId:c35c8141a83adc5ba5ec15c3bc351ca26767958334210cbe036ad5bcae2e16f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721179234005390458,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb3d56003a29e1eb05a5107768912f8e,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a60bf9ba16ac61e0757355fbfca3e6bf4087417422c5e90eba9a6093060e0be0,PodSandboxId:7b21db87518f2d2c0acf5704263823bfc094b15d3020377262433e10bffe4642,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721179233972811787,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a800b9f41be042b50f12b352cf5787b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2af9c12a,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fefc78079ae534d983463192be87a831caf618e85aaa27deed9ecb982be3216,PodSandboxId:7b3b2d201398feb1445a2b65456e9754b2e66a4a4d0f666c7f08033312985f8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721179233945542834,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c9c7b99a7270f653fe2931cf5abd6c3,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:502dbe19e4152058e60707c6dfc875ca3187f80daa847ab208774f39d5df7037,PodSandboxId:148c1844263fa4291996fd06546ed76a4dc8bcd305ddec01f7a1e0ec8e8ef0d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721178909765522325,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mn98f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e227e80-de5e-4cc4-9c10-c4072dfb0ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 342c1059,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09725ffcca26633bf33d602cbf5f624ea598f95ce65d67180f6b62ccb6d063b8,PodSandboxId:08cf884b4c94ddd4e0ecf91bc231859f112239948e12c9c2c53f1a8afb0641dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721178850624937144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4xjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03f7291c-a0c7-42ce-b786-dc71e57b7792,},Annotations:map[string]string{io.kubernetes.container.hash: b78da48,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerP
ort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ebdfeb8ae5c7b85a35e497aa249c67f9cc5dc9efaf0e5f5826e727e739cd18d,PodSandboxId:45e8a55750e51f0a82a8688a98011d3bdad1a577d85b20f77c08255c77bb3080,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721178850558973294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: df859607-80ac-43ae-a91c-d10ef995b6dc,},Annotations:map[string]string{io.kubernetes.container.hash: 27c35851,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fee30035a5397ff83bf2c0b6a33399f0e7da88a1229f0a8fe067dbfb6a07b779,PodSandboxId:f4485605cb6b057913de77d2598ad7c1132d494b61d7be36429abb860f022e7e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_EXITED,CreatedAt:1721178838531620319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-97pxj,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: cf14e761-3074-4396-9730-f5dd63d79c1c,},Annotations:map[string]string{io.kubernetes.container.hash: 787a85a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02159611beb77b72f88f037018edb8c4dcba98bc873042b2348f49a974b4253e,PodSandboxId:609ddc457d2600a8a307931c800e315694698abe9654ba6fa096f7b1a59117f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721178833677729246,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4qbwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 0993395b-fc50-4564-b36e-83cc2a2113cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4d3642c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dac5d8c8d8c1661aaa39b63c3c2d92f556e2d0280a8cf364f7a3beff50a95e2,PodSandboxId:26b1f925c432f0dcfbb5bac0724a8d35261cfb567f21218c298fa532f93e4170,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721178814665534609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025900,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 0c9c7b99a7270f653fe2931cf5abd6c3,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7539247491f8361cb2a800e0adcc8d311eed5f77dc65eddeff7ffc89beffb31,PodSandboxId:80a245704d7e983897bf4d03765b9bec645a9821e01e6b46e4a4ab394b8c93d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721178814668736209,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: a800b9f41be042b50f12b352cf5787b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2af9c12a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d35d12e08dd5f99d8307485dda69263c87c063b3f3f6879c55b60bc5db183994,PodSandboxId:be47f38f9c9b81c7f8f1d028facde10189ab6f32400892697aba65b0c0dba416,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721178814660148682,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb3d56
003a29e1eb05a5107768912f8e,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1925767a2697d3fb947baaba421e367c09374b379e84ddb0b39bda48f06a2a71,PodSandboxId:36406e5752bc67f659c92e9244c3dc19aa05828ba343dba359963fd666e8bcc3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721178814663751962,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f23774afdb04c8dd10644dbbae2e078b,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 49104185,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=85673cf8-b4ec-45d5-b209-bedbee2b099c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:22:19 multinode-025900 crio[2861]: time="2024-07-17 01:22:19.065844900Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=19029ff3-517e-4408-ba02-b5cef25f3380 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:22:19 multinode-025900 crio[2861]: time="2024-07-17 01:22:19.065938483Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=19029ff3-517e-4408-ba02-b5cef25f3380 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:22:19 multinode-025900 crio[2861]: time="2024-07-17 01:22:19.067092645Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=72023469-fcb0-482f-b8fd-1da3fd3a3f23 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:22:19 multinode-025900 crio[2861]: time="2024-07-17 01:22:19.067511102Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721179339067489987,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=72023469-fcb0-482f-b8fd-1da3fd3a3f23 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:22:19 multinode-025900 crio[2861]: time="2024-07-17 01:22:19.068091267Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bfbf9f7e-1aba-47db-ac77-405b5eff8dbf name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:22:19 multinode-025900 crio[2861]: time="2024-07-17 01:22:19.068164060Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bfbf9f7e-1aba-47db-ac77-405b5eff8dbf name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:22:19 multinode-025900 crio[2861]: time="2024-07-17 01:22:19.068506097Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4b91a7f92bd2fab7abc5dd20ef35cc488727485a19ef84870f125c6900538b1,PodSandboxId:8f4b5614ab6e635cea43b197b327cbb9231cb3d101d191e13116bbc0ee1d1114,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721179271593826847,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mn98f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e227e80-de5e-4cc4-9c10-c4072dfb0ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 342c1059,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:452c33ce626336d2cbcd0200ceeb7fb464edcbbe4d98a471a5ada1e15c5dbf7e,PodSandboxId:6331990a218a184e85a8322811b5e7d10082cecb13938a7db44012a4914969a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_RUNNING,CreatedAt:1721179238016431494,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-97pxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf14e761-3074-4396-9730-f5dd63d79c1c,},Annotations:map[string]string{io.kubernetes.container.hash: 787a85a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1178a3606a57c649c0f34b3a539ae72cfdaced76f1baca5b6b08b8493427335c,PodSandboxId:6fad3987071e446c4850c541a9edbf67d3b7b00008e4a2393bc38a3bcf229748,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721179237928657323,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4xjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03f7291c-a0c7-42ce-b786-dc71e57b7792,},Annotations:map[string]string{io.kubernetes.container.hash: b78da48,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f7408f53ed54301d1ac909f70778685b3042b8f99f9ca891315130401239235,PodSandboxId:e4eff30722fe1c57b83b34507c6725ffd1c2065be392ce3e7e38d526ca3640ae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721179237851863073,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4qbwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0993395b-fc50-4564-b36e-83cc2a2113cf,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4d3642c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0b97863bc065363089a16f8ab329bc942107d351093a8a1cd48a377dbf1cf32,PodSandboxId:afd20e90b3502dd39dff3b56b6c4978650b72b92ab61a833b4806c4ccf616c4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721179237857532686,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df859607-80ac-43ae-a91c-d10ef995b6dc,},Annotations:map[string]string{io.kub
ernetes.container.hash: 27c35851,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a775f970094737d7af3e03852693d060924c7798b3121a5be1210cb187117137,PodSandboxId:12d81e4cf6ee75f7c3f501f74a6aeb46b9b4d78e3c598c79ac6a86f994817792,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721179234022727843,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f23774afdb04c8dd10644dbbae2e078b,},Annotations:map[string]string{io.kubernetes.container.hash: 49104185,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cc3d795f24816fab224839ea0e4f603d36a19e8f3b7b4a6bff27f7f2e32ee5,PodSandboxId:c35c8141a83adc5ba5ec15c3bc351ca26767958334210cbe036ad5bcae2e16f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721179234005390458,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb3d56003a29e1eb05a5107768912f8e,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a60bf9ba16ac61e0757355fbfca3e6bf4087417422c5e90eba9a6093060e0be0,PodSandboxId:7b21db87518f2d2c0acf5704263823bfc094b15d3020377262433e10bffe4642,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721179233972811787,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a800b9f41be042b50f12b352cf5787b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2af9c12a,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fefc78079ae534d983463192be87a831caf618e85aaa27deed9ecb982be3216,PodSandboxId:7b3b2d201398feb1445a2b65456e9754b2e66a4a4d0f666c7f08033312985f8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721179233945542834,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c9c7b99a7270f653fe2931cf5abd6c3,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:502dbe19e4152058e60707c6dfc875ca3187f80daa847ab208774f39d5df7037,PodSandboxId:148c1844263fa4291996fd06546ed76a4dc8bcd305ddec01f7a1e0ec8e8ef0d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721178909765522325,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mn98f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e227e80-de5e-4cc4-9c10-c4072dfb0ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 342c1059,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09725ffcca26633bf33d602cbf5f624ea598f95ce65d67180f6b62ccb6d063b8,PodSandboxId:08cf884b4c94ddd4e0ecf91bc231859f112239948e12c9c2c53f1a8afb0641dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721178850624937144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4xjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03f7291c-a0c7-42ce-b786-dc71e57b7792,},Annotations:map[string]string{io.kubernetes.container.hash: b78da48,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerP
ort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ebdfeb8ae5c7b85a35e497aa249c67f9cc5dc9efaf0e5f5826e727e739cd18d,PodSandboxId:45e8a55750e51f0a82a8688a98011d3bdad1a577d85b20f77c08255c77bb3080,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721178850558973294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: df859607-80ac-43ae-a91c-d10ef995b6dc,},Annotations:map[string]string{io.kubernetes.container.hash: 27c35851,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fee30035a5397ff83bf2c0b6a33399f0e7da88a1229f0a8fe067dbfb6a07b779,PodSandboxId:f4485605cb6b057913de77d2598ad7c1132d494b61d7be36429abb860f022e7e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_EXITED,CreatedAt:1721178838531620319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-97pxj,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: cf14e761-3074-4396-9730-f5dd63d79c1c,},Annotations:map[string]string{io.kubernetes.container.hash: 787a85a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02159611beb77b72f88f037018edb8c4dcba98bc873042b2348f49a974b4253e,PodSandboxId:609ddc457d2600a8a307931c800e315694698abe9654ba6fa096f7b1a59117f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721178833677729246,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4qbwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 0993395b-fc50-4564-b36e-83cc2a2113cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4d3642c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dac5d8c8d8c1661aaa39b63c3c2d92f556e2d0280a8cf364f7a3beff50a95e2,PodSandboxId:26b1f925c432f0dcfbb5bac0724a8d35261cfb567f21218c298fa532f93e4170,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721178814665534609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025900,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 0c9c7b99a7270f653fe2931cf5abd6c3,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7539247491f8361cb2a800e0adcc8d311eed5f77dc65eddeff7ffc89beffb31,PodSandboxId:80a245704d7e983897bf4d03765b9bec645a9821e01e6b46e4a4ab394b8c93d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721178814668736209,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: a800b9f41be042b50f12b352cf5787b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2af9c12a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d35d12e08dd5f99d8307485dda69263c87c063b3f3f6879c55b60bc5db183994,PodSandboxId:be47f38f9c9b81c7f8f1d028facde10189ab6f32400892697aba65b0c0dba416,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721178814660148682,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb3d56
003a29e1eb05a5107768912f8e,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1925767a2697d3fb947baaba421e367c09374b379e84ddb0b39bda48f06a2a71,PodSandboxId:36406e5752bc67f659c92e9244c3dc19aa05828ba343dba359963fd666e8bcc3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721178814663751962,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f23774afdb04c8dd10644dbbae2e078b,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 49104185,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bfbf9f7e-1aba-47db-ac77-405b5eff8dbf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d4b91a7f92bd2       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   8f4b5614ab6e6       busybox-fc5497c4f-mn98f
	452c33ce62633       a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda                                      About a minute ago   Running             kindnet-cni               1                   6331990a218a1       kindnet-97pxj
	1178a3606a57c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   6fad3987071e4       coredns-7db6d8ff4d-g4xjh
	c0b97863bc065       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   afd20e90b3502       storage-provisioner
	6f7408f53ed54       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      About a minute ago   Running             kube-proxy                1                   e4eff30722fe1       kube-proxy-4qbwm
	a775f97009473       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   12d81e4cf6ee7       etcd-multinode-025900
	c9cc3d795f248       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      About a minute ago   Running             kube-scheduler            1                   c35c8141a83ad       kube-scheduler-multinode-025900
	a60bf9ba16ac6       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      About a minute ago   Running             kube-apiserver            1                   7b21db87518f2       kube-apiserver-multinode-025900
	8fefc78079ae5       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      About a minute ago   Running             kube-controller-manager   1                   7b3b2d201398f       kube-controller-manager-multinode-025900
	502dbe19e4152       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   148c1844263fa       busybox-fc5497c4f-mn98f
	09725ffcca266       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago        Exited              coredns                   0                   08cf884b4c94d       coredns-7db6d8ff4d-g4xjh
	3ebdfeb8ae5c7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   45e8a55750e51       storage-provisioner
	fee30035a5397       docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381    8 minutes ago        Exited              kindnet-cni               0                   f4485605cb6b0       kindnet-97pxj
	02159611beb77       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      8 minutes ago        Exited              kube-proxy                0                   609ddc457d260       kube-proxy-4qbwm
	f7539247491f8       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      8 minutes ago        Exited              kube-apiserver            0                   80a245704d7e9       kube-apiserver-multinode-025900
	1dac5d8c8d8c1       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      8 minutes ago        Exited              kube-controller-manager   0                   26b1f925c432f       kube-controller-manager-multinode-025900
	1925767a2697d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   36406e5752bc6       etcd-multinode-025900
	d35d12e08dd5f       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      8 minutes ago        Exited              kube-scheduler            0                   be47f38f9c9b8       kube-scheduler-multinode-025900
	
	
	==> coredns [09725ffcca26633bf33d602cbf5f624ea598f95ce65d67180f6b62ccb6d063b8] <==
	[INFO] 10.244.1.2:57488 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001825991s
	[INFO] 10.244.1.2:43947 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000212704s
	[INFO] 10.244.1.2:48553 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00006526s
	[INFO] 10.244.1.2:47808 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001206902s
	[INFO] 10.244.1.2:38784 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010314s
	[INFO] 10.244.1.2:49953 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101727s
	[INFO] 10.244.1.2:45251 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059416s
	[INFO] 10.244.0.3:51683 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000074265s
	[INFO] 10.244.0.3:42518 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084171s
	[INFO] 10.244.0.3:56977 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000032928s
	[INFO] 10.244.0.3:51607 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000033581s
	[INFO] 10.244.1.2:45479 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000180609s
	[INFO] 10.244.1.2:35269 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127074s
	[INFO] 10.244.1.2:46044 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116711s
	[INFO] 10.244.1.2:59620 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000081133s
	[INFO] 10.244.0.3:46330 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150543s
	[INFO] 10.244.0.3:34563 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132773s
	[INFO] 10.244.0.3:44980 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000125994s
	[INFO] 10.244.0.3:34736 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000142877s
	[INFO] 10.244.1.2:33226 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00018839s
	[INFO] 10.244.1.2:41501 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000082637s
	[INFO] 10.244.1.2:58921 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112066s
	[INFO] 10.244.1.2:47694 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000075927s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [1178a3606a57c649c0f34b3a539ae72cfdaced76f1baca5b6b08b8493427335c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:44723 - 5105 "HINFO IN 4128149899772464554.5178966456455149612. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010460164s
	
	
	==> describe nodes <==
	Name:               multinode-025900
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-025900
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=multinode-025900
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T01_13_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:13:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-025900
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:22:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:20:37 +0000   Wed, 17 Jul 2024 01:13:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:20:37 +0000   Wed, 17 Jul 2024 01:13:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:20:37 +0000   Wed, 17 Jul 2024 01:13:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:20:37 +0000   Wed, 17 Jul 2024 01:14:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.81
	  Hostname:    multinode-025900
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e3070bef5ea84fdf8ae62c5daaef29b1
	  System UUID:                e3070bef-5ea8-4fdf-8ae6-2c5daaef29b1
	  Boot ID:                    6fec25f7-991b-4a4b-ba54-36a13a7c7a24
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mn98f                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 coredns-7db6d8ff4d-g4xjh                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m26s
	  kube-system                 etcd-multinode-025900                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m39s
	  kube-system                 kindnet-97pxj                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m26s
	  kube-system                 kube-apiserver-multinode-025900             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 kube-controller-manager-multinode-025900    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 kube-proxy-4qbwm                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 kube-scheduler-multinode-025900             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m25s                  kube-proxy       
	  Normal  Starting                 101s                   kube-proxy       
	  Normal  Starting                 8m45s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m45s (x8 over 8m45s)  kubelet          Node multinode-025900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m45s (x8 over 8m45s)  kubelet          Node multinode-025900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m45s (x7 over 8m45s)  kubelet          Node multinode-025900 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m40s                  kubelet          Node multinode-025900 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  8m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m40s                  kubelet          Node multinode-025900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m40s                  kubelet          Node multinode-025900 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m40s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m27s                  node-controller  Node multinode-025900 event: Registered Node multinode-025900 in Controller
	  Normal  NodeReady                8m9s                   kubelet          Node multinode-025900 status is now: NodeReady
	  Normal  Starting                 106s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  106s (x8 over 106s)    kubelet          Node multinode-025900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s (x8 over 106s)    kubelet          Node multinode-025900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s (x7 over 106s)    kubelet          Node multinode-025900 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  106s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           89s                    node-controller  Node multinode-025900 event: Registered Node multinode-025900 in Controller
	
	
	Name:               multinode-025900-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-025900-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=multinode-025900
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T01_21_17_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:21:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-025900-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:22:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:21:47 +0000   Wed, 17 Jul 2024 01:21:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:21:47 +0000   Wed, 17 Jul 2024 01:21:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:21:47 +0000   Wed, 17 Jul 2024 01:21:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:21:47 +0000   Wed, 17 Jul 2024 01:21:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.246
	  Hostname:    multinode-025900-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7ada8e85345f492e837685dd748ab793
	  System UUID:                7ada8e85-345f-492e-8376-85dd748ab793
	  Boot ID:                    0436973a-0d30-4ddd-b109-fbfc40815289
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-w4x47    0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kindnet-hj4p6              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m38s
	  kube-system                 kube-proxy-mhxlb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m32s                  kube-proxy  
	  Normal  Starting                 57s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m38s (x2 over 7m38s)  kubelet     Node multinode-025900-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m38s (x2 over 7m38s)  kubelet     Node multinode-025900-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m38s (x2 over 7m38s)  kubelet     Node multinode-025900-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m38s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m16s                  kubelet     Node multinode-025900-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  62s (x2 over 62s)      kubelet     Node multinode-025900-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x2 over 62s)      kubelet     Node multinode-025900-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x2 over 62s)      kubelet     Node multinode-025900-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  62s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                42s                    kubelet     Node multinode-025900-m02 status is now: NodeReady
	
	
	Name:               multinode-025900-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-025900-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=multinode-025900
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T01_21_57_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:21:57 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-025900-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:22:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:22:16 +0000   Wed, 17 Jul 2024 01:21:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:22:16 +0000   Wed, 17 Jul 2024 01:21:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:22:16 +0000   Wed, 17 Jul 2024 01:21:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:22:16 +0000   Wed, 17 Jul 2024 01:22:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    multinode-025900-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7145aa965f4940b0acf10e3b62145687
	  System UUID:                7145aa96-5f49-40b0-acf1-0e3b62145687
	  Boot ID:                    47620651-d994-410d-b0a4-fd7c9025669b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-cft79       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m40s
	  kube-system                 kube-proxy-kspmt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m35s                  kube-proxy  
	  Normal  Starting                 18s                    kube-proxy  
	  Normal  Starting                 5m45s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m41s (x2 over 6m41s)  kubelet     Node multinode-025900-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m41s (x2 over 6m41s)  kubelet     Node multinode-025900-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m41s (x2 over 6m41s)  kubelet     Node multinode-025900-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m40s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m19s                  kubelet     Node multinode-025900-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m50s (x2 over 5m50s)  kubelet     Node multinode-025900-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m50s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m50s (x2 over 5m50s)  kubelet     Node multinode-025900-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m50s (x2 over 5m50s)  kubelet     Node multinode-025900-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m30s                  kubelet     Node multinode-025900-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  22s (x2 over 22s)      kubelet     Node multinode-025900-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 22s)      kubelet     Node multinode-025900-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 22s)      kubelet     Node multinode-025900-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-025900-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.059920] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068363] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.176641] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.144391] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.279478] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +4.097374] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +4.617116] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.069110] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.008456] systemd-fstab-generator[1285]: Ignoring "noauto" option for root device
	[  +0.088066] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.028616] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.124424] systemd-fstab-generator[1473]: Ignoring "noauto" option for root device
	[  +5.779757] kauditd_printk_skb: 51 callbacks suppressed
	[Jul17 01:15] kauditd_printk_skb: 14 callbacks suppressed
	[Jul17 01:20] systemd-fstab-generator[2781]: Ignoring "noauto" option for root device
	[  +0.149349] systemd-fstab-generator[2793]: Ignoring "noauto" option for root device
	[  +0.168334] systemd-fstab-generator[2807]: Ignoring "noauto" option for root device
	[  +0.147617] systemd-fstab-generator[2819]: Ignoring "noauto" option for root device
	[  +0.280321] systemd-fstab-generator[2847]: Ignoring "noauto" option for root device
	[  +3.528721] systemd-fstab-generator[2945]: Ignoring "noauto" option for root device
	[  +2.093347] systemd-fstab-generator[3069]: Ignoring "noauto" option for root device
	[  +0.083975] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.081982] kauditd_printk_skb: 87 callbacks suppressed
	[ +14.308454] systemd-fstab-generator[3894]: Ignoring "noauto" option for root device
	[Jul17 01:21] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [1925767a2697d3fb947baaba421e367c09374b379e84ddb0b39bda48f06a2a71] <==
	{"level":"info","ts":"2024-07-17T01:13:35.126307Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:13:35.12777Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.81:2379"}
	{"level":"info","ts":"2024-07-17T01:13:35.130681Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T01:13:35.143003Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T01:13:35.143118Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-07-17T01:14:41.889681Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.050743ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17368009634777542372 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:710790be407a2ee3>","response":"size:42"}
	{"level":"info","ts":"2024-07-17T01:14:41.889902Z","caller":"traceutil/trace.go:171","msg":"trace[1934674145] linearizableReadLoop","detail":"{readStateIndex:517; appliedIndex:515; }","duration":"177.500515ms","start":"2024-07-17T01:14:41.712383Z","end":"2024-07-17T01:14:41.889884Z","steps":["trace[1934674145] 'read index received'  (duration: 14.155138ms)","trace[1934674145] 'applied index is now lower than readState.Index'  (duration: 163.344814ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T01:14:41.890214Z","caller":"traceutil/trace.go:171","msg":"trace[469549090] transaction","detail":"{read_only:false; response_revision:495; number_of_response:1; }","duration":"177.885337ms","start":"2024-07-17T01:14:41.712319Z","end":"2024-07-17T01:14:41.890205Z","steps":["trace[469549090] 'process raft request'  (duration: 177.500159ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T01:14:41.890444Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.034491ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-025900-m02\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-07-17T01:14:41.890495Z","caller":"traceutil/trace.go:171","msg":"trace[432564414] range","detail":"{range_begin:/registry/minions/multinode-025900-m02; range_end:; response_count:1; response_revision:495; }","duration":"178.117195ms","start":"2024-07-17T01:14:41.712367Z","end":"2024-07-17T01:14:41.890484Z","steps":["trace[432564414] 'agreement among raft nodes before linearized reading'  (duration: 177.99274ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:15:39.173832Z","caller":"traceutil/trace.go:171","msg":"trace[1389313653] linearizableReadLoop","detail":"{readStateIndex:672; appliedIndex:671; }","duration":"214.88295ms","start":"2024-07-17T01:15:38.958905Z","end":"2024-07-17T01:15:39.173788Z","steps":["trace[1389313653] 'read index received'  (duration: 213.855902ms)","trace[1389313653] 'applied index is now lower than readState.Index'  (duration: 1.026355ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T01:15:39.17417Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.192276ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-025900-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T01:15:39.174253Z","caller":"traceutil/trace.go:171","msg":"trace[1103145786] range","detail":"{range_begin:/registry/minions/multinode-025900-m03; range_end:; response_count:0; response_revision:632; }","duration":"215.352856ms","start":"2024-07-17T01:15:38.95888Z","end":"2024-07-17T01:15:39.174233Z","steps":["trace[1103145786] 'agreement among raft nodes before linearized reading'  (duration: 215.159508ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:15:39.174493Z","caller":"traceutil/trace.go:171","msg":"trace[1831000673] transaction","detail":"{read_only:false; response_revision:632; number_of_response:1; }","duration":"239.131206ms","start":"2024-07-17T01:15:38.93535Z","end":"2024-07-17T01:15:39.174481Z","steps":["trace[1831000673] 'process raft request'  (duration: 237.40356ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:15:39.174873Z","caller":"traceutil/trace.go:171","msg":"trace[452073269] transaction","detail":"{read_only:false; response_revision:633; number_of_response:1; }","duration":"177.360496ms","start":"2024-07-17T01:15:38.997497Z","end":"2024-07-17T01:15:39.174858Z","steps":["trace[452073269] 'process raft request'  (duration: 176.653439ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:18:55.343157Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-17T01:18:55.343265Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-025900","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.81:2380"],"advertise-client-urls":["https://192.168.39.81:2379"]}
	{"level":"warn","ts":"2024-07-17T01:18:55.343356Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.81:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T01:18:55.343395Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.81:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T01:18:55.343468Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T01:18:55.343536Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-17T01:18:55.431648Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"81f5d9acb096f107","current-leader-member-id":"81f5d9acb096f107"}
	{"level":"info","ts":"2024-07-17T01:18:55.434166Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.81:2380"}
	{"level":"info","ts":"2024-07-17T01:18:55.434303Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.81:2380"}
	{"level":"info","ts":"2024-07-17T01:18:55.434333Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-025900","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.81:2380"],"advertise-client-urls":["https://192.168.39.81:2379"]}
	
	
	==> etcd [a775f970094737d7af3e03852693d060924c7798b3121a5be1210cb187117137] <==
	{"level":"info","ts":"2024-07-17T01:20:34.556462Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T01:20:34.556545Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T01:20:34.557137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 switched to configuration voters=(9364630335907098887)"}
	{"level":"info","ts":"2024-07-17T01:20:34.55728Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a77bf2d9a9fbb59e","local-member-id":"81f5d9acb096f107","added-peer-id":"81f5d9acb096f107","added-peer-peer-urls":["https://192.168.39.81:2380"]}
	{"level":"info","ts":"2024-07-17T01:20:34.557562Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a77bf2d9a9fbb59e","local-member-id":"81f5d9acb096f107","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:20:34.560039Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:20:34.587723Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T01:20:34.59118Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.81:2380"}
	{"level":"info","ts":"2024-07-17T01:20:34.591237Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.81:2380"}
	{"level":"info","ts":"2024-07-17T01:20:34.591416Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"81f5d9acb096f107","initial-advertise-peer-urls":["https://192.168.39.81:2380"],"listen-peer-urls":["https://192.168.39.81:2380"],"advertise-client-urls":["https://192.168.39.81:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.81:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T01:20:34.591458Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T01:20:35.865185Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T01:20:35.86529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T01:20:35.865331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 received MsgPreVoteResp from 81f5d9acb096f107 at term 2"}
	{"level":"info","ts":"2024-07-17T01:20:35.865361Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 became candidate at term 3"}
	{"level":"info","ts":"2024-07-17T01:20:35.865395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 received MsgVoteResp from 81f5d9acb096f107 at term 3"}
	{"level":"info","ts":"2024-07-17T01:20:35.865423Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 became leader at term 3"}
	{"level":"info","ts":"2024-07-17T01:20:35.865453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 81f5d9acb096f107 elected leader 81f5d9acb096f107 at term 3"}
	{"level":"info","ts":"2024-07-17T01:20:35.87059Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"81f5d9acb096f107","local-member-attributes":"{Name:multinode-025900 ClientURLs:[https://192.168.39.81:2379]}","request-path":"/0/members/81f5d9acb096f107/attributes","cluster-id":"a77bf2d9a9fbb59e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T01:20:35.870673Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:20:35.870746Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:20:35.871485Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T01:20:35.871524Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T01:20:35.873452Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.81:2379"}
	{"level":"info","ts":"2024-07-17T01:20:35.873457Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 01:22:19 up 9 min,  0 users,  load average: 1.23, 0.51, 0.23
	Linux multinode-025900 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [452c33ce626336d2cbcd0200ceeb7fb464edcbbe4d98a471a5ada1e15c5dbf7e] <==
	I0717 01:21:39.063631       1 main.go:326] Node multinode-025900-m03 has CIDR [10.244.3.0/24] 
	I0717 01:21:49.063219       1 main.go:299] Handling node with IPs: map[192.168.39.81:{}]
	I0717 01:21:49.063494       1 main.go:303] handling current node
	I0717 01:21:49.063549       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0717 01:21:49.063579       1 main.go:326] Node multinode-025900-m02 has CIDR [10.244.1.0/24] 
	I0717 01:21:49.063750       1 main.go:299] Handling node with IPs: map[192.168.39.68:{}]
	I0717 01:21:49.063866       1 main.go:326] Node multinode-025900-m03 has CIDR [10.244.3.0/24] 
	I0717 01:21:59.063018       1 main.go:299] Handling node with IPs: map[192.168.39.81:{}]
	I0717 01:21:59.063164       1 main.go:303] handling current node
	I0717 01:21:59.063235       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0717 01:21:59.063278       1 main.go:326] Node multinode-025900-m02 has CIDR [10.244.1.0/24] 
	I0717 01:21:59.063473       1 main.go:299] Handling node with IPs: map[192.168.39.68:{}]
	I0717 01:21:59.063532       1 main.go:326] Node multinode-025900-m03 has CIDR [10.244.2.0/24] 
	I0717 01:22:09.063629       1 main.go:299] Handling node with IPs: map[192.168.39.81:{}]
	I0717 01:22:09.063798       1 main.go:303] handling current node
	I0717 01:22:09.063832       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0717 01:22:09.063891       1 main.go:326] Node multinode-025900-m02 has CIDR [10.244.1.0/24] 
	I0717 01:22:09.064135       1 main.go:299] Handling node with IPs: map[192.168.39.68:{}]
	I0717 01:22:09.064197       1 main.go:326] Node multinode-025900-m03 has CIDR [10.244.2.0/24] 
	I0717 01:22:19.065072       1 main.go:299] Handling node with IPs: map[192.168.39.68:{}]
	I0717 01:22:19.065102       1 main.go:326] Node multinode-025900-m03 has CIDR [10.244.2.0/24] 
	I0717 01:22:19.065311       1 main.go:299] Handling node with IPs: map[192.168.39.81:{}]
	I0717 01:22:19.065323       1 main.go:303] handling current node
	I0717 01:22:19.065334       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0717 01:22:19.065338       1 main.go:326] Node multinode-025900-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [fee30035a5397ff83bf2c0b6a33399f0e7da88a1229f0a8fe067dbfb6a07b779] <==
	I0717 01:18:09.554501       1 main.go:326] Node multinode-025900-m03 has CIDR [10.244.3.0/24] 
	I0717 01:18:19.553118       1 main.go:299] Handling node with IPs: map[192.168.39.81:{}]
	I0717 01:18:19.553222       1 main.go:303] handling current node
	I0717 01:18:19.553242       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0717 01:18:19.553258       1 main.go:326] Node multinode-025900-m02 has CIDR [10.244.1.0/24] 
	I0717 01:18:19.553447       1 main.go:299] Handling node with IPs: map[192.168.39.68:{}]
	I0717 01:18:19.553482       1 main.go:326] Node multinode-025900-m03 has CIDR [10.244.3.0/24] 
	I0717 01:18:29.551734       1 main.go:299] Handling node with IPs: map[192.168.39.81:{}]
	I0717 01:18:29.551861       1 main.go:303] handling current node
	I0717 01:18:29.551894       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0717 01:18:29.551900       1 main.go:326] Node multinode-025900-m02 has CIDR [10.244.1.0/24] 
	I0717 01:18:29.552101       1 main.go:299] Handling node with IPs: map[192.168.39.68:{}]
	I0717 01:18:29.552123       1 main.go:326] Node multinode-025900-m03 has CIDR [10.244.3.0/24] 
	I0717 01:18:39.556859       1 main.go:299] Handling node with IPs: map[192.168.39.81:{}]
	I0717 01:18:39.557023       1 main.go:303] handling current node
	I0717 01:18:39.557057       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0717 01:18:39.557076       1 main.go:326] Node multinode-025900-m02 has CIDR [10.244.1.0/24] 
	I0717 01:18:39.557199       1 main.go:299] Handling node with IPs: map[192.168.39.68:{}]
	I0717 01:18:39.557223       1 main.go:326] Node multinode-025900-m03 has CIDR [10.244.3.0/24] 
	I0717 01:18:49.560044       1 main.go:299] Handling node with IPs: map[192.168.39.81:{}]
	I0717 01:18:49.560197       1 main.go:303] handling current node
	I0717 01:18:49.560234       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0717 01:18:49.560254       1 main.go:326] Node multinode-025900-m02 has CIDR [10.244.1.0/24] 
	I0717 01:18:49.560420       1 main.go:299] Handling node with IPs: map[192.168.39.68:{}]
	I0717 01:18:49.560445       1 main.go:326] Node multinode-025900-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [a60bf9ba16ac61e0757355fbfca3e6bf4087417422c5e90eba9a6093060e0be0] <==
	I0717 01:20:37.222209       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 01:20:37.222669       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0717 01:20:37.223215       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0717 01:20:37.222908       1 shared_informer.go:320] Caches are synced for configmaps
	I0717 01:20:37.223726       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0717 01:20:37.223796       1 aggregator.go:165] initial CRD sync complete...
	I0717 01:20:37.223817       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 01:20:37.223822       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 01:20:37.223827       1 cache.go:39] Caches are synced for autoregister controller
	I0717 01:20:37.228346       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0717 01:20:37.228593       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 01:20:37.229162       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 01:20:37.231270       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 01:20:37.236753       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 01:20:37.236792       1 policy_source.go:224] refreshing policies
	E0717 01:20:37.240117       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0717 01:20:37.248563       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 01:20:38.128651       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 01:20:39.163719       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 01:20:39.331691       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 01:20:39.351326       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 01:20:39.432750       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 01:20:39.439496       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 01:20:50.218793       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 01:20:50.260431       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [f7539247491f8361cb2a800e0adcc8d311eed5f77dc65eddeff7ffc89beffb31] <==
	W0717 01:18:55.367156       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.367211       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.367603       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.367720       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.367795       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.367869       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.367930       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.368554       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.368846       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.368916       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.369063       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.369126       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.369235       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.369379       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.369421       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.369486       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.369537       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.369567       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.369813       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.367625       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.369684       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.370071       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.370325       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.370399       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.371023       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [1dac5d8c8d8c1661aaa39b63c3c2d92f556e2d0280a8cf364f7a3beff50a95e2] <==
	I0717 01:14:41.892635       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025900-m02\" does not exist"
	I0717 01:14:41.951373       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025900-m02" podCIDRs=["10.244.1.0/24"]
	I0717 01:14:42.250279       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025900-m02"
	I0717 01:15:03.142876       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025900-m02"
	I0717 01:15:05.498398       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.013537ms"
	I0717 01:15:05.522407       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.882551ms"
	I0717 01:15:05.546545       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.879671ms"
	I0717 01:15:05.546633       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.189µs"
	I0717 01:15:10.288832       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.971202ms"
	I0717 01:15:10.288913       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.823µs"
	I0717 01:15:10.748660       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.467465ms"
	I0717 01:15:10.749417       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.091µs"
	I0717 01:15:39.178534       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025900-m02"
	I0717 01:15:39.181085       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025900-m03\" does not exist"
	I0717 01:15:39.209515       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025900-m03" podCIDRs=["10.244.2.0/24"]
	I0717 01:15:42.277033       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025900-m03"
	I0717 01:16:00.350798       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025900-m02"
	I0717 01:16:28.931873       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025900-m02"
	I0717 01:16:29.903093       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025900-m03\" does not exist"
	I0717 01:16:29.903182       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025900-m02"
	I0717 01:16:29.912484       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025900-m03" podCIDRs=["10.244.3.0/24"]
	I0717 01:16:49.757064       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025900-m02"
	I0717 01:17:27.336519       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025900-m03"
	I0717 01:17:27.385920       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.57673ms"
	I0717 01:17:27.386357       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="182.934µs"
	
	
	==> kube-controller-manager [8fefc78079ae534d983463192be87a831caf618e85aaa27deed9ecb982be3216] <==
	I0717 01:20:50.457334       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 01:20:50.860879       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 01:20:50.861040       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0717 01:20:50.908251       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 01:21:12.932829       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.344135ms"
	I0717 01:21:12.944931       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.782787ms"
	I0717 01:21:12.945449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="117.007µs"
	I0717 01:21:17.140941       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025900-m02\" does not exist"
	I0717 01:21:17.149569       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025900-m02" podCIDRs=["10.244.1.0/24"]
	I0717 01:21:19.056540       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.844µs"
	I0717 01:21:19.068853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.082µs"
	I0717 01:21:19.106797       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.571µs"
	I0717 01:21:19.115830       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.152µs"
	I0717 01:21:19.120536       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.431µs"
	I0717 01:21:20.113567       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.482µs"
	I0717 01:21:37.489920       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025900-m02"
	I0717 01:21:37.518551       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="107.056µs"
	I0717 01:21:37.535574       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="95.066µs"
	I0717 01:21:42.460178       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.65861ms"
	I0717 01:21:42.460499       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.886µs"
	I0717 01:21:55.820610       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025900-m02"
	I0717 01:21:57.127577       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025900-m02"
	I0717 01:21:57.127701       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025900-m03\" does not exist"
	I0717 01:21:57.145555       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025900-m03" podCIDRs=["10.244.2.0/24"]
	I0717 01:22:16.067524       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025900-m02"
	
	
	==> kube-proxy [02159611beb77b72f88f037018edb8c4dcba98bc873042b2348f49a974b4253e] <==
	I0717 01:13:53.946142       1 server_linux.go:69] "Using iptables proxy"
	I0717 01:13:53.972503       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.81"]
	I0717 01:13:54.066058       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 01:13:54.066247       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 01:13:54.066273       1 server_linux.go:165] "Using iptables Proxier"
	I0717 01:13:54.069595       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 01:13:54.069925       1 server.go:872] "Version info" version="v1.30.2"
	I0717 01:13:54.070078       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:13:54.071122       1 config.go:192] "Starting service config controller"
	I0717 01:13:54.071152       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 01:13:54.071176       1 config.go:101] "Starting endpoint slice config controller"
	I0717 01:13:54.071180       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 01:13:54.071611       1 config.go:319] "Starting node config controller"
	I0717 01:13:54.071651       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 01:13:54.171323       1 shared_informer.go:320] Caches are synced for service config
	I0717 01:13:54.171380       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 01:13:54.171829       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [6f7408f53ed54301d1ac909f70778685b3042b8f99f9ca891315130401239235] <==
	I0717 01:20:38.169315       1 server_linux.go:69] "Using iptables proxy"
	I0717 01:20:38.187036       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.81"]
	I0717 01:20:38.258089       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 01:20:38.258122       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 01:20:38.258138       1 server_linux.go:165] "Using iptables Proxier"
	I0717 01:20:38.264713       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 01:20:38.272521       1 server.go:872] "Version info" version="v1.30.2"
	I0717 01:20:38.272538       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:20:38.281052       1 config.go:192] "Starting service config controller"
	I0717 01:20:38.284058       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 01:20:38.284275       1 config.go:101] "Starting endpoint slice config controller"
	I0717 01:20:38.284321       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 01:20:38.284861       1 config.go:319] "Starting node config controller"
	I0717 01:20:38.285657       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 01:20:38.384492       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 01:20:38.384668       1 shared_informer.go:320] Caches are synced for service config
	I0717 01:20:38.386395       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c9cc3d795f24816fab224839ea0e4f603d36a19e8f3b7b4a6bff27f7f2e32ee5] <==
	I0717 01:20:35.028848       1 serving.go:380] Generated self-signed cert in-memory
	W0717 01:20:37.184211       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 01:20:37.184439       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 01:20:37.184564       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 01:20:37.184593       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 01:20:37.201536       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0717 01:20:37.201576       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:20:37.203194       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0717 01:20:37.203417       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 01:20:37.203454       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 01:20:37.203487       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 01:20:37.303831       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d35d12e08dd5f99d8307485dda69263c87c063b3f3f6879c55b60bc5db183994] <==
	E0717 01:13:37.052609       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 01:13:37.053134       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 01:13:37.053176       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 01:13:37.962918       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 01:13:37.963043       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 01:13:37.970159       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 01:13:37.970202       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 01:13:38.040623       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 01:13:38.040710       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 01:13:38.143749       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 01:13:38.143844       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 01:13:38.186338       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 01:13:38.186424       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 01:13:38.203914       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 01:13:38.204098       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 01:13:38.206125       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 01:13:38.206216       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 01:13:38.241908       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 01:13:38.242053       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 01:13:38.259677       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 01:13:38.259903       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 01:13:38.271828       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 01:13:38.271918       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0717 01:13:39.940172       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0717 01:18:55.358356       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 17 01:20:34 multinode-025900 kubelet[3076]: E0717 01:20:34.378563    3076 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.81:8443: connect: connection refused
	Jul 17 01:20:34 multinode-025900 kubelet[3076]: I0717 01:20:34.770069    3076 kubelet_node_status.go:73] "Attempting to register node" node="multinode-025900"
	Jul 17 01:20:37 multinode-025900 kubelet[3076]: I0717 01:20:37.241446    3076 apiserver.go:52] "Watching apiserver"
	Jul 17 01:20:37 multinode-025900 kubelet[3076]: I0717 01:20:37.245784    3076 topology_manager.go:215] "Topology Admit Handler" podUID="03f7291c-a0c7-42ce-b786-dc71e57b7792" podNamespace="kube-system" podName="coredns-7db6d8ff4d-g4xjh"
	Jul 17 01:20:37 multinode-025900 kubelet[3076]: I0717 01:20:37.247043    3076 topology_manager.go:215] "Topology Admit Handler" podUID="cf14e761-3074-4396-9730-f5dd63d79c1c" podNamespace="kube-system" podName="kindnet-97pxj"
	Jul 17 01:20:37 multinode-025900 kubelet[3076]: I0717 01:20:37.247196    3076 topology_manager.go:215] "Topology Admit Handler" podUID="0993395b-fc50-4564-b36e-83cc2a2113cf" podNamespace="kube-system" podName="kube-proxy-4qbwm"
	Jul 17 01:20:37 multinode-025900 kubelet[3076]: I0717 01:20:37.247279    3076 topology_manager.go:215] "Topology Admit Handler" podUID="df859607-80ac-43ae-a91c-d10ef995b6dc" podNamespace="kube-system" podName="storage-provisioner"
	Jul 17 01:20:37 multinode-025900 kubelet[3076]: I0717 01:20:37.247346    3076 topology_manager.go:215] "Topology Admit Handler" podUID="3e227e80-de5e-4cc4-9c10-c4072dfb0ca6" podNamespace="default" podName="busybox-fc5497c4f-mn98f"
	Jul 17 01:20:37 multinode-025900 kubelet[3076]: I0717 01:20:37.256769    3076 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 17 01:20:37 multinode-025900 kubelet[3076]: I0717 01:20:37.284908    3076 kubelet_node_status.go:112] "Node was previously registered" node="multinode-025900"
	Jul 17 01:20:37 multinode-025900 kubelet[3076]: I0717 01:20:37.285054    3076 kubelet_node_status.go:76] "Successfully registered node" node="multinode-025900"
	Jul 17 01:20:37 multinode-025900 kubelet[3076]: I0717 01:20:37.286900    3076 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 17 01:20:37 multinode-025900 kubelet[3076]: I0717 01:20:37.287829    3076 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 17 01:20:37 multinode-025900 kubelet[3076]: I0717 01:20:37.291310    3076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/cf14e761-3074-4396-9730-f5dd63d79c1c-cni-cfg\") pod \"kindnet-97pxj\" (UID: \"cf14e761-3074-4396-9730-f5dd63d79c1c\") " pod="kube-system/kindnet-97pxj"
	Jul 17 01:20:37 multinode-025900 kubelet[3076]: I0717 01:20:37.291702    3076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0993395b-fc50-4564-b36e-83cc2a2113cf-lib-modules\") pod \"kube-proxy-4qbwm\" (UID: \"0993395b-fc50-4564-b36e-83cc2a2113cf\") " pod="kube-system/kube-proxy-4qbwm"
	Jul 17 01:20:37 multinode-025900 kubelet[3076]: I0717 01:20:37.291853    3076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/df859607-80ac-43ae-a91c-d10ef995b6dc-tmp\") pod \"storage-provisioner\" (UID: \"df859607-80ac-43ae-a91c-d10ef995b6dc\") " pod="kube-system/storage-provisioner"
	Jul 17 01:20:37 multinode-025900 kubelet[3076]: I0717 01:20:37.292230    3076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf14e761-3074-4396-9730-f5dd63d79c1c-lib-modules\") pod \"kindnet-97pxj\" (UID: \"cf14e761-3074-4396-9730-f5dd63d79c1c\") " pod="kube-system/kindnet-97pxj"
	Jul 17 01:20:37 multinode-025900 kubelet[3076]: I0717 01:20:37.292693    3076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0993395b-fc50-4564-b36e-83cc2a2113cf-xtables-lock\") pod \"kube-proxy-4qbwm\" (UID: \"0993395b-fc50-4564-b36e-83cc2a2113cf\") " pod="kube-system/kube-proxy-4qbwm"
	Jul 17 01:20:37 multinode-025900 kubelet[3076]: I0717 01:20:37.293087    3076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf14e761-3074-4396-9730-f5dd63d79c1c-xtables-lock\") pod \"kindnet-97pxj\" (UID: \"cf14e761-3074-4396-9730-f5dd63d79c1c\") " pod="kube-system/kindnet-97pxj"
	Jul 17 01:20:44 multinode-025900 kubelet[3076]: I0717 01:20:44.293061    3076 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 17 01:21:33 multinode-025900 kubelet[3076]: E0717 01:21:33.340338    3076 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:21:33 multinode-025900 kubelet[3076]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:21:33 multinode-025900 kubelet[3076]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:21:33 multinode-025900 kubelet[3076]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:21:33 multinode-025900 kubelet[3076]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:22:18.622358   43323 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19264-3908/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-025900 -n multinode-025900
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-025900 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (328.08s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 stop
E0717 01:22:58.379324   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-025900 stop: exit status 82 (2m0.46027227s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-025900-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-025900 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-025900 status: exit status 3 (18.663641467s)

                                                
                                                
-- stdout --
	multinode-025900
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-025900-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:24:41.818819   43992 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.246:22: connect: no route to host
	E0717 01:24:41.818849   43992 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.246:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-025900 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-025900 -n multinode-025900
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-025900 logs -n 25: (1.455461175s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-025900 ssh -n                                                                 | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | multinode-025900-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-025900 cp multinode-025900-m02:/home/docker/cp-test.txt                       | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | multinode-025900:/home/docker/cp-test_multinode-025900-m02_multinode-025900.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-025900 ssh -n                                                                 | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | multinode-025900-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-025900 ssh -n multinode-025900 sudo cat                                       | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | /home/docker/cp-test_multinode-025900-m02_multinode-025900.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-025900 cp multinode-025900-m02:/home/docker/cp-test.txt                       | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | multinode-025900-m03:/home/docker/cp-test_multinode-025900-m02_multinode-025900-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-025900 ssh -n                                                                 | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | multinode-025900-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-025900 ssh -n multinode-025900-m03 sudo cat                                   | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | /home/docker/cp-test_multinode-025900-m02_multinode-025900-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-025900 cp testdata/cp-test.txt                                                | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | multinode-025900-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-025900 ssh -n                                                                 | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | multinode-025900-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-025900 cp multinode-025900-m03:/home/docker/cp-test.txt                       | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3937052499/001/cp-test_multinode-025900-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-025900 ssh -n                                                                 | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | multinode-025900-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-025900 cp multinode-025900-m03:/home/docker/cp-test.txt                       | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | multinode-025900:/home/docker/cp-test_multinode-025900-m03_multinode-025900.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-025900 ssh -n                                                                 | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | multinode-025900-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-025900 ssh -n multinode-025900 sudo cat                                       | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | /home/docker/cp-test_multinode-025900-m03_multinode-025900.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-025900 cp multinode-025900-m03:/home/docker/cp-test.txt                       | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | multinode-025900-m02:/home/docker/cp-test_multinode-025900-m03_multinode-025900-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-025900 ssh -n                                                                 | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | multinode-025900-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-025900 ssh -n multinode-025900-m02 sudo cat                                   | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | /home/docker/cp-test_multinode-025900-m03_multinode-025900-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-025900 node stop m03                                                          | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	| node    | multinode-025900 node start                                                             | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-025900                                                                | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC |                     |
	| stop    | -p multinode-025900                                                                     | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC |                     |
	| start   | -p multinode-025900                                                                     | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:18 UTC | 17 Jul 24 01:22 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-025900                                                                | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:22 UTC |                     |
	| node    | multinode-025900 node delete                                                            | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:22 UTC | 17 Jul 24 01:22 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-025900 stop                                                                   | multinode-025900 | jenkins | v1.33.1 | 17 Jul 24 01:22 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 01:18:54
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 01:18:54.327208   42204 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:18:54.327303   42204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:18:54.327308   42204 out.go:304] Setting ErrFile to fd 2...
	I0717 01:18:54.327312   42204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:18:54.327474   42204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 01:18:54.327989   42204 out.go:298] Setting JSON to false
	I0717 01:18:54.328862   42204 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3676,"bootTime":1721175458,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:18:54.328914   42204 start.go:139] virtualization: kvm guest
	I0717 01:18:54.331143   42204 out.go:177] * [multinode-025900] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:18:54.332472   42204 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 01:18:54.332492   42204 notify.go:220] Checking for updates...
	I0717 01:18:54.335012   42204 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:18:54.336375   42204 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:18:54.337593   42204 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 01:18:54.338775   42204 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:18:54.340009   42204 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:18:54.341624   42204 config.go:182] Loaded profile config "multinode-025900": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:18:54.341747   42204 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:18:54.342215   42204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:18:54.342269   42204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:18:54.358168   42204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35567
	I0717 01:18:54.358611   42204 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:18:54.359269   42204 main.go:141] libmachine: Using API Version  1
	I0717 01:18:54.359291   42204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:18:54.359780   42204 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:18:54.359962   42204 main.go:141] libmachine: (multinode-025900) Calling .DriverName
	I0717 01:18:54.395275   42204 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 01:18:54.397064   42204 start.go:297] selected driver: kvm2
	I0717 01:18:54.397080   42204 start.go:901] validating driver "kvm2" against &{Name:multinode-025900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-025900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.68 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:18:54.397225   42204 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:18:54.397533   42204 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:18:54.397594   42204 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19264-3908/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:18:54.412184   42204 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:18:54.412838   42204 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:18:54.412895   42204 cni.go:84] Creating CNI manager for ""
	I0717 01:18:54.412906   42204 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0717 01:18:54.412967   42204 start.go:340] cluster config:
	{Name:multinode-025900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-025900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.68 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:18:54.413102   42204 iso.go:125] acquiring lock: {Name:mk1e382ea32906eee794c9b90164432d2562b456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:18:54.414878   42204 out.go:177] * Starting "multinode-025900" primary control-plane node in "multinode-025900" cluster
	I0717 01:18:54.416278   42204 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:18:54.416314   42204 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 01:18:54.416323   42204 cache.go:56] Caching tarball of preloaded images
	I0717 01:18:54.416403   42204 preload.go:172] Found /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 01:18:54.416413   42204 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 01:18:54.416530   42204 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/multinode-025900/config.json ...
	I0717 01:18:54.416715   42204 start.go:360] acquireMachinesLock for multinode-025900: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:18:54.416770   42204 start.go:364] duration metric: took 32.615µs to acquireMachinesLock for "multinode-025900"
	I0717 01:18:54.416794   42204 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:18:54.416804   42204 fix.go:54] fixHost starting: 
	I0717 01:18:54.417082   42204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:18:54.417112   42204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:18:54.431615   42204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43705
	I0717 01:18:54.432024   42204 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:18:54.432606   42204 main.go:141] libmachine: Using API Version  1
	I0717 01:18:54.432629   42204 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:18:54.432938   42204 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:18:54.433118   42204 main.go:141] libmachine: (multinode-025900) Calling .DriverName
	I0717 01:18:54.433263   42204 main.go:141] libmachine: (multinode-025900) Calling .GetState
	I0717 01:18:54.434709   42204 fix.go:112] recreateIfNeeded on multinode-025900: state=Running err=<nil>
	W0717 01:18:54.434745   42204 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:18:54.436640   42204 out.go:177] * Updating the running kvm2 "multinode-025900" VM ...
	I0717 01:18:54.437824   42204 machine.go:94] provisionDockerMachine start ...
	I0717 01:18:54.437840   42204 main.go:141] libmachine: (multinode-025900) Calling .DriverName
	I0717 01:18:54.438035   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHHostname
	I0717 01:18:54.440632   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:18:54.441126   42204 main.go:141] libmachine: (multinode-025900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d8:11", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:14 +0000 UTC Type:0 Mac:52:54:00:20:d8:11 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-025900 Clientid:01:52:54:00:20:d8:11}
	I0717 01:18:54.441161   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined IP address 192.168.39.81 and MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:18:54.441287   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHPort
	I0717 01:18:54.441441   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHKeyPath
	I0717 01:18:54.441593   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHKeyPath
	I0717 01:18:54.441728   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHUsername
	I0717 01:18:54.441885   42204 main.go:141] libmachine: Using SSH client type: native
	I0717 01:18:54.442145   42204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I0717 01:18:54.442159   42204 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:18:54.556368   42204 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-025900
	
	I0717 01:18:54.556415   42204 main.go:141] libmachine: (multinode-025900) Calling .GetMachineName
	I0717 01:18:54.556679   42204 buildroot.go:166] provisioning hostname "multinode-025900"
	I0717 01:18:54.556705   42204 main.go:141] libmachine: (multinode-025900) Calling .GetMachineName
	I0717 01:18:54.556906   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHHostname
	I0717 01:18:54.559573   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:18:54.559965   42204 main.go:141] libmachine: (multinode-025900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d8:11", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:14 +0000 UTC Type:0 Mac:52:54:00:20:d8:11 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-025900 Clientid:01:52:54:00:20:d8:11}
	I0717 01:18:54.559994   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined IP address 192.168.39.81 and MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:18:54.560132   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHPort
	I0717 01:18:54.560344   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHKeyPath
	I0717 01:18:54.560525   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHKeyPath
	I0717 01:18:54.560645   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHUsername
	I0717 01:18:54.560811   42204 main.go:141] libmachine: Using SSH client type: native
	I0717 01:18:54.561045   42204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I0717 01:18:54.561064   42204 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-025900 && echo "multinode-025900" | sudo tee /etc/hostname
	I0717 01:18:54.694675   42204 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-025900
	
	I0717 01:18:54.694699   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHHostname
	I0717 01:18:54.697386   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:18:54.697739   42204 main.go:141] libmachine: (multinode-025900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d8:11", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:14 +0000 UTC Type:0 Mac:52:54:00:20:d8:11 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-025900 Clientid:01:52:54:00:20:d8:11}
	I0717 01:18:54.697779   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined IP address 192.168.39.81 and MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:18:54.697914   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHPort
	I0717 01:18:54.698106   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHKeyPath
	I0717 01:18:54.698287   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHKeyPath
	I0717 01:18:54.698424   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHUsername
	I0717 01:18:54.698615   42204 main.go:141] libmachine: Using SSH client type: native
	I0717 01:18:54.698772   42204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I0717 01:18:54.698794   42204 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-025900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-025900/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-025900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:18:54.811646   42204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:18:54.811687   42204 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:18:54.811708   42204 buildroot.go:174] setting up certificates
	I0717 01:18:54.811717   42204 provision.go:84] configureAuth start
	I0717 01:18:54.811729   42204 main.go:141] libmachine: (multinode-025900) Calling .GetMachineName
	I0717 01:18:54.811976   42204 main.go:141] libmachine: (multinode-025900) Calling .GetIP
	I0717 01:18:54.814832   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:18:54.815277   42204 main.go:141] libmachine: (multinode-025900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d8:11", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:14 +0000 UTC Type:0 Mac:52:54:00:20:d8:11 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-025900 Clientid:01:52:54:00:20:d8:11}
	I0717 01:18:54.815301   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined IP address 192.168.39.81 and MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:18:54.815448   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHHostname
	I0717 01:18:54.817492   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:18:54.817780   42204 main.go:141] libmachine: (multinode-025900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d8:11", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:14 +0000 UTC Type:0 Mac:52:54:00:20:d8:11 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-025900 Clientid:01:52:54:00:20:d8:11}
	I0717 01:18:54.817824   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined IP address 192.168.39.81 and MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:18:54.817930   42204 provision.go:143] copyHostCerts
	I0717 01:18:54.817955   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:18:54.817996   42204 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:18:54.818009   42204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:18:54.818080   42204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:18:54.818168   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:18:54.818191   42204 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:18:54.818198   42204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:18:54.818222   42204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:18:54.818273   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:18:54.818288   42204 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:18:54.818292   42204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:18:54.818312   42204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:18:54.818368   42204 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.multinode-025900 san=[127.0.0.1 192.168.39.81 localhost minikube multinode-025900]
	I0717 01:18:55.044205   42204 provision.go:177] copyRemoteCerts
	I0717 01:18:55.044265   42204 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:18:55.044295   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHHostname
	I0717 01:18:55.047121   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:18:55.047495   42204 main.go:141] libmachine: (multinode-025900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d8:11", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:14 +0000 UTC Type:0 Mac:52:54:00:20:d8:11 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-025900 Clientid:01:52:54:00:20:d8:11}
	I0717 01:18:55.047528   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined IP address 192.168.39.81 and MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:18:55.047665   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHPort
	I0717 01:18:55.047865   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHKeyPath
	I0717 01:18:55.048024   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHUsername
	I0717 01:18:55.048180   42204 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/multinode-025900/id_rsa Username:docker}
	I0717 01:18:55.134054   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 01:18:55.134116   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0717 01:18:55.159752   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 01:18:55.159829   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 01:18:55.184867   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 01:18:55.184930   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:18:55.210888   42204 provision.go:87] duration metric: took 399.158127ms to configureAuth
	I0717 01:18:55.210917   42204 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:18:55.211161   42204 config.go:182] Loaded profile config "multinode-025900": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:18:55.211235   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHHostname
	I0717 01:18:55.213940   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:18:55.214342   42204 main.go:141] libmachine: (multinode-025900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d8:11", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:14 +0000 UTC Type:0 Mac:52:54:00:20:d8:11 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-025900 Clientid:01:52:54:00:20:d8:11}
	I0717 01:18:55.214370   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined IP address 192.168.39.81 and MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:18:55.214600   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHPort
	I0717 01:18:55.214824   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHKeyPath
	I0717 01:18:55.214976   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHKeyPath
	I0717 01:18:55.215092   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHUsername
	I0717 01:18:55.215215   42204 main.go:141] libmachine: Using SSH client type: native
	I0717 01:18:55.215392   42204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I0717 01:18:55.215413   42204 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:20:25.991134   42204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:20:25.991167   42204 machine.go:97] duration metric: took 1m31.553331393s to provisionDockerMachine
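	[Aside, not part of the log: the printf|tee command above writes a systemd environment drop-in for CRI-O and restarts the service; note the restart spanned 01:18:55 to 01:20:25, roughly 90s of the 1m31s provisioning time reported here. A quick manual check inside the VM, assuming standard coreutils and systemd, would be:]
	    cat /etc/sysconfig/crio.minikube   # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    systemctl is-active crio           # expect: active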
	I0717 01:20:25.991180   42204 start.go:293] postStartSetup for "multinode-025900" (driver="kvm2")
	I0717 01:20:25.991195   42204 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:20:25.991221   42204 main.go:141] libmachine: (multinode-025900) Calling .DriverName
	I0717 01:20:25.991527   42204 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:20:25.991554   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHHostname
	I0717 01:20:25.994613   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:20:25.995171   42204 main.go:141] libmachine: (multinode-025900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d8:11", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:14 +0000 UTC Type:0 Mac:52:54:00:20:d8:11 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-025900 Clientid:01:52:54:00:20:d8:11}
	I0717 01:20:25.995202   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined IP address 192.168.39.81 and MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:20:25.995381   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHPort
	I0717 01:20:25.995581   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHKeyPath
	I0717 01:20:25.995750   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHUsername
	I0717 01:20:25.995860   42204 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/multinode-025900/id_rsa Username:docker}
	I0717 01:20:26.082624   42204 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:20:26.087103   42204 command_runner.go:130] > NAME=Buildroot
	I0717 01:20:26.087122   42204 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0717 01:20:26.087126   42204 command_runner.go:130] > ID=buildroot
	I0717 01:20:26.087131   42204 command_runner.go:130] > VERSION_ID=2023.02.9
	I0717 01:20:26.087136   42204 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0717 01:20:26.087328   42204 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:20:26.087346   42204 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:20:26.087412   42204 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:20:26.087481   42204 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:20:26.087490   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> /etc/ssl/certs/112592.pem
	I0717 01:20:26.087569   42204 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:20:26.098274   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
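	[Aside, not part of the log: the synced local asset lands at /etc/ssl/certs/112592.pem in the guest; assuming it is a PEM-encoded certificate, its subject and validity window can be checked with:]
	    openssl x509 -in /etc/ssl/certs/112592.pem -noout -subject -dates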
	I0717 01:20:26.123198   42204 start.go:296] duration metric: took 132.001245ms for postStartSetup
	I0717 01:20:26.123236   42204 fix.go:56] duration metric: took 1m31.706433168s for fixHost
	I0717 01:20:26.123256   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHHostname
	I0717 01:20:26.125986   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:20:26.126336   42204 main.go:141] libmachine: (multinode-025900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d8:11", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:14 +0000 UTC Type:0 Mac:52:54:00:20:d8:11 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-025900 Clientid:01:52:54:00:20:d8:11}
	I0717 01:20:26.126375   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined IP address 192.168.39.81 and MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:20:26.126523   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHPort
	I0717 01:20:26.126710   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHKeyPath
	I0717 01:20:26.126874   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHKeyPath
	I0717 01:20:26.127031   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHUsername
	I0717 01:20:26.127150   42204 main.go:141] libmachine: Using SSH client type: native
	I0717 01:20:26.127294   42204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I0717 01:20:26.127303   42204 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:20:26.239432   42204 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721179226.223766527
	
	I0717 01:20:26.239456   42204 fix.go:216] guest clock: 1721179226.223766527
	I0717 01:20:26.239466   42204 fix.go:229] Guest: 2024-07-17 01:20:26.223766527 +0000 UTC Remote: 2024-07-17 01:20:26.123240701 +0000 UTC m=+91.832936562 (delta=100.525826ms)
	I0717 01:20:26.239520   42204 fix.go:200] guest clock delta is within tolerance: 100.525826ms
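	[Aside, not part of the log: the guest-clock check above compares `date +%s.%N` run in the VM against the host time when the SSH command returns. A rough manual equivalent, using the SSH key and user shown earlier and assuming bc is installed on the host, would be:]
	    guest=$(ssh -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/multinode-025900/id_rsa docker@192.168.39.81 'date +%s.%N')
	    host=$(date +%s.%N)
	    echo "guest-host delta: $(echo "$guest - $host" | bc) s"   # expected to stay within minikube's tolerance, ~100ms here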
	I0717 01:20:26.239536   42204 start.go:83] releasing machines lock for "multinode-025900", held for 1m31.822754441s
	I0717 01:20:26.239576   42204 main.go:141] libmachine: (multinode-025900) Calling .DriverName
	I0717 01:20:26.239817   42204 main.go:141] libmachine: (multinode-025900) Calling .GetIP
	I0717 01:20:26.242398   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:20:26.242768   42204 main.go:141] libmachine: (multinode-025900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d8:11", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:14 +0000 UTC Type:0 Mac:52:54:00:20:d8:11 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-025900 Clientid:01:52:54:00:20:d8:11}
	I0717 01:20:26.242787   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined IP address 192.168.39.81 and MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:20:26.242932   42204 main.go:141] libmachine: (multinode-025900) Calling .DriverName
	I0717 01:20:26.243429   42204 main.go:141] libmachine: (multinode-025900) Calling .DriverName
	I0717 01:20:26.243595   42204 main.go:141] libmachine: (multinode-025900) Calling .DriverName
	I0717 01:20:26.243682   42204 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:20:26.243718   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHHostname
	I0717 01:20:26.243833   42204 ssh_runner.go:195] Run: cat /version.json
	I0717 01:20:26.243851   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHHostname
	I0717 01:20:26.246315   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:20:26.246594   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:20:26.246677   42204 main.go:141] libmachine: (multinode-025900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d8:11", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:14 +0000 UTC Type:0 Mac:52:54:00:20:d8:11 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-025900 Clientid:01:52:54:00:20:d8:11}
	I0717 01:20:26.246705   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined IP address 192.168.39.81 and MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:20:26.246811   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHPort
	I0717 01:20:26.246967   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHKeyPath
	I0717 01:20:26.247100   42204 main.go:141] libmachine: (multinode-025900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d8:11", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:14 +0000 UTC Type:0 Mac:52:54:00:20:d8:11 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-025900 Clientid:01:52:54:00:20:d8:11}
	I0717 01:20:26.247121   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHUsername
	I0717 01:20:26.247124   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined IP address 192.168.39.81 and MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:20:26.247248   42204 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/multinode-025900/id_rsa Username:docker}
	I0717 01:20:26.247329   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHPort
	I0717 01:20:26.247485   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHKeyPath
	I0717 01:20:26.247637   42204 main.go:141] libmachine: (multinode-025900) Calling .GetSSHUsername
	I0717 01:20:26.247775   42204 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/multinode-025900/id_rsa Username:docker}
	I0717 01:20:26.327482   42204 command_runner.go:130] > {"iso_version": "v1.33.1-1721146474-19264", "kicbase_version": "v0.0.44-1721064868-19249", "minikube_version": "v1.33.1", "commit": "6e0d7ef26437c947028f356d4449a323918e966e"}
	I0717 01:20:26.350274   42204 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0717 01:20:26.351235   42204 ssh_runner.go:195] Run: systemctl --version
	I0717 01:20:26.357005   42204 command_runner.go:130] > systemd 252 (252)
	I0717 01:20:26.357063   42204 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0717 01:20:26.357268   42204 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:20:26.521979   42204 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 01:20:26.528454   42204 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0717 01:20:26.528524   42204 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:20:26.528592   42204 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:20:26.538169   42204 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 01:20:26.538194   42204 start.go:495] detecting cgroup driver to use...
	I0717 01:20:26.538252   42204 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:20:26.554046   42204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:20:26.568329   42204 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:20:26.568380   42204 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:20:26.583108   42204 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:20:26.597131   42204 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:20:26.747821   42204 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:20:26.890484   42204 docker.go:233] disabling docker service ...
	I0717 01:20:26.890560   42204 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:20:26.908093   42204 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:20:26.922120   42204 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:20:27.063171   42204 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:20:27.203982   42204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:20:27.218501   42204 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:20:27.238127   42204 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
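	[Aside, not part of the log: the printf|tee command above points crictl at the CRI-O socket via /etc/crictl.yaml; a quick sanity check inside the VM would be:]
	    cat /etc/crictl.yaml                 # runtime-endpoint: unix:///var/run/crio/crio.sock
	    sudo crictl info >/dev/null && echo "crictl can reach CRI-O"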
	I0717 01:20:27.238717   42204 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 01:20:27.238780   42204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:20:27.249699   42204 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:20:27.249761   42204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:20:27.260861   42204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:20:27.271139   42204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:20:27.281781   42204 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:20:27.292674   42204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:20:27.303107   42204 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:20:27.314308   42204 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
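	[Aside, not part of the log: the sed edits above set the pause image, switch the cgroup manager to cgroupfs, pin conmon to the "pod" cgroup, and add the unprivileged-port sysctl in /etc/crio/crio.conf.d/02-crio.conf. A grep over that file after the edits should show roughly the values sketched below (not captured from the VM):]
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # expected (sketch): pause_image = "registry.k8s.io/pause:3.9", cgroup_manager = "cgroupfs",
	    #                    conmon_cgroup = "pod", "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls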
	I0717 01:20:27.324522   42204 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:20:27.333662   42204 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0717 01:20:27.333751   42204 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:20:27.342969   42204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:20:27.484020   42204 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:20:30.534979   42204 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.050927224s)
	I0717 01:20:30.535008   42204 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:20:30.535062   42204 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:20:30.539822   42204 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0717 01:20:30.539839   42204 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0717 01:20:30.539846   42204 command_runner.go:130] > Device: 0,22	Inode: 1341        Links: 1
	I0717 01:20:30.539853   42204 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 01:20:30.539858   42204 command_runner.go:130] > Access: 2024-07-17 01:20:30.413962085 +0000
	I0717 01:20:30.539892   42204 command_runner.go:130] > Modify: 2024-07-17 01:20:30.413962085 +0000
	I0717 01:20:30.539902   42204 command_runner.go:130] > Change: 2024-07-17 01:20:30.413962085 +0000
	I0717 01:20:30.539907   42204 command_runner.go:130] >  Birth: -
	I0717 01:20:30.540043   42204 start.go:563] Will wait 60s for crictl version
	I0717 01:20:30.540101   42204 ssh_runner.go:195] Run: which crictl
	I0717 01:20:30.543980   42204 command_runner.go:130] > /usr/bin/crictl
	I0717 01:20:30.544038   42204 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:20:30.591683   42204 command_runner.go:130] > Version:  0.1.0
	I0717 01:20:30.591706   42204 command_runner.go:130] > RuntimeName:  cri-o
	I0717 01:20:30.591713   42204 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0717 01:20:30.591720   42204 command_runner.go:130] > RuntimeApiVersion:  v1
	I0717 01:20:30.593536   42204 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:20:30.593598   42204 ssh_runner.go:195] Run: crio --version
	I0717 01:20:30.621749   42204 command_runner.go:130] > crio version 1.29.1
	I0717 01:20:30.621774   42204 command_runner.go:130] > Version:        1.29.1
	I0717 01:20:30.621787   42204 command_runner.go:130] > GitCommit:      unknown
	I0717 01:20:30.621794   42204 command_runner.go:130] > GitCommitDate:  unknown
	I0717 01:20:30.621800   42204 command_runner.go:130] > GitTreeState:   clean
	I0717 01:20:30.621812   42204 command_runner.go:130] > BuildDate:      2024-07-16T21:25:55Z
	I0717 01:20:30.621820   42204 command_runner.go:130] > GoVersion:      go1.21.6
	I0717 01:20:30.621825   42204 command_runner.go:130] > Compiler:       gc
	I0717 01:20:30.621832   42204 command_runner.go:130] > Platform:       linux/amd64
	I0717 01:20:30.621837   42204 command_runner.go:130] > Linkmode:       dynamic
	I0717 01:20:30.621845   42204 command_runner.go:130] > BuildTags:      
	I0717 01:20:30.621854   42204 command_runner.go:130] >   containers_image_ostree_stub
	I0717 01:20:30.621861   42204 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0717 01:20:30.621869   42204 command_runner.go:130] >   btrfs_noversion
	I0717 01:20:30.621877   42204 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0717 01:20:30.621884   42204 command_runner.go:130] >   libdm_no_deferred_remove
	I0717 01:20:30.621893   42204 command_runner.go:130] >   seccomp
	I0717 01:20:30.621901   42204 command_runner.go:130] > LDFlags:          unknown
	I0717 01:20:30.621910   42204 command_runner.go:130] > SeccompEnabled:   true
	I0717 01:20:30.621918   42204 command_runner.go:130] > AppArmorEnabled:  false
	I0717 01:20:30.622970   42204 ssh_runner.go:195] Run: crio --version
	I0717 01:20:30.651933   42204 command_runner.go:130] > crio version 1.29.1
	I0717 01:20:30.651955   42204 command_runner.go:130] > Version:        1.29.1
	I0717 01:20:30.651960   42204 command_runner.go:130] > GitCommit:      unknown
	I0717 01:20:30.651964   42204 command_runner.go:130] > GitCommitDate:  unknown
	I0717 01:20:30.651968   42204 command_runner.go:130] > GitTreeState:   clean
	I0717 01:20:30.651974   42204 command_runner.go:130] > BuildDate:      2024-07-16T21:25:55Z
	I0717 01:20:30.651978   42204 command_runner.go:130] > GoVersion:      go1.21.6
	I0717 01:20:30.651982   42204 command_runner.go:130] > Compiler:       gc
	I0717 01:20:30.651986   42204 command_runner.go:130] > Platform:       linux/amd64
	I0717 01:20:30.651990   42204 command_runner.go:130] > Linkmode:       dynamic
	I0717 01:20:30.651994   42204 command_runner.go:130] > BuildTags:      
	I0717 01:20:30.651998   42204 command_runner.go:130] >   containers_image_ostree_stub
	I0717 01:20:30.652002   42204 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0717 01:20:30.652005   42204 command_runner.go:130] >   btrfs_noversion
	I0717 01:20:30.652009   42204 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0717 01:20:30.652013   42204 command_runner.go:130] >   libdm_no_deferred_remove
	I0717 01:20:30.652021   42204 command_runner.go:130] >   seccomp
	I0717 01:20:30.652025   42204 command_runner.go:130] > LDFlags:          unknown
	I0717 01:20:30.652037   42204 command_runner.go:130] > SeccompEnabled:   true
	I0717 01:20:30.652042   42204 command_runner.go:130] > AppArmorEnabled:  false
	I0717 01:20:30.655384   42204 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 01:20:30.656700   42204 main.go:141] libmachine: (multinode-025900) Calling .GetIP
	I0717 01:20:30.659120   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:20:30.659523   42204 main.go:141] libmachine: (multinode-025900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d8:11", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:14 +0000 UTC Type:0 Mac:52:54:00:20:d8:11 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-025900 Clientid:01:52:54:00:20:d8:11}
	I0717 01:20:30.659557   42204 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined IP address 192.168.39.81 and MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:20:30.659739   42204 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 01:20:30.664199   42204 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0717 01:20:30.664283   42204 kubeadm.go:883] updating cluster {Name:multinode-025900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-025900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.68 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:20:30.664411   42204 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:20:30.664449   42204 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:20:30.712650   42204 command_runner.go:130] > {
	I0717 01:20:30.712676   42204 command_runner.go:130] >   "images": [
	I0717 01:20:30.712681   42204 command_runner.go:130] >     {
	I0717 01:20:30.712689   42204 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0717 01:20:30.712694   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.712700   42204 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0717 01:20:30.712704   42204 command_runner.go:130] >       ],
	I0717 01:20:30.712708   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.712716   42204 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0717 01:20:30.712725   42204 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0717 01:20:30.712729   42204 command_runner.go:130] >       ],
	I0717 01:20:30.712733   42204 command_runner.go:130] >       "size": "65908273",
	I0717 01:20:30.712742   42204 command_runner.go:130] >       "uid": null,
	I0717 01:20:30.712746   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.712754   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.712758   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.712762   42204 command_runner.go:130] >     },
	I0717 01:20:30.712764   42204 command_runner.go:130] >     {
	I0717 01:20:30.712770   42204 command_runner.go:130] >       "id": "a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda",
	I0717 01:20:30.712774   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.712779   42204 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-f6ad1f6e"
	I0717 01:20:30.712783   42204 command_runner.go:130] >       ],
	I0717 01:20:30.712787   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.712797   42204 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381",
	I0717 01:20:30.712803   42204 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:d61a2b3d0a49f21f2556f20ae629282e5b4076940972ac659d8cda1cdc6f9a20"
	I0717 01:20:30.712807   42204 command_runner.go:130] >       ],
	I0717 01:20:30.712811   42204 command_runner.go:130] >       "size": "87166004",
	I0717 01:20:30.712817   42204 command_runner.go:130] >       "uid": null,
	I0717 01:20:30.712825   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.712832   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.712836   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.712840   42204 command_runner.go:130] >     },
	I0717 01:20:30.712843   42204 command_runner.go:130] >     {
	I0717 01:20:30.712849   42204 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0717 01:20:30.712854   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.712859   42204 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0717 01:20:30.712864   42204 command_runner.go:130] >       ],
	I0717 01:20:30.712868   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.712878   42204 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0717 01:20:30.712885   42204 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0717 01:20:30.712890   42204 command_runner.go:130] >       ],
	I0717 01:20:30.712895   42204 command_runner.go:130] >       "size": "1363676",
	I0717 01:20:30.712899   42204 command_runner.go:130] >       "uid": null,
	I0717 01:20:30.712902   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.712906   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.712911   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.712914   42204 command_runner.go:130] >     },
	I0717 01:20:30.712917   42204 command_runner.go:130] >     {
	I0717 01:20:30.712923   42204 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0717 01:20:30.712929   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.712934   42204 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0717 01:20:30.712939   42204 command_runner.go:130] >       ],
	I0717 01:20:30.712943   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.712952   42204 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0717 01:20:30.712964   42204 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0717 01:20:30.712970   42204 command_runner.go:130] >       ],
	I0717 01:20:30.712974   42204 command_runner.go:130] >       "size": "31470524",
	I0717 01:20:30.712980   42204 command_runner.go:130] >       "uid": null,
	I0717 01:20:30.712985   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.712997   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.713001   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.713007   42204 command_runner.go:130] >     },
	I0717 01:20:30.713018   42204 command_runner.go:130] >     {
	I0717 01:20:30.713025   42204 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0717 01:20:30.713031   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.713037   42204 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0717 01:20:30.713043   42204 command_runner.go:130] >       ],
	I0717 01:20:30.713047   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.713055   42204 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0717 01:20:30.713065   42204 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0717 01:20:30.713070   42204 command_runner.go:130] >       ],
	I0717 01:20:30.713074   42204 command_runner.go:130] >       "size": "61245718",
	I0717 01:20:30.713078   42204 command_runner.go:130] >       "uid": null,
	I0717 01:20:30.713084   42204 command_runner.go:130] >       "username": "nonroot",
	I0717 01:20:30.713088   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.713094   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.713097   42204 command_runner.go:130] >     },
	I0717 01:20:30.713100   42204 command_runner.go:130] >     {
	I0717 01:20:30.713106   42204 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0717 01:20:30.713111   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.713115   42204 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0717 01:20:30.713121   42204 command_runner.go:130] >       ],
	I0717 01:20:30.713125   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.713134   42204 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0717 01:20:30.713143   42204 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0717 01:20:30.713149   42204 command_runner.go:130] >       ],
	I0717 01:20:30.713153   42204 command_runner.go:130] >       "size": "150779692",
	I0717 01:20:30.713159   42204 command_runner.go:130] >       "uid": {
	I0717 01:20:30.713163   42204 command_runner.go:130] >         "value": "0"
	I0717 01:20:30.713169   42204 command_runner.go:130] >       },
	I0717 01:20:30.713173   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.713179   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.713183   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.713188   42204 command_runner.go:130] >     },
	I0717 01:20:30.713191   42204 command_runner.go:130] >     {
	I0717 01:20:30.713198   42204 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0717 01:20:30.713204   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.713209   42204 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0717 01:20:30.713215   42204 command_runner.go:130] >       ],
	I0717 01:20:30.713219   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.713229   42204 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0717 01:20:30.713238   42204 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0717 01:20:30.713244   42204 command_runner.go:130] >       ],
	I0717 01:20:30.713248   42204 command_runner.go:130] >       "size": "117609954",
	I0717 01:20:30.713253   42204 command_runner.go:130] >       "uid": {
	I0717 01:20:30.713257   42204 command_runner.go:130] >         "value": "0"
	I0717 01:20:30.713263   42204 command_runner.go:130] >       },
	I0717 01:20:30.713267   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.713271   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.713275   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.713279   42204 command_runner.go:130] >     },
	I0717 01:20:30.713284   42204 command_runner.go:130] >     {
	I0717 01:20:30.713290   42204 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0717 01:20:30.713296   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.713301   42204 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0717 01:20:30.713307   42204 command_runner.go:130] >       ],
	I0717 01:20:30.713312   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.713326   42204 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0717 01:20:30.713336   42204 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0717 01:20:30.713342   42204 command_runner.go:130] >       ],
	I0717 01:20:30.713346   42204 command_runner.go:130] >       "size": "112194888",
	I0717 01:20:30.713352   42204 command_runner.go:130] >       "uid": {
	I0717 01:20:30.713356   42204 command_runner.go:130] >         "value": "0"
	I0717 01:20:30.713362   42204 command_runner.go:130] >       },
	I0717 01:20:30.713366   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.713370   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.713373   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.713377   42204 command_runner.go:130] >     },
	I0717 01:20:30.713379   42204 command_runner.go:130] >     {
	I0717 01:20:30.713385   42204 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0717 01:20:30.713389   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.713393   42204 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0717 01:20:30.713397   42204 command_runner.go:130] >       ],
	I0717 01:20:30.713400   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.713419   42204 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0717 01:20:30.713427   42204 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0717 01:20:30.713431   42204 command_runner.go:130] >       ],
	I0717 01:20:30.713438   42204 command_runner.go:130] >       "size": "85953433",
	I0717 01:20:30.713442   42204 command_runner.go:130] >       "uid": null,
	I0717 01:20:30.713448   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.713452   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.713457   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.713461   42204 command_runner.go:130] >     },
	I0717 01:20:30.713466   42204 command_runner.go:130] >     {
	I0717 01:20:30.713472   42204 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0717 01:20:30.713478   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.713484   42204 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0717 01:20:30.713489   42204 command_runner.go:130] >       ],
	I0717 01:20:30.713493   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.713502   42204 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0717 01:20:30.713511   42204 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0717 01:20:30.713514   42204 command_runner.go:130] >       ],
	I0717 01:20:30.713521   42204 command_runner.go:130] >       "size": "63051080",
	I0717 01:20:30.713524   42204 command_runner.go:130] >       "uid": {
	I0717 01:20:30.713530   42204 command_runner.go:130] >         "value": "0"
	I0717 01:20:30.713533   42204 command_runner.go:130] >       },
	I0717 01:20:30.713540   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.713544   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.713550   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.713553   42204 command_runner.go:130] >     },
	I0717 01:20:30.713559   42204 command_runner.go:130] >     {
	I0717 01:20:30.713565   42204 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0717 01:20:30.713571   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.713576   42204 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0717 01:20:30.713581   42204 command_runner.go:130] >       ],
	I0717 01:20:30.713585   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.713593   42204 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0717 01:20:30.713600   42204 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0717 01:20:30.713605   42204 command_runner.go:130] >       ],
	I0717 01:20:30.713610   42204 command_runner.go:130] >       "size": "750414",
	I0717 01:20:30.713615   42204 command_runner.go:130] >       "uid": {
	I0717 01:20:30.713619   42204 command_runner.go:130] >         "value": "65535"
	I0717 01:20:30.713625   42204 command_runner.go:130] >       },
	I0717 01:20:30.713629   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.713635   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.713639   42204 command_runner.go:130] >       "pinned": true
	I0717 01:20:30.713644   42204 command_runner.go:130] >     }
	I0717 01:20:30.713648   42204 command_runner.go:130] >   ]
	I0717 01:20:30.713653   42204 command_runner.go:130] > }
	I0717 01:20:30.713833   42204 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:20:30.713844   42204 crio.go:433] Images already preloaded, skipping extraction
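	[Aside, not part of the log: the JSON dump above is the raw output of `sudo crictl images --output json`, which minikube parses to decide that all preloaded images are present. Assuming jq is available wherever the output is examined, the preloaded tags can be listed with:]
	    sudo crictl images --output json | jq -r '.images[].repoTags[]'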
	I0717 01:20:30.713906   42204 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:20:30.748555   42204 command_runner.go:130] > {
	I0717 01:20:30.748572   42204 command_runner.go:130] >   "images": [
	I0717 01:20:30.748576   42204 command_runner.go:130] >     {
	I0717 01:20:30.748584   42204 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0717 01:20:30.748588   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.748594   42204 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0717 01:20:30.748597   42204 command_runner.go:130] >       ],
	I0717 01:20:30.748601   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.748616   42204 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0717 01:20:30.748626   42204 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0717 01:20:30.748634   42204 command_runner.go:130] >       ],
	I0717 01:20:30.748641   42204 command_runner.go:130] >       "size": "65908273",
	I0717 01:20:30.748651   42204 command_runner.go:130] >       "uid": null,
	I0717 01:20:30.748657   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.748664   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.748671   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.748675   42204 command_runner.go:130] >     },
	I0717 01:20:30.748683   42204 command_runner.go:130] >     {
	I0717 01:20:30.748692   42204 command_runner.go:130] >       "id": "a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda",
	I0717 01:20:30.748699   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.748708   42204 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-f6ad1f6e"
	I0717 01:20:30.748719   42204 command_runner.go:130] >       ],
	I0717 01:20:30.748725   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.748734   42204 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381",
	I0717 01:20:30.748741   42204 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:d61a2b3d0a49f21f2556f20ae629282e5b4076940972ac659d8cda1cdc6f9a20"
	I0717 01:20:30.748746   42204 command_runner.go:130] >       ],
	I0717 01:20:30.748750   42204 command_runner.go:130] >       "size": "87166004",
	I0717 01:20:30.748754   42204 command_runner.go:130] >       "uid": null,
	I0717 01:20:30.748768   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.748774   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.748778   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.748783   42204 command_runner.go:130] >     },
	I0717 01:20:30.748788   42204 command_runner.go:130] >     {
	I0717 01:20:30.748794   42204 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0717 01:20:30.748799   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.748804   42204 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0717 01:20:30.748810   42204 command_runner.go:130] >       ],
	I0717 01:20:30.748814   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.748821   42204 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0717 01:20:30.748830   42204 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0717 01:20:30.748835   42204 command_runner.go:130] >       ],
	I0717 01:20:30.748839   42204 command_runner.go:130] >       "size": "1363676",
	I0717 01:20:30.748845   42204 command_runner.go:130] >       "uid": null,
	I0717 01:20:30.748849   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.748868   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.748874   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.748878   42204 command_runner.go:130] >     },
	I0717 01:20:30.748883   42204 command_runner.go:130] >     {
	I0717 01:20:30.748889   42204 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0717 01:20:30.748895   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.748900   42204 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0717 01:20:30.748905   42204 command_runner.go:130] >       ],
	I0717 01:20:30.748909   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.748918   42204 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0717 01:20:30.748930   42204 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0717 01:20:30.748936   42204 command_runner.go:130] >       ],
	I0717 01:20:30.748940   42204 command_runner.go:130] >       "size": "31470524",
	I0717 01:20:30.748945   42204 command_runner.go:130] >       "uid": null,
	I0717 01:20:30.748951   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.748955   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.748961   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.748964   42204 command_runner.go:130] >     },
	I0717 01:20:30.748970   42204 command_runner.go:130] >     {
	I0717 01:20:30.748976   42204 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0717 01:20:30.748982   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.748987   42204 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0717 01:20:30.748992   42204 command_runner.go:130] >       ],
	I0717 01:20:30.748996   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.749003   42204 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0717 01:20:30.749011   42204 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0717 01:20:30.749015   42204 command_runner.go:130] >       ],
	I0717 01:20:30.749022   42204 command_runner.go:130] >       "size": "61245718",
	I0717 01:20:30.749026   42204 command_runner.go:130] >       "uid": null,
	I0717 01:20:30.749030   42204 command_runner.go:130] >       "username": "nonroot",
	I0717 01:20:30.749034   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.749041   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.749044   42204 command_runner.go:130] >     },
	I0717 01:20:30.749048   42204 command_runner.go:130] >     {
	I0717 01:20:30.749054   42204 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0717 01:20:30.749058   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.749063   42204 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0717 01:20:30.749068   42204 command_runner.go:130] >       ],
	I0717 01:20:30.749072   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.749081   42204 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0717 01:20:30.749090   42204 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0717 01:20:30.749096   42204 command_runner.go:130] >       ],
	I0717 01:20:30.749101   42204 command_runner.go:130] >       "size": "150779692",
	I0717 01:20:30.749106   42204 command_runner.go:130] >       "uid": {
	I0717 01:20:30.749111   42204 command_runner.go:130] >         "value": "0"
	I0717 01:20:30.749119   42204 command_runner.go:130] >       },
	I0717 01:20:30.749123   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.749129   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.749133   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.749139   42204 command_runner.go:130] >     },
	I0717 01:20:30.749143   42204 command_runner.go:130] >     {
	I0717 01:20:30.749151   42204 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0717 01:20:30.749157   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.749162   42204 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0717 01:20:30.749167   42204 command_runner.go:130] >       ],
	I0717 01:20:30.749171   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.749180   42204 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0717 01:20:30.749189   42204 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0717 01:20:30.749195   42204 command_runner.go:130] >       ],
	I0717 01:20:30.749199   42204 command_runner.go:130] >       "size": "117609954",
	I0717 01:20:30.749206   42204 command_runner.go:130] >       "uid": {
	I0717 01:20:30.749210   42204 command_runner.go:130] >         "value": "0"
	I0717 01:20:30.749216   42204 command_runner.go:130] >       },
	I0717 01:20:30.749220   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.749225   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.749229   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.749235   42204 command_runner.go:130] >     },
	I0717 01:20:30.749238   42204 command_runner.go:130] >     {
	I0717 01:20:30.749246   42204 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0717 01:20:30.749251   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.749256   42204 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0717 01:20:30.749267   42204 command_runner.go:130] >       ],
	I0717 01:20:30.749273   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.749289   42204 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0717 01:20:30.749299   42204 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0717 01:20:30.749302   42204 command_runner.go:130] >       ],
	I0717 01:20:30.749306   42204 command_runner.go:130] >       "size": "112194888",
	I0717 01:20:30.749312   42204 command_runner.go:130] >       "uid": {
	I0717 01:20:30.749316   42204 command_runner.go:130] >         "value": "0"
	I0717 01:20:30.749322   42204 command_runner.go:130] >       },
	I0717 01:20:30.749326   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.749332   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.749336   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.749342   42204 command_runner.go:130] >     },
	I0717 01:20:30.749345   42204 command_runner.go:130] >     {
	I0717 01:20:30.749356   42204 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0717 01:20:30.749360   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.749368   42204 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0717 01:20:30.749371   42204 command_runner.go:130] >       ],
	I0717 01:20:30.749377   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.749385   42204 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0717 01:20:30.749396   42204 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0717 01:20:30.749402   42204 command_runner.go:130] >       ],
	I0717 01:20:30.749406   42204 command_runner.go:130] >       "size": "85953433",
	I0717 01:20:30.749412   42204 command_runner.go:130] >       "uid": null,
	I0717 01:20:30.749416   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.749422   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.749426   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.749432   42204 command_runner.go:130] >     },
	I0717 01:20:30.749435   42204 command_runner.go:130] >     {
	I0717 01:20:30.749442   42204 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0717 01:20:30.749448   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.749453   42204 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0717 01:20:30.749458   42204 command_runner.go:130] >       ],
	I0717 01:20:30.749462   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.749472   42204 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0717 01:20:30.749481   42204 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0717 01:20:30.749487   42204 command_runner.go:130] >       ],
	I0717 01:20:30.749491   42204 command_runner.go:130] >       "size": "63051080",
	I0717 01:20:30.749497   42204 command_runner.go:130] >       "uid": {
	I0717 01:20:30.749501   42204 command_runner.go:130] >         "value": "0"
	I0717 01:20:30.749506   42204 command_runner.go:130] >       },
	I0717 01:20:30.749510   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.749515   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.749520   42204 command_runner.go:130] >       "pinned": false
	I0717 01:20:30.749526   42204 command_runner.go:130] >     },
	I0717 01:20:30.749530   42204 command_runner.go:130] >     {
	I0717 01:20:30.749538   42204 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0717 01:20:30.749544   42204 command_runner.go:130] >       "repoTags": [
	I0717 01:20:30.749549   42204 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0717 01:20:30.749554   42204 command_runner.go:130] >       ],
	I0717 01:20:30.749559   42204 command_runner.go:130] >       "repoDigests": [
	I0717 01:20:30.749568   42204 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0717 01:20:30.749576   42204 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0717 01:20:30.749582   42204 command_runner.go:130] >       ],
	I0717 01:20:30.749586   42204 command_runner.go:130] >       "size": "750414",
	I0717 01:20:30.749592   42204 command_runner.go:130] >       "uid": {
	I0717 01:20:30.749596   42204 command_runner.go:130] >         "value": "65535"
	I0717 01:20:30.749602   42204 command_runner.go:130] >       },
	I0717 01:20:30.749606   42204 command_runner.go:130] >       "username": "",
	I0717 01:20:30.749612   42204 command_runner.go:130] >       "spec": null,
	I0717 01:20:30.749616   42204 command_runner.go:130] >       "pinned": true
	I0717 01:20:30.749622   42204 command_runner.go:130] >     }
	I0717 01:20:30.749625   42204 command_runner.go:130] >   ]
	I0717 01:20:30.749630   42204 command_runner.go:130] > }
	I0717 01:20:30.749755   42204 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:20:30.749772   42204 cache_images.go:84] Images are preloaded, skipping loading
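	(Context for the check above: the preload decision amounts to parsing the runtime's image list, which `crictl images -o json` emits in the same JSON shape as the dump shown, and confirming every image needed for the target Kubernetes version is present. A minimal Go sketch under that assumption follows; the file name, helper structure, and required-image list are illustrative only, not minikube's actual cache_images.go code.)

	// imagecheck.go: illustrative only — parses a crictl-style "images -o json"
	// dump (same shape as the listing above) and reports which required images
	// are missing. The required list below is an example, not the real set.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type imageList struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		raw, err := os.ReadFile("images.json") // e.g. saved output of: crictl images -o json
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var list imageList
		if err := json.Unmarshal(raw, &list); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		// Illustrative required set; the real list depends on the Kubernetes version.
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.30.2",
			"registry.k8s.io/kube-controller-manager:v1.30.2",
			"registry.k8s.io/kube-proxy:v1.30.2",
			"registry.k8s.io/kube-scheduler:v1.30.2",
			"registry.k8s.io/pause:3.9",
		}
		for _, want := range required {
			if !have[want] {
				fmt.Println("missing:", want)
			}
		}
	}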
	I0717 01:20:30.749780   42204 kubeadm.go:934] updating node { 192.168.39.81 8443 v1.30.2 crio true true} ...
	I0717 01:20:30.749892   42204 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-025900 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-025900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
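	(For reference, the kubelet unit override logged above is produced by substituting the node's runtime, version, name, and IP into a template. The Go sketch below mimics that substitution with text/template; the struct fields and template text are assumptions for illustration, not minikube's actual kubeadm.go template. The concrete values are taken from the log lines above.)

	// kubelet_unit.go: illustrative rendering of a kubelet systemd override
	// like the one logged above. Names and layout are assumptions.
	package main

	import (
		"os"
		"text/template"
	)

	const unitTmpl = `[Unit]
	Wants={{.ContainerRuntime}}.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		data := struct {
			ContainerRuntime  string
			KubernetesVersion string
			NodeName          string
			NodeIP            string
		}{
			ContainerRuntime:  "crio",
			KubernetesVersion: "v1.30.2",
			NodeName:          "multinode-025900",
			NodeIP:            "192.168.39.81",
		}
		t := template.Must(template.New("kubelet").Parse(unitTmpl))
		if err := t.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}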
	I0717 01:20:30.749959   42204 ssh_runner.go:195] Run: crio config
	I0717 01:20:30.782286   42204 command_runner.go:130] ! time="2024-07-17 01:20:30.766850106Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0717 01:20:30.788430   42204 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0717 01:20:30.798705   42204 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0717 01:20:30.798730   42204 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0717 01:20:30.798742   42204 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0717 01:20:30.798747   42204 command_runner.go:130] > #
	I0717 01:20:30.798754   42204 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0717 01:20:30.798760   42204 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0717 01:20:30.798766   42204 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0717 01:20:30.798773   42204 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0717 01:20:30.798777   42204 command_runner.go:130] > # reload'.
	I0717 01:20:30.798783   42204 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0717 01:20:30.798789   42204 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0717 01:20:30.798798   42204 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0717 01:20:30.798803   42204 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0717 01:20:30.798806   42204 command_runner.go:130] > [crio]
	I0717 01:20:30.798815   42204 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0717 01:20:30.798822   42204 command_runner.go:130] > # containers images, in this directory.
	I0717 01:20:30.798827   42204 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0717 01:20:30.798835   42204 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0717 01:20:30.798839   42204 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0717 01:20:30.798847   42204 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0717 01:20:30.798851   42204 command_runner.go:130] > # imagestore = ""
	I0717 01:20:30.798858   42204 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0717 01:20:30.798864   42204 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0717 01:20:30.798871   42204 command_runner.go:130] > storage_driver = "overlay"
	I0717 01:20:30.798876   42204 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0717 01:20:30.798884   42204 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0717 01:20:30.798898   42204 command_runner.go:130] > storage_option = [
	I0717 01:20:30.798904   42204 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0717 01:20:30.798907   42204 command_runner.go:130] > ]
	I0717 01:20:30.798916   42204 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0717 01:20:30.798922   42204 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0717 01:20:30.798928   42204 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0717 01:20:30.798933   42204 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0717 01:20:30.798941   42204 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0717 01:20:30.798948   42204 command_runner.go:130] > # always happen on a node reboot
	I0717 01:20:30.798952   42204 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0717 01:20:30.798963   42204 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0717 01:20:30.798970   42204 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0717 01:20:30.798975   42204 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0717 01:20:30.798981   42204 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0717 01:20:30.798988   42204 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0717 01:20:30.799001   42204 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0717 01:20:30.799008   42204 command_runner.go:130] > # internal_wipe = true
	I0717 01:20:30.799015   42204 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0717 01:20:30.799022   42204 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0717 01:20:30.799027   42204 command_runner.go:130] > # internal_repair = false
	I0717 01:20:30.799034   42204 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0717 01:20:30.799045   42204 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0717 01:20:30.799053   42204 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0717 01:20:30.799058   42204 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0717 01:20:30.799065   42204 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0717 01:20:30.799069   42204 command_runner.go:130] > [crio.api]
	I0717 01:20:30.799075   42204 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0717 01:20:30.799084   42204 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0717 01:20:30.799091   42204 command_runner.go:130] > # IP address on which the stream server will listen.
	I0717 01:20:30.799095   42204 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0717 01:20:30.799104   42204 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0717 01:20:30.799111   42204 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0717 01:20:30.799117   42204 command_runner.go:130] > # stream_port = "0"
	I0717 01:20:30.799122   42204 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0717 01:20:30.799128   42204 command_runner.go:130] > # stream_enable_tls = false
	I0717 01:20:30.799134   42204 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0717 01:20:30.799140   42204 command_runner.go:130] > # stream_idle_timeout = ""
	I0717 01:20:30.799149   42204 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0717 01:20:30.799157   42204 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0717 01:20:30.799161   42204 command_runner.go:130] > # minutes.
	I0717 01:20:30.799165   42204 command_runner.go:130] > # stream_tls_cert = ""
	I0717 01:20:30.799171   42204 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0717 01:20:30.799179   42204 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0717 01:20:30.799186   42204 command_runner.go:130] > # stream_tls_key = ""
	I0717 01:20:30.799191   42204 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0717 01:20:30.799199   42204 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0717 01:20:30.799213   42204 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0717 01:20:30.799220   42204 command_runner.go:130] > # stream_tls_ca = ""
	I0717 01:20:30.799227   42204 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0717 01:20:30.799233   42204 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0717 01:20:30.799240   42204 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0717 01:20:30.799247   42204 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0717 01:20:30.799253   42204 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0717 01:20:30.799260   42204 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0717 01:20:30.799264   42204 command_runner.go:130] > [crio.runtime]
	I0717 01:20:30.799272   42204 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0717 01:20:30.799279   42204 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0717 01:20:30.799283   42204 command_runner.go:130] > # "nofile=1024:2048"
	I0717 01:20:30.799289   42204 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0717 01:20:30.799295   42204 command_runner.go:130] > # default_ulimits = [
	I0717 01:20:30.799299   42204 command_runner.go:130] > # ]
	I0717 01:20:30.799307   42204 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0717 01:20:30.799313   42204 command_runner.go:130] > # no_pivot = false
	I0717 01:20:30.799318   42204 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0717 01:20:30.799324   42204 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0717 01:20:30.799330   42204 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0717 01:20:30.799336   42204 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0717 01:20:30.799343   42204 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0717 01:20:30.799350   42204 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 01:20:30.799357   42204 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0717 01:20:30.799361   42204 command_runner.go:130] > # Cgroup setting for conmon
	I0717 01:20:30.799370   42204 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0717 01:20:30.799377   42204 command_runner.go:130] > conmon_cgroup = "pod"
	I0717 01:20:30.799382   42204 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0717 01:20:30.799389   42204 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0717 01:20:30.799397   42204 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 01:20:30.799403   42204 command_runner.go:130] > conmon_env = [
	I0717 01:20:30.799408   42204 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0717 01:20:30.799411   42204 command_runner.go:130] > ]
	I0717 01:20:30.799418   42204 command_runner.go:130] > # Additional environment variables to set for all the
	I0717 01:20:30.799423   42204 command_runner.go:130] > # containers. These are overridden if set in the
	I0717 01:20:30.799431   42204 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0717 01:20:30.799435   42204 command_runner.go:130] > # default_env = [
	I0717 01:20:30.799440   42204 command_runner.go:130] > # ]
	I0717 01:20:30.799445   42204 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0717 01:20:30.799454   42204 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0717 01:20:30.799460   42204 command_runner.go:130] > # selinux = false
	I0717 01:20:30.799466   42204 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0717 01:20:30.799473   42204 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0717 01:20:30.799481   42204 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0717 01:20:30.799485   42204 command_runner.go:130] > # seccomp_profile = ""
	I0717 01:20:30.799491   42204 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0717 01:20:30.799496   42204 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0717 01:20:30.799504   42204 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0717 01:20:30.799509   42204 command_runner.go:130] > # which might increase security.
	I0717 01:20:30.799517   42204 command_runner.go:130] > # This option is currently deprecated,
	I0717 01:20:30.799525   42204 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0717 01:20:30.799531   42204 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0717 01:20:30.799537   42204 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0717 01:20:30.799546   42204 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0717 01:20:30.799554   42204 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0717 01:20:30.799563   42204 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0717 01:20:30.799570   42204 command_runner.go:130] > # This option supports live configuration reload.
	I0717 01:20:30.799575   42204 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0717 01:20:30.799582   42204 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0717 01:20:30.799589   42204 command_runner.go:130] > # the cgroup blockio controller.
	I0717 01:20:30.799593   42204 command_runner.go:130] > # blockio_config_file = ""
	I0717 01:20:30.799601   42204 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0717 01:20:30.799606   42204 command_runner.go:130] > # blockio parameters.
	I0717 01:20:30.799610   42204 command_runner.go:130] > # blockio_reload = false
	I0717 01:20:30.799618   42204 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0717 01:20:30.799624   42204 command_runner.go:130] > # irqbalance daemon.
	I0717 01:20:30.799629   42204 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0717 01:20:30.799642   42204 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0717 01:20:30.799651   42204 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0717 01:20:30.799657   42204 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0717 01:20:30.799665   42204 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0717 01:20:30.799672   42204 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0717 01:20:30.799679   42204 command_runner.go:130] > # This option supports live configuration reload.
	I0717 01:20:30.799684   42204 command_runner.go:130] > # rdt_config_file = ""
	I0717 01:20:30.799691   42204 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0717 01:20:30.799695   42204 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0717 01:20:30.799716   42204 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0717 01:20:30.799724   42204 command_runner.go:130] > # separate_pull_cgroup = ""
	I0717 01:20:30.799730   42204 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0717 01:20:30.799736   42204 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0717 01:20:30.799741   42204 command_runner.go:130] > # will be added.
	I0717 01:20:30.799747   42204 command_runner.go:130] > # default_capabilities = [
	I0717 01:20:30.799753   42204 command_runner.go:130] > # 	"CHOWN",
	I0717 01:20:30.799757   42204 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0717 01:20:30.799763   42204 command_runner.go:130] > # 	"FSETID",
	I0717 01:20:30.799767   42204 command_runner.go:130] > # 	"FOWNER",
	I0717 01:20:30.799773   42204 command_runner.go:130] > # 	"SETGID",
	I0717 01:20:30.799777   42204 command_runner.go:130] > # 	"SETUID",
	I0717 01:20:30.799784   42204 command_runner.go:130] > # 	"SETPCAP",
	I0717 01:20:30.799788   42204 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0717 01:20:30.799793   42204 command_runner.go:130] > # 	"KILL",
	I0717 01:20:30.799797   42204 command_runner.go:130] > # ]
	I0717 01:20:30.799806   42204 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0717 01:20:30.799814   42204 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0717 01:20:30.799819   42204 command_runner.go:130] > # add_inheritable_capabilities = false
	I0717 01:20:30.799824   42204 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0717 01:20:30.799832   42204 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 01:20:30.799838   42204 command_runner.go:130] > default_sysctls = [
	I0717 01:20:30.799843   42204 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0717 01:20:30.799848   42204 command_runner.go:130] > ]
	I0717 01:20:30.799853   42204 command_runner.go:130] > # List of devices on the host that a
	I0717 01:20:30.799861   42204 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0717 01:20:30.799866   42204 command_runner.go:130] > # allowed_devices = [
	I0717 01:20:30.799870   42204 command_runner.go:130] > # 	"/dev/fuse",
	I0717 01:20:30.799875   42204 command_runner.go:130] > # ]
	I0717 01:20:30.799880   42204 command_runner.go:130] > # List of additional devices. specified as
	I0717 01:20:30.799889   42204 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0717 01:20:30.799895   42204 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0717 01:20:30.799903   42204 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 01:20:30.799909   42204 command_runner.go:130] > # additional_devices = [
	I0717 01:20:30.799912   42204 command_runner.go:130] > # ]
	I0717 01:20:30.799919   42204 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0717 01:20:30.799923   42204 command_runner.go:130] > # cdi_spec_dirs = [
	I0717 01:20:30.799928   42204 command_runner.go:130] > # 	"/etc/cdi",
	I0717 01:20:30.799932   42204 command_runner.go:130] > # 	"/var/run/cdi",
	I0717 01:20:30.799937   42204 command_runner.go:130] > # ]
	I0717 01:20:30.799943   42204 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0717 01:20:30.799951   42204 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0717 01:20:30.799958   42204 command_runner.go:130] > # Defaults to false.
	I0717 01:20:30.799963   42204 command_runner.go:130] > # device_ownership_from_security_context = false
	I0717 01:20:30.799971   42204 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0717 01:20:30.799979   42204 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0717 01:20:30.799984   42204 command_runner.go:130] > # hooks_dir = [
	I0717 01:20:30.799990   42204 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0717 01:20:30.799998   42204 command_runner.go:130] > # ]
	I0717 01:20:30.800005   42204 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0717 01:20:30.800011   42204 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0717 01:20:30.800019   42204 command_runner.go:130] > # its default mounts from the following two files:
	I0717 01:20:30.800022   42204 command_runner.go:130] > #
	I0717 01:20:30.800027   42204 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0717 01:20:30.800034   42204 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0717 01:20:30.800041   42204 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0717 01:20:30.800044   42204 command_runner.go:130] > #
	I0717 01:20:30.800050   42204 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0717 01:20:30.800058   42204 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0717 01:20:30.800064   42204 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0717 01:20:30.800071   42204 command_runner.go:130] > #      only add mounts it finds in this file.
	I0717 01:20:30.800074   42204 command_runner.go:130] > #
	I0717 01:20:30.800078   42204 command_runner.go:130] > # default_mounts_file = ""
	I0717 01:20:30.800085   42204 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0717 01:20:30.800091   42204 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0717 01:20:30.800097   42204 command_runner.go:130] > pids_limit = 1024
	I0717 01:20:30.800103   42204 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0717 01:20:30.800110   42204 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0717 01:20:30.800118   42204 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0717 01:20:30.800127   42204 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0717 01:20:30.800132   42204 command_runner.go:130] > # log_size_max = -1
	I0717 01:20:30.800139   42204 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0717 01:20:30.800148   42204 command_runner.go:130] > # log_to_journald = false
	I0717 01:20:30.800156   42204 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0717 01:20:30.800161   42204 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0717 01:20:30.800168   42204 command_runner.go:130] > # Path to directory for container attach sockets.
	I0717 01:20:30.800173   42204 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0717 01:20:30.800180   42204 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0717 01:20:30.800186   42204 command_runner.go:130] > # bind_mount_prefix = ""
	I0717 01:20:30.800191   42204 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0717 01:20:30.800197   42204 command_runner.go:130] > # read_only = false
	I0717 01:20:30.800203   42204 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0717 01:20:30.800211   42204 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0717 01:20:30.800218   42204 command_runner.go:130] > # live configuration reload.
	I0717 01:20:30.800222   42204 command_runner.go:130] > # log_level = "info"
	I0717 01:20:30.800229   42204 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0717 01:20:30.800234   42204 command_runner.go:130] > # This option supports live configuration reload.
	I0717 01:20:30.800240   42204 command_runner.go:130] > # log_filter = ""
	I0717 01:20:30.800245   42204 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0717 01:20:30.800254   42204 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0717 01:20:30.800260   42204 command_runner.go:130] > # separated by comma.
	I0717 01:20:30.800268   42204 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 01:20:30.800274   42204 command_runner.go:130] > # uid_mappings = ""
	I0717 01:20:30.800280   42204 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0717 01:20:30.800287   42204 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0717 01:20:30.800293   42204 command_runner.go:130] > # separated by comma.
	I0717 01:20:30.800300   42204 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 01:20:30.800306   42204 command_runner.go:130] > # gid_mappings = ""
	I0717 01:20:30.800312   42204 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0717 01:20:30.800320   42204 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 01:20:30.800326   42204 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 01:20:30.800333   42204 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 01:20:30.800339   42204 command_runner.go:130] > # minimum_mappable_uid = -1
	I0717 01:20:30.800345   42204 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0717 01:20:30.800353   42204 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 01:20:30.800362   42204 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 01:20:30.800371   42204 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 01:20:30.800379   42204 command_runner.go:130] > # minimum_mappable_gid = -1
	I0717 01:20:30.800387   42204 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0717 01:20:30.800393   42204 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0717 01:20:30.800401   42204 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0717 01:20:30.800407   42204 command_runner.go:130] > # ctr_stop_timeout = 30
	I0717 01:20:30.800412   42204 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0717 01:20:30.800418   42204 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0717 01:20:30.800424   42204 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0717 01:20:30.800429   42204 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0717 01:20:30.800435   42204 command_runner.go:130] > drop_infra_ctr = false
	I0717 01:20:30.800440   42204 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0717 01:20:30.800448   42204 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0717 01:20:30.800456   42204 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0717 01:20:30.800462   42204 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0717 01:20:30.800469   42204 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0717 01:20:30.800477   42204 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0717 01:20:30.800484   42204 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0717 01:20:30.800492   42204 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0717 01:20:30.800495   42204 command_runner.go:130] > # shared_cpuset = ""
	I0717 01:20:30.800501   42204 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0717 01:20:30.800508   42204 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0717 01:20:30.800512   42204 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0717 01:20:30.800521   42204 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0717 01:20:30.800525   42204 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0717 01:20:30.800532   42204 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0717 01:20:30.800538   42204 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0717 01:20:30.800545   42204 command_runner.go:130] > # enable_criu_support = false
	I0717 01:20:30.800550   42204 command_runner.go:130] > # Enable/disable the generation of the container,
	I0717 01:20:30.800558   42204 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0717 01:20:30.800562   42204 command_runner.go:130] > # enable_pod_events = false
	I0717 01:20:30.800568   42204 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0717 01:20:30.800574   42204 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0717 01:20:30.800581   42204 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0717 01:20:30.800585   42204 command_runner.go:130] > # default_runtime = "runc"
	I0717 01:20:30.800590   42204 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0717 01:20:30.800598   42204 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0717 01:20:30.800608   42204 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0717 01:20:30.800622   42204 command_runner.go:130] > # creation as a file is not desired either.
	I0717 01:20:30.800631   42204 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0717 01:20:30.800638   42204 command_runner.go:130] > # the hostname is being managed dynamically.
	I0717 01:20:30.800643   42204 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0717 01:20:30.800648   42204 command_runner.go:130] > # ]
	I0717 01:20:30.800654   42204 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0717 01:20:30.800660   42204 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0717 01:20:30.800667   42204 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0717 01:20:30.800673   42204 command_runner.go:130] > # Each entry in the table should follow the format:
	I0717 01:20:30.800678   42204 command_runner.go:130] > #
	I0717 01:20:30.800682   42204 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0717 01:20:30.800689   42204 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0717 01:20:30.800707   42204 command_runner.go:130] > # runtime_type = "oci"
	I0717 01:20:30.800713   42204 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0717 01:20:30.800718   42204 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0717 01:20:30.800724   42204 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0717 01:20:30.800729   42204 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0717 01:20:30.800733   42204 command_runner.go:130] > # monitor_env = []
	I0717 01:20:30.800737   42204 command_runner.go:130] > # privileged_without_host_devices = false
	I0717 01:20:30.800741   42204 command_runner.go:130] > # allowed_annotations = []
	I0717 01:20:30.800748   42204 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0717 01:20:30.800753   42204 command_runner.go:130] > # Where:
	I0717 01:20:30.800758   42204 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0717 01:20:30.800766   42204 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0717 01:20:30.800772   42204 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0717 01:20:30.800779   42204 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0717 01:20:30.800783   42204 command_runner.go:130] > #   in $PATH.
	I0717 01:20:30.800789   42204 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0717 01:20:30.800796   42204 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0717 01:20:30.800802   42204 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0717 01:20:30.800807   42204 command_runner.go:130] > #   state.
	I0717 01:20:30.800813   42204 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0717 01:20:30.800821   42204 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0717 01:20:30.800827   42204 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0717 01:20:30.800835   42204 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0717 01:20:30.800840   42204 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0717 01:20:30.800849   42204 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0717 01:20:30.800855   42204 command_runner.go:130] > #   The currently recognized values are:
	I0717 01:20:30.800864   42204 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0717 01:20:30.800870   42204 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0717 01:20:30.800878   42204 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0717 01:20:30.800884   42204 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0717 01:20:30.800893   42204 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0717 01:20:30.800901   42204 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0717 01:20:30.800908   42204 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0717 01:20:30.800916   42204 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0717 01:20:30.800922   42204 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0717 01:20:30.800931   42204 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0717 01:20:30.800935   42204 command_runner.go:130] > #   deprecated option "conmon".
	I0717 01:20:30.800942   42204 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0717 01:20:30.800949   42204 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0717 01:20:30.800956   42204 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0717 01:20:30.800962   42204 command_runner.go:130] > #   should be moved to the container's cgroup
	I0717 01:20:30.800969   42204 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0717 01:20:30.800975   42204 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0717 01:20:30.800982   42204 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0717 01:20:30.800989   42204 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0717 01:20:30.800992   42204 command_runner.go:130] > #
	I0717 01:20:30.800999   42204 command_runner.go:130] > # Using the seccomp notifier feature:
	I0717 01:20:30.801002   42204 command_runner.go:130] > #
	I0717 01:20:30.801007   42204 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0717 01:20:30.801014   42204 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0717 01:20:30.801017   42204 command_runner.go:130] > #
	I0717 01:20:30.801023   42204 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0717 01:20:30.801030   42204 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0717 01:20:30.801033   42204 command_runner.go:130] > #
	I0717 01:20:30.801039   42204 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0717 01:20:30.801043   42204 command_runner.go:130] > # feature.
	I0717 01:20:30.801046   42204 command_runner.go:130] > #
	I0717 01:20:30.801054   42204 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0717 01:20:30.801060   42204 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0717 01:20:30.801066   42204 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0717 01:20:30.801074   42204 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0717 01:20:30.801081   42204 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0717 01:20:30.801084   42204 command_runner.go:130] > #
	I0717 01:20:30.801090   42204 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0717 01:20:30.801097   42204 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0717 01:20:30.801100   42204 command_runner.go:130] > #
	I0717 01:20:30.801106   42204 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0717 01:20:30.801112   42204 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0717 01:20:30.801115   42204 command_runner.go:130] > #
	I0717 01:20:30.801121   42204 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0717 01:20:30.801130   42204 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0717 01:20:30.801134   42204 command_runner.go:130] > # limitation.
	I0717 01:20:30.801141   42204 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0717 01:20:30.801145   42204 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0717 01:20:30.801152   42204 command_runner.go:130] > runtime_type = "oci"
	I0717 01:20:30.801156   42204 command_runner.go:130] > runtime_root = "/run/runc"
	I0717 01:20:30.801160   42204 command_runner.go:130] > runtime_config_path = ""
	I0717 01:20:30.801167   42204 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0717 01:20:30.801171   42204 command_runner.go:130] > monitor_cgroup = "pod"
	I0717 01:20:30.801174   42204 command_runner.go:130] > monitor_exec_cgroup = ""
	I0717 01:20:30.801178   42204 command_runner.go:130] > monitor_env = [
	I0717 01:20:30.801184   42204 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0717 01:20:30.801189   42204 command_runner.go:130] > ]
	I0717 01:20:30.801194   42204 command_runner.go:130] > privileged_without_host_devices = false
	I0717 01:20:30.801200   42204 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0717 01:20:30.801207   42204 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0717 01:20:30.801213   42204 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0717 01:20:30.801222   42204 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0717 01:20:30.801229   42204 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0717 01:20:30.801236   42204 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0717 01:20:30.801245   42204 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0717 01:20:30.801254   42204 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0717 01:20:30.801260   42204 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0717 01:20:30.801266   42204 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0717 01:20:30.801269   42204 command_runner.go:130] > # Example:
	I0717 01:20:30.801273   42204 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0717 01:20:30.801278   42204 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0717 01:20:30.801282   42204 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0717 01:20:30.801289   42204 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0717 01:20:30.801292   42204 command_runner.go:130] > # cpuset = 0
	I0717 01:20:30.801296   42204 command_runner.go:130] > # cpushares = "0-1"
	I0717 01:20:30.801299   42204 command_runner.go:130] > # Where:
	I0717 01:20:30.801303   42204 command_runner.go:130] > # The workload name is workload-type.
	I0717 01:20:30.801309   42204 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0717 01:20:30.801314   42204 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0717 01:20:30.801319   42204 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0717 01:20:30.801329   42204 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0717 01:20:30.801335   42204 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0717 01:20:30.801340   42204 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0717 01:20:30.801348   42204 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0717 01:20:30.801353   42204 command_runner.go:130] > # Default value is set to true
	I0717 01:20:30.801359   42204 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0717 01:20:30.801365   42204 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0717 01:20:30.801370   42204 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0717 01:20:30.801374   42204 command_runner.go:130] > # Default value is set to 'false'
	I0717 01:20:30.801379   42204 command_runner.go:130] > # disable_hostport_mapping = false
	I0717 01:20:30.801384   42204 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0717 01:20:30.801389   42204 command_runner.go:130] > #
	I0717 01:20:30.801395   42204 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0717 01:20:30.801403   42204 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0717 01:20:30.801409   42204 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0717 01:20:30.801417   42204 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0717 01:20:30.801422   42204 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0717 01:20:30.801428   42204 command_runner.go:130] > [crio.image]
	I0717 01:20:30.801434   42204 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0717 01:20:30.801440   42204 command_runner.go:130] > # default_transport = "docker://"
	I0717 01:20:30.801446   42204 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0717 01:20:30.801454   42204 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0717 01:20:30.801461   42204 command_runner.go:130] > # global_auth_file = ""
	I0717 01:20:30.801466   42204 command_runner.go:130] > # The image used to instantiate infra containers.
	I0717 01:20:30.801473   42204 command_runner.go:130] > # This option supports live configuration reload.
	I0717 01:20:30.801477   42204 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0717 01:20:30.801485   42204 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0717 01:20:30.801493   42204 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0717 01:20:30.801499   42204 command_runner.go:130] > # This option supports live configuration reload.
	I0717 01:20:30.801505   42204 command_runner.go:130] > # pause_image_auth_file = ""
	I0717 01:20:30.801513   42204 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0717 01:20:30.801520   42204 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0717 01:20:30.801526   42204 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0717 01:20:30.801533   42204 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0717 01:20:30.801540   42204 command_runner.go:130] > # pause_command = "/pause"
	I0717 01:20:30.801546   42204 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0717 01:20:30.801554   42204 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0717 01:20:30.801562   42204 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0717 01:20:30.801571   42204 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0717 01:20:30.801579   42204 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0717 01:20:30.801585   42204 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0717 01:20:30.801591   42204 command_runner.go:130] > # pinned_images = [
	I0717 01:20:30.801594   42204 command_runner.go:130] > # ]
	I0717 01:20:30.801602   42204 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0717 01:20:30.801609   42204 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0717 01:20:30.801617   42204 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0717 01:20:30.801626   42204 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0717 01:20:30.801633   42204 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0717 01:20:30.801637   42204 command_runner.go:130] > # signature_policy = ""
	I0717 01:20:30.801644   42204 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0717 01:20:30.801651   42204 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0717 01:20:30.801659   42204 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0717 01:20:30.801665   42204 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0717 01:20:30.801671   42204 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0717 01:20:30.801677   42204 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0717 01:20:30.801683   42204 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0717 01:20:30.801691   42204 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0717 01:20:30.801696   42204 command_runner.go:130] > # changing them here.
	I0717 01:20:30.801700   42204 command_runner.go:130] > # insecure_registries = [
	I0717 01:20:30.801705   42204 command_runner.go:130] > # ]
	I0717 01:20:30.801710   42204 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0717 01:20:30.801717   42204 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0717 01:20:30.801721   42204 command_runner.go:130] > # image_volumes = "mkdir"
	I0717 01:20:30.801728   42204 command_runner.go:130] > # Temporary directory to use for storing big files
	I0717 01:20:30.801732   42204 command_runner.go:130] > # big_files_temporary_dir = ""
	I0717 01:20:30.801743   42204 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0717 01:20:30.801749   42204 command_runner.go:130] > # CNI plugins.
	I0717 01:20:30.801753   42204 command_runner.go:130] > [crio.network]
	I0717 01:20:30.801759   42204 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0717 01:20:30.801767   42204 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0717 01:20:30.801773   42204 command_runner.go:130] > # cni_default_network = ""
	I0717 01:20:30.801779   42204 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0717 01:20:30.801785   42204 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0717 01:20:30.801791   42204 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0717 01:20:30.801796   42204 command_runner.go:130] > # plugin_dirs = [
	I0717 01:20:30.801800   42204 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0717 01:20:30.801805   42204 command_runner.go:130] > # ]
	I0717 01:20:30.801811   42204 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0717 01:20:30.801817   42204 command_runner.go:130] > [crio.metrics]
	I0717 01:20:30.801822   42204 command_runner.go:130] > # Globally enable or disable metrics support.
	I0717 01:20:30.801828   42204 command_runner.go:130] > enable_metrics = true
	I0717 01:20:30.801832   42204 command_runner.go:130] > # Specify enabled metrics collectors.
	I0717 01:20:30.801838   42204 command_runner.go:130] > # By default, all metrics are enabled.
	I0717 01:20:30.801844   42204 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0717 01:20:30.801852   42204 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0717 01:20:30.801860   42204 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0717 01:20:30.801864   42204 command_runner.go:130] > # metrics_collectors = [
	I0717 01:20:30.801869   42204 command_runner.go:130] > # 	"operations",
	I0717 01:20:30.801874   42204 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0717 01:20:30.801880   42204 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0717 01:20:30.801884   42204 command_runner.go:130] > # 	"operations_errors",
	I0717 01:20:30.801890   42204 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0717 01:20:30.801894   42204 command_runner.go:130] > # 	"image_pulls_by_name",
	I0717 01:20:30.801901   42204 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0717 01:20:30.801905   42204 command_runner.go:130] > # 	"image_pulls_failures",
	I0717 01:20:30.801912   42204 command_runner.go:130] > # 	"image_pulls_successes",
	I0717 01:20:30.801915   42204 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0717 01:20:30.801919   42204 command_runner.go:130] > # 	"image_layer_reuse",
	I0717 01:20:30.801926   42204 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0717 01:20:30.801930   42204 command_runner.go:130] > # 	"containers_oom_total",
	I0717 01:20:30.801935   42204 command_runner.go:130] > # 	"containers_oom",
	I0717 01:20:30.801939   42204 command_runner.go:130] > # 	"processes_defunct",
	I0717 01:20:30.801944   42204 command_runner.go:130] > # 	"operations_total",
	I0717 01:20:30.801949   42204 command_runner.go:130] > # 	"operations_latency_seconds",
	I0717 01:20:30.801955   42204 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0717 01:20:30.801960   42204 command_runner.go:130] > # 	"operations_errors_total",
	I0717 01:20:30.801966   42204 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0717 01:20:30.801971   42204 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0717 01:20:30.801977   42204 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0717 01:20:30.801981   42204 command_runner.go:130] > # 	"image_pulls_success_total",
	I0717 01:20:30.801990   42204 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0717 01:20:30.801998   42204 command_runner.go:130] > # 	"containers_oom_count_total",
	I0717 01:20:30.802003   42204 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0717 01:20:30.802007   42204 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0717 01:20:30.802010   42204 command_runner.go:130] > # ]
	I0717 01:20:30.802015   42204 command_runner.go:130] > # The port on which the metrics server will listen.
	I0717 01:20:30.802021   42204 command_runner.go:130] > # metrics_port = 9090
	I0717 01:20:30.802026   42204 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0717 01:20:30.802029   42204 command_runner.go:130] > # metrics_socket = ""
	I0717 01:20:30.802034   42204 command_runner.go:130] > # The certificate for the secure metrics server.
	I0717 01:20:30.802041   42204 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0717 01:20:30.802046   42204 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0717 01:20:30.802053   42204 command_runner.go:130] > # certificate on any modification event.
	I0717 01:20:30.802057   42204 command_runner.go:130] > # metrics_cert = ""
	I0717 01:20:30.802063   42204 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0717 01:20:30.802068   42204 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0717 01:20:30.802073   42204 command_runner.go:130] > # metrics_key = ""
	I0717 01:20:30.802079   42204 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0717 01:20:30.802084   42204 command_runner.go:130] > [crio.tracing]
	I0717 01:20:30.802089   42204 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0717 01:20:30.802095   42204 command_runner.go:130] > # enable_tracing = false
	I0717 01:20:30.802100   42204 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0717 01:20:30.802104   42204 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0717 01:20:30.802113   42204 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0717 01:20:30.802118   42204 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0717 01:20:30.802122   42204 command_runner.go:130] > # CRI-O NRI configuration.
	I0717 01:20:30.802128   42204 command_runner.go:130] > [crio.nri]
	I0717 01:20:30.802132   42204 command_runner.go:130] > # Globally enable or disable NRI.
	I0717 01:20:30.802138   42204 command_runner.go:130] > # enable_nri = false
	I0717 01:20:30.802142   42204 command_runner.go:130] > # NRI socket to listen on.
	I0717 01:20:30.802149   42204 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0717 01:20:30.802153   42204 command_runner.go:130] > # NRI plugin directory to use.
	I0717 01:20:30.802158   42204 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0717 01:20:30.802165   42204 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0717 01:20:30.802170   42204 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0717 01:20:30.802178   42204 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0717 01:20:30.802185   42204 command_runner.go:130] > # nri_disable_connections = false
	I0717 01:20:30.802190   42204 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0717 01:20:30.802196   42204 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0717 01:20:30.802201   42204 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0717 01:20:30.802208   42204 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0717 01:20:30.802214   42204 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0717 01:20:30.802220   42204 command_runner.go:130] > [crio.stats]
	I0717 01:20:30.802227   42204 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0717 01:20:30.802235   42204 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0717 01:20:30.802239   42204 command_runner.go:130] > # stats_collection_period = 0
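The config dump above ends with metrics enabled (enable_metrics = true) and the commented default metrics_port = 9090. As a quick way to inspect what that exposes, here is a minimal Go sketch that scrapes the Prometheus endpoint and prints the CRI-O operation counters. It assumes the default plain-HTTP listener on localhost:9090 (no metrics_cert or metrics_socket configured) and uses the metric-name prefix mentioned in the comments above; adjust both if your setup differs.

// metrics_probe.go - minimal sketch: scrape CRI-O's Prometheus endpoint.
// Assumes enable_metrics = true, the default metrics_port = 9090 on
// localhost, and plain HTTP (no metrics_cert configured).
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		log.Fatalf("fetch metrics: %v", err)
	}
	defer resp.Body.Close()

	// Print only the CRI-O operation counters to keep the output short.
	sc := bufio.NewScanner(resp.Body)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "container_runtime_crio_operations") {
			fmt.Println(line)
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatalf("read metrics: %v", err)
	}
}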
	I0717 01:20:30.802331   42204 cni.go:84] Creating CNI manager for ""
	I0717 01:20:30.802341   42204 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0717 01:20:30.802349   42204 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:20:30.802374   42204 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.81 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-025900 NodeName:multinode-025900 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:20:30.802499   42204 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-025900"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.81
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.81"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:20:30.802582   42204 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 01:20:30.813056   42204 command_runner.go:130] > kubeadm
	I0717 01:20:30.813076   42204 command_runner.go:130] > kubectl
	I0717 01:20:30.813082   42204 command_runner.go:130] > kubelet
	I0717 01:20:30.813194   42204 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:20:30.813243   42204 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:20:30.823013   42204 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0717 01:20:30.840213   42204 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:20:30.857351   42204 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0717 01:20:30.874806   42204 ssh_runner.go:195] Run: grep 192.168.39.81	control-plane.minikube.internal$ /etc/hosts
	I0717 01:20:30.878627   42204 command_runner.go:130] > 192.168.39.81	control-plane.minikube.internal
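The grep above verifies that /etc/hosts already maps control-plane.minikube.internal to 192.168.39.81. Below is a minimal Go sketch of the same check, using the IP and hostname taken from the log; it is only an illustration of the check, not minikube's implementation (which, as shown, runs grep over SSH).

// hosts_check.go - sketch: confirm /etc/hosts contains an "IP hostname" entry.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func hasHostEntry(path, ip, host string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// A hosts line is "IP name [aliases...]"; comment lines never match the IP.
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == ip {
			for _, name := range fields[1:] {
				if name == host {
					return true, nil
				}
			}
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := hasHostEntry("/etc/hosts", "192.168.39.81", "control-plane.minikube.internal")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("entry present:", ok)
}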
	I0717 01:20:30.878795   42204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:20:31.020748   42204 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:20:31.036441   42204 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/multinode-025900 for IP: 192.168.39.81
	I0717 01:20:31.036468   42204 certs.go:194] generating shared ca certs ...
	I0717 01:20:31.036489   42204 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:20:31.036655   42204 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:20:31.036695   42204 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:20:31.036707   42204 certs.go:256] generating profile certs ...
	I0717 01:20:31.036797   42204 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/multinode-025900/client.key
	I0717 01:20:31.036861   42204 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/multinode-025900/apiserver.key.8d5dc9e3
	I0717 01:20:31.036894   42204 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/multinode-025900/proxy-client.key
	I0717 01:20:31.036904   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 01:20:31.036917   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 01:20:31.036930   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 01:20:31.036950   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 01:20:31.036962   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/multinode-025900/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 01:20:31.036979   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/multinode-025900/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 01:20:31.036997   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/multinode-025900/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 01:20:31.037014   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/multinode-025900/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 01:20:31.037087   42204 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:20:31.037117   42204 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:20:31.037126   42204 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:20:31.037147   42204 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:20:31.037168   42204 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:20:31.037190   42204 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:20:31.037224   42204 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:20:31.037248   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:20:31.037262   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem -> /usr/share/ca-certificates/11259.pem
	I0717 01:20:31.037273   42204 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> /usr/share/ca-certificates/112592.pem
	I0717 01:20:31.037775   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:20:31.064090   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:20:31.090542   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:20:31.116251   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:20:31.140711   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/multinode-025900/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 01:20:31.164856   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/multinode-025900/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:20:31.188861   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/multinode-025900/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:20:31.213583   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/multinode-025900/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 01:20:31.238396   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:20:31.261949   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:20:31.285860   42204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:20:31.310140   42204 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:20:31.327300   42204 ssh_runner.go:195] Run: openssl version
	I0717 01:20:31.333664   42204 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0717 01:20:31.333780   42204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:20:31.345376   42204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:20:31.349784   42204 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:20:31.349929   42204 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:20:31.349976   42204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:20:31.355819   42204 command_runner.go:130] > b5213941
	I0717 01:20:31.355991   42204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:20:31.365918   42204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:20:31.377802   42204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:20:31.382529   42204 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:20:31.382576   42204 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:20:31.382637   42204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:20:31.388480   42204 command_runner.go:130] > 51391683
	I0717 01:20:31.388766   42204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:20:31.398637   42204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:20:31.409979   42204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:20:31.414470   42204 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:20:31.414539   42204 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:20:31.414605   42204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:20:31.420263   42204 command_runner.go:130] > 3ec20f2e
	I0717 01:20:31.420436   42204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
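The three test -L / ln -fs commands above install each CA under /etc/ssl/certs using the OpenSSL subject-hash naming convention (<hash>.0). A rough Go sketch of one such step follows, shelling out to openssl for the hash since the Go standard library has no equivalent helper. The certificate path is the one from the log, but the symlink here points at the source PEM rather than the /etc/ssl/certs copy minikube links to, so treat it as an assumption-laden illustration rather than minikube's implementation.

// cert_link.go - sketch: compute the OpenSSL subject hash of a certificate
// and link it under /etc/ssl/certs, mirroring `openssl x509 -hash -noout -in`
// followed by `ln -fs`. Needs root to create the symlink.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"

	// openssl prints the subject hash (e.g. b5213941) on stdout.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatalf("openssl: %v", err)
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); err == nil {
		fmt.Println("already linked:", link)
		return
	}
	if err := os.Symlink(cert, link); err != nil {
		log.Fatalf("symlink: %v", err)
	}
	fmt.Println("created:", link, "->", cert)
}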
	I0717 01:20:31.430096   42204 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:20:31.434880   42204 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:20:31.434901   42204 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0717 01:20:31.434908   42204 command_runner.go:130] > Device: 253,1	Inode: 8386581     Links: 1
	I0717 01:20:31.434914   42204 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 01:20:31.434923   42204 command_runner.go:130] > Access: 2024-07-17 01:13:30.823315093 +0000
	I0717 01:20:31.434931   42204 command_runner.go:130] > Modify: 2024-07-17 01:13:30.823315093 +0000
	I0717 01:20:31.434938   42204 command_runner.go:130] > Change: 2024-07-17 01:13:30.823315093 +0000
	I0717 01:20:31.434950   42204 command_runner.go:130] >  Birth: 2024-07-17 01:13:30.823315093 +0000
	I0717 01:20:31.435016   42204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:20:31.440658   42204 command_runner.go:130] > Certificate will not expire
	I0717 01:20:31.440723   42204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:20:31.446130   42204 command_runner.go:130] > Certificate will not expire
	I0717 01:20:31.446335   42204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:20:31.452921   42204 command_runner.go:130] > Certificate will not expire
	I0717 01:20:31.453068   42204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:20:31.458989   42204 command_runner.go:130] > Certificate will not expire
	I0717 01:20:31.459055   42204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:20:31.464853   42204 command_runner.go:130] > Certificate will not expire
	I0717 01:20:31.464897   42204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 01:20:31.470386   42204 command_runner.go:130] > Certificate will not expire
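Each `-checkend 86400` run above asks openssl whether the certificate expires within the next 24 hours. The same check can be done natively with crypto/x509; here is a minimal sketch using one of the certificate paths from the log (any PEM certificate works).

// cert_expiry.go - sketch of the `-checkend 86400` check: report whether a
// certificate's NotAfter falls within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	deadline := time.Now().Add(24 * time.Hour)
	if cert.NotAfter.Before(deadline) {
		fmt.Println("Certificate will expire") // same wording openssl uses on failure
	} else {
		fmt.Println("Certificate will not expire")
	}
}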
	I0717 01:20:31.470512   42204 kubeadm.go:392] StartCluster: {Name:multinode-025900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:multinode-025900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.68 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:20:31.470639   42204 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:20:31.470689   42204 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:20:31.505756   42204 command_runner.go:130] > 09725ffcca26633bf33d602cbf5f624ea598f95ce65d67180f6b62ccb6d063b8
	I0717 01:20:31.505789   42204 command_runner.go:130] > 3ebdfeb8ae5c7b85a35e497aa249c67f9cc5dc9efaf0e5f5826e727e739cd18d
	I0717 01:20:31.505809   42204 command_runner.go:130] > fee30035a5397ff83bf2c0b6a33399f0e7da88a1229f0a8fe067dbfb6a07b779
	I0717 01:20:31.505816   42204 command_runner.go:130] > 02159611beb77b72f88f037018edb8c4dcba98bc873042b2348f49a974b4253e
	I0717 01:20:31.505821   42204 command_runner.go:130] > f7539247491f8361cb2a800e0adcc8d311eed5f77dc65eddeff7ffc89beffb31
	I0717 01:20:31.505826   42204 command_runner.go:130] > 1dac5d8c8d8c1661aaa39b63c3c2d92f556e2d0280a8cf364f7a3beff50a95e2
	I0717 01:20:31.505831   42204 command_runner.go:130] > 1925767a2697d3fb947baaba421e367c09374b379e84ddb0b39bda48f06a2a71
	I0717 01:20:31.505837   42204 command_runner.go:130] > d35d12e08dd5f99d8307485dda69263c87c063b3f3f6879c55b60bc5db183994
	I0717 01:20:31.507116   42204 cri.go:89] found id: "09725ffcca26633bf33d602cbf5f624ea598f95ce65d67180f6b62ccb6d063b8"
	I0717 01:20:31.507129   42204 cri.go:89] found id: "3ebdfeb8ae5c7b85a35e497aa249c67f9cc5dc9efaf0e5f5826e727e739cd18d"
	I0717 01:20:31.507133   42204 cri.go:89] found id: "fee30035a5397ff83bf2c0b6a33399f0e7da88a1229f0a8fe067dbfb6a07b779"
	I0717 01:20:31.507136   42204 cri.go:89] found id: "02159611beb77b72f88f037018edb8c4dcba98bc873042b2348f49a974b4253e"
	I0717 01:20:31.507138   42204 cri.go:89] found id: "f7539247491f8361cb2a800e0adcc8d311eed5f77dc65eddeff7ffc89beffb31"
	I0717 01:20:31.507141   42204 cri.go:89] found id: "1dac5d8c8d8c1661aaa39b63c3c2d92f556e2d0280a8cf364f7a3beff50a95e2"
	I0717 01:20:31.507143   42204 cri.go:89] found id: "1925767a2697d3fb947baaba421e367c09374b379e84ddb0b39bda48f06a2a71"
	I0717 01:20:31.507146   42204 cri.go:89] found id: "d35d12e08dd5f99d8307485dda69263c87c063b3f3f6879c55b60bc5db183994"
	I0717 01:20:31.507148   42204 cri.go:89] found id: ""
	I0717 01:20:31.507200   42204 ssh_runner.go:195] Run: sudo runc list -f json
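`runc list -f json` emits a JSON array describing the containers runc knows about. A small Go sketch that runs the command and prints a few fields is below; the field names (id, status, bundle) match runc's state output in current releases but should be treated as assumptions if your version differs, and the command needs root to see CRI-O's containers.

// runc_list.go - sketch: decode `runc list -f json` and print id/status/bundle.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type container struct {
	ID     string `json:"id"`
	Status string `json:"status"`
	Bundle string `json:"bundle"`
}

func main() {
	out, err := exec.Command("runc", "list", "-f", "json").Output()
	if err != nil {
		log.Fatalf("runc list: %v", err)
	}
	var containers []container
	if err := json.Unmarshal(out, &containers); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for _, c := range containers {
		fmt.Printf("%s  %-8s %s\n", c.ID, c.Status, c.Bundle)
	}
}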
	
	
	==> CRI-O <==
	Jul 17 01:24:42 multinode-025900 crio[2861]: time="2024-07-17 01:24:42.441993978Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d66bcd5e-86a3-4c94-b0bd-ca4070184dcb name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:24:42 multinode-025900 crio[2861]: time="2024-07-17 01:24:42.442460229Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4b91a7f92bd2fab7abc5dd20ef35cc488727485a19ef84870f125c6900538b1,PodSandboxId:8f4b5614ab6e635cea43b197b327cbb9231cb3d101d191e13116bbc0ee1d1114,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721179271593826847,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mn98f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e227e80-de5e-4cc4-9c10-c4072dfb0ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 342c1059,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:452c33ce626336d2cbcd0200ceeb7fb464edcbbe4d98a471a5ada1e15c5dbf7e,PodSandboxId:6331990a218a184e85a8322811b5e7d10082cecb13938a7db44012a4914969a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_RUNNING,CreatedAt:1721179238016431494,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-97pxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf14e761-3074-4396-9730-f5dd63d79c1c,},Annotations:map[string]string{io.kubernetes.container.hash: 787a85a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1178a3606a57c649c0f34b3a539ae72cfdaced76f1baca5b6b08b8493427335c,PodSandboxId:6fad3987071e446c4850c541a9edbf67d3b7b00008e4a2393bc38a3bcf229748,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721179237928657323,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4xjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03f7291c-a0c7-42ce-b786-dc71e57b7792,},Annotations:map[string]string{io.kubernetes.container.hash: b78da48,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f7408f53ed54301d1ac909f70778685b3042b8f99f9ca891315130401239235,PodSandboxId:e4eff30722fe1c57b83b34507c6725ffd1c2065be392ce3e7e38d526ca3640ae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721179237851863073,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4qbwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0993395b-fc50-4564-b36e-83cc2a2113cf,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4d3642c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0b97863bc065363089a16f8ab329bc942107d351093a8a1cd48a377dbf1cf32,PodSandboxId:afd20e90b3502dd39dff3b56b6c4978650b72b92ab61a833b4806c4ccf616c4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721179237857532686,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df859607-80ac-43ae-a91c-d10ef995b6dc,},Annotations:map[string]string{io.kub
ernetes.container.hash: 27c35851,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a775f970094737d7af3e03852693d060924c7798b3121a5be1210cb187117137,PodSandboxId:12d81e4cf6ee75f7c3f501f74a6aeb46b9b4d78e3c598c79ac6a86f994817792,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721179234022727843,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f23774afdb04c8dd10644dbbae2e078b,},Annotations:map[string]string{io.kubernetes.container.hash: 49104185,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cc3d795f24816fab224839ea0e4f603d36a19e8f3b7b4a6bff27f7f2e32ee5,PodSandboxId:c35c8141a83adc5ba5ec15c3bc351ca26767958334210cbe036ad5bcae2e16f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721179234005390458,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb3d56003a29e1eb05a5107768912f8e,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a60bf9ba16ac61e0757355fbfca3e6bf4087417422c5e90eba9a6093060e0be0,PodSandboxId:7b21db87518f2d2c0acf5704263823bfc094b15d3020377262433e10bffe4642,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721179233972811787,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a800b9f41be042b50f12b352cf5787b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2af9c12a,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fefc78079ae534d983463192be87a831caf618e85aaa27deed9ecb982be3216,PodSandboxId:7b3b2d201398feb1445a2b65456e9754b2e66a4a4d0f666c7f08033312985f8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721179233945542834,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c9c7b99a7270f653fe2931cf5abd6c3,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:502dbe19e4152058e60707c6dfc875ca3187f80daa847ab208774f39d5df7037,PodSandboxId:148c1844263fa4291996fd06546ed76a4dc8bcd305ddec01f7a1e0ec8e8ef0d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721178909765522325,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mn98f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e227e80-de5e-4cc4-9c10-c4072dfb0ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 342c1059,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09725ffcca26633bf33d602cbf5f624ea598f95ce65d67180f6b62ccb6d063b8,PodSandboxId:08cf884b4c94ddd4e0ecf91bc231859f112239948e12c9c2c53f1a8afb0641dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721178850624937144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4xjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03f7291c-a0c7-42ce-b786-dc71e57b7792,},Annotations:map[string]string{io.kubernetes.container.hash: b78da48,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerP
ort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ebdfeb8ae5c7b85a35e497aa249c67f9cc5dc9efaf0e5f5826e727e739cd18d,PodSandboxId:45e8a55750e51f0a82a8688a98011d3bdad1a577d85b20f77c08255c77bb3080,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721178850558973294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: df859607-80ac-43ae-a91c-d10ef995b6dc,},Annotations:map[string]string{io.kubernetes.container.hash: 27c35851,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fee30035a5397ff83bf2c0b6a33399f0e7da88a1229f0a8fe067dbfb6a07b779,PodSandboxId:f4485605cb6b057913de77d2598ad7c1132d494b61d7be36429abb860f022e7e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_EXITED,CreatedAt:1721178838531620319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-97pxj,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: cf14e761-3074-4396-9730-f5dd63d79c1c,},Annotations:map[string]string{io.kubernetes.container.hash: 787a85a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02159611beb77b72f88f037018edb8c4dcba98bc873042b2348f49a974b4253e,PodSandboxId:609ddc457d2600a8a307931c800e315694698abe9654ba6fa096f7b1a59117f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721178833677729246,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4qbwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 0993395b-fc50-4564-b36e-83cc2a2113cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4d3642c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dac5d8c8d8c1661aaa39b63c3c2d92f556e2d0280a8cf364f7a3beff50a95e2,PodSandboxId:26b1f925c432f0dcfbb5bac0724a8d35261cfb567f21218c298fa532f93e4170,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721178814665534609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025900,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 0c9c7b99a7270f653fe2931cf5abd6c3,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7539247491f8361cb2a800e0adcc8d311eed5f77dc65eddeff7ffc89beffb31,PodSandboxId:80a245704d7e983897bf4d03765b9bec645a9821e01e6b46e4a4ab394b8c93d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721178814668736209,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: a800b9f41be042b50f12b352cf5787b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2af9c12a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d35d12e08dd5f99d8307485dda69263c87c063b3f3f6879c55b60bc5db183994,PodSandboxId:be47f38f9c9b81c7f8f1d028facde10189ab6f32400892697aba65b0c0dba416,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721178814660148682,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb3d56
003a29e1eb05a5107768912f8e,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1925767a2697d3fb947baaba421e367c09374b379e84ddb0b39bda48f06a2a71,PodSandboxId:36406e5752bc67f659c92e9244c3dc19aa05828ba343dba359963fd666e8bcc3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721178814663751962,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f23774afdb04c8dd10644dbbae2e078b,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 49104185,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d66bcd5e-86a3-4c94-b0bd-ca4070184dcb name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:24:42 multinode-025900 crio[2861]: time="2024-07-17 01:24:42.484104292Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a19320b0-d3e1-4260-a405-18b2f109842d name=/runtime.v1.RuntimeService/Version
	Jul 17 01:24:42 multinode-025900 crio[2861]: time="2024-07-17 01:24:42.484199894Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a19320b0-d3e1-4260-a405-18b2f109842d name=/runtime.v1.RuntimeService/Version
	Jul 17 01:24:42 multinode-025900 crio[2861]: time="2024-07-17 01:24:42.485357128Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6cae8e1a-a133-47c1-adf8-15f6bfbe8565 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:24:42 multinode-025900 crio[2861]: time="2024-07-17 01:24:42.486187264Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721179482486161754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6cae8e1a-a133-47c1-adf8-15f6bfbe8565 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:24:42 multinode-025900 crio[2861]: time="2024-07-17 01:24:42.486648564Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b95d8e3c-6dd6-4bf5-9fe7-baa71978ba4c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:24:42 multinode-025900 crio[2861]: time="2024-07-17 01:24:42.486700229Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b95d8e3c-6dd6-4bf5-9fe7-baa71978ba4c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:24:42 multinode-025900 crio[2861]: time="2024-07-17 01:24:42.487261777Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4b91a7f92bd2fab7abc5dd20ef35cc488727485a19ef84870f125c6900538b1,PodSandboxId:8f4b5614ab6e635cea43b197b327cbb9231cb3d101d191e13116bbc0ee1d1114,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721179271593826847,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mn98f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e227e80-de5e-4cc4-9c10-c4072dfb0ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 342c1059,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:452c33ce626336d2cbcd0200ceeb7fb464edcbbe4d98a471a5ada1e15c5dbf7e,PodSandboxId:6331990a218a184e85a8322811b5e7d10082cecb13938a7db44012a4914969a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_RUNNING,CreatedAt:1721179238016431494,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-97pxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf14e761-3074-4396-9730-f5dd63d79c1c,},Annotations:map[string]string{io.kubernetes.container.hash: 787a85a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1178a3606a57c649c0f34b3a539ae72cfdaced76f1baca5b6b08b8493427335c,PodSandboxId:6fad3987071e446c4850c541a9edbf67d3b7b00008e4a2393bc38a3bcf229748,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721179237928657323,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4xjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03f7291c-a0c7-42ce-b786-dc71e57b7792,},Annotations:map[string]string{io.kubernetes.container.hash: b78da48,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f7408f53ed54301d1ac909f70778685b3042b8f99f9ca891315130401239235,PodSandboxId:e4eff30722fe1c57b83b34507c6725ffd1c2065be392ce3e7e38d526ca3640ae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721179237851863073,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4qbwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0993395b-fc50-4564-b36e-83cc2a2113cf,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4d3642c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0b97863bc065363089a16f8ab329bc942107d351093a8a1cd48a377dbf1cf32,PodSandboxId:afd20e90b3502dd39dff3b56b6c4978650b72b92ab61a833b4806c4ccf616c4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721179237857532686,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df859607-80ac-43ae-a91c-d10ef995b6dc,},Annotations:map[string]string{io.kub
ernetes.container.hash: 27c35851,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a775f970094737d7af3e03852693d060924c7798b3121a5be1210cb187117137,PodSandboxId:12d81e4cf6ee75f7c3f501f74a6aeb46b9b4d78e3c598c79ac6a86f994817792,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721179234022727843,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f23774afdb04c8dd10644dbbae2e078b,},Annotations:map[string]string{io.kubernetes.container.hash: 49104185,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cc3d795f24816fab224839ea0e4f603d36a19e8f3b7b4a6bff27f7f2e32ee5,PodSandboxId:c35c8141a83adc5ba5ec15c3bc351ca26767958334210cbe036ad5bcae2e16f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721179234005390458,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb3d56003a29e1eb05a5107768912f8e,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a60bf9ba16ac61e0757355fbfca3e6bf4087417422c5e90eba9a6093060e0be0,PodSandboxId:7b21db87518f2d2c0acf5704263823bfc094b15d3020377262433e10bffe4642,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721179233972811787,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a800b9f41be042b50f12b352cf5787b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2af9c12a,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fefc78079ae534d983463192be87a831caf618e85aaa27deed9ecb982be3216,PodSandboxId:7b3b2d201398feb1445a2b65456e9754b2e66a4a4d0f666c7f08033312985f8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721179233945542834,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c9c7b99a7270f653fe2931cf5abd6c3,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:502dbe19e4152058e60707c6dfc875ca3187f80daa847ab208774f39d5df7037,PodSandboxId:148c1844263fa4291996fd06546ed76a4dc8bcd305ddec01f7a1e0ec8e8ef0d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721178909765522325,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mn98f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e227e80-de5e-4cc4-9c10-c4072dfb0ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 342c1059,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09725ffcca26633bf33d602cbf5f624ea598f95ce65d67180f6b62ccb6d063b8,PodSandboxId:08cf884b4c94ddd4e0ecf91bc231859f112239948e12c9c2c53f1a8afb0641dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721178850624937144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4xjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03f7291c-a0c7-42ce-b786-dc71e57b7792,},Annotations:map[string]string{io.kubernetes.container.hash: b78da48,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerP
ort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ebdfeb8ae5c7b85a35e497aa249c67f9cc5dc9efaf0e5f5826e727e739cd18d,PodSandboxId:45e8a55750e51f0a82a8688a98011d3bdad1a577d85b20f77c08255c77bb3080,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721178850558973294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: df859607-80ac-43ae-a91c-d10ef995b6dc,},Annotations:map[string]string{io.kubernetes.container.hash: 27c35851,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fee30035a5397ff83bf2c0b6a33399f0e7da88a1229f0a8fe067dbfb6a07b779,PodSandboxId:f4485605cb6b057913de77d2598ad7c1132d494b61d7be36429abb860f022e7e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_EXITED,CreatedAt:1721178838531620319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-97pxj,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: cf14e761-3074-4396-9730-f5dd63d79c1c,},Annotations:map[string]string{io.kubernetes.container.hash: 787a85a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02159611beb77b72f88f037018edb8c4dcba98bc873042b2348f49a974b4253e,PodSandboxId:609ddc457d2600a8a307931c800e315694698abe9654ba6fa096f7b1a59117f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721178833677729246,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4qbwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 0993395b-fc50-4564-b36e-83cc2a2113cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4d3642c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dac5d8c8d8c1661aaa39b63c3c2d92f556e2d0280a8cf364f7a3beff50a95e2,PodSandboxId:26b1f925c432f0dcfbb5bac0724a8d35261cfb567f21218c298fa532f93e4170,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721178814665534609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025900,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 0c9c7b99a7270f653fe2931cf5abd6c3,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7539247491f8361cb2a800e0adcc8d311eed5f77dc65eddeff7ffc89beffb31,PodSandboxId:80a245704d7e983897bf4d03765b9bec645a9821e01e6b46e4a4ab394b8c93d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721178814668736209,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: a800b9f41be042b50f12b352cf5787b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2af9c12a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d35d12e08dd5f99d8307485dda69263c87c063b3f3f6879c55b60bc5db183994,PodSandboxId:be47f38f9c9b81c7f8f1d028facde10189ab6f32400892697aba65b0c0dba416,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721178814660148682,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb3d56
003a29e1eb05a5107768912f8e,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1925767a2697d3fb947baaba421e367c09374b379e84ddb0b39bda48f06a2a71,PodSandboxId:36406e5752bc67f659c92e9244c3dc19aa05828ba343dba359963fd666e8bcc3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721178814663751962,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f23774afdb04c8dd10644dbbae2e078b,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 49104185,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b95d8e3c-6dd6-4bf5-9fe7-baa71978ba4c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:24:42 multinode-025900 crio[2861]: time="2024-07-17 01:24:42.530259835Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=41ee7872-e62d-4b88-927a-cdb3b0124ef3 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:24:42 multinode-025900 crio[2861]: time="2024-07-17 01:24:42.530531221Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=41ee7872-e62d-4b88-927a-cdb3b0124ef3 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:24:42 multinode-025900 crio[2861]: time="2024-07-17 01:24:42.532663329Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74b4acc2-d8ce-4968-b436-fc4d9ba83efc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:24:42 multinode-025900 crio[2861]: time="2024-07-17 01:24:42.533192771Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721179482533160504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74b4acc2-d8ce-4968-b436-fc4d9ba83efc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:24:42 multinode-025900 crio[2861]: time="2024-07-17 01:24:42.533724297Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a11ca51-c15c-4273-816a-13c50a8c519f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:24:42 multinode-025900 crio[2861]: time="2024-07-17 01:24:42.533807431Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a11ca51-c15c-4273-816a-13c50a8c519f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:24:42 multinode-025900 crio[2861]: time="2024-07-17 01:24:42.534228802Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4b91a7f92bd2fab7abc5dd20ef35cc488727485a19ef84870f125c6900538b1,PodSandboxId:8f4b5614ab6e635cea43b197b327cbb9231cb3d101d191e13116bbc0ee1d1114,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721179271593826847,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mn98f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e227e80-de5e-4cc4-9c10-c4072dfb0ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 342c1059,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:452c33ce626336d2cbcd0200ceeb7fb464edcbbe4d98a471a5ada1e15c5dbf7e,PodSandboxId:6331990a218a184e85a8322811b5e7d10082cecb13938a7db44012a4914969a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_RUNNING,CreatedAt:1721179238016431494,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-97pxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf14e761-3074-4396-9730-f5dd63d79c1c,},Annotations:map[string]string{io.kubernetes.container.hash: 787a85a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1178a3606a57c649c0f34b3a539ae72cfdaced76f1baca5b6b08b8493427335c,PodSandboxId:6fad3987071e446c4850c541a9edbf67d3b7b00008e4a2393bc38a3bcf229748,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721179237928657323,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4xjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03f7291c-a0c7-42ce-b786-dc71e57b7792,},Annotations:map[string]string{io.kubernetes.container.hash: b78da48,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f7408f53ed54301d1ac909f70778685b3042b8f99f9ca891315130401239235,PodSandboxId:e4eff30722fe1c57b83b34507c6725ffd1c2065be392ce3e7e38d526ca3640ae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721179237851863073,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4qbwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0993395b-fc50-4564-b36e-83cc2a2113cf,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4d3642c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0b97863bc065363089a16f8ab329bc942107d351093a8a1cd48a377dbf1cf32,PodSandboxId:afd20e90b3502dd39dff3b56b6c4978650b72b92ab61a833b4806c4ccf616c4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721179237857532686,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df859607-80ac-43ae-a91c-d10ef995b6dc,},Annotations:map[string]string{io.kub
ernetes.container.hash: 27c35851,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a775f970094737d7af3e03852693d060924c7798b3121a5be1210cb187117137,PodSandboxId:12d81e4cf6ee75f7c3f501f74a6aeb46b9b4d78e3c598c79ac6a86f994817792,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721179234022727843,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f23774afdb04c8dd10644dbbae2e078b,},Annotations:map[string]string{io.kubernetes.container.hash: 49104185,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cc3d795f24816fab224839ea0e4f603d36a19e8f3b7b4a6bff27f7f2e32ee5,PodSandboxId:c35c8141a83adc5ba5ec15c3bc351ca26767958334210cbe036ad5bcae2e16f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721179234005390458,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb3d56003a29e1eb05a5107768912f8e,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a60bf9ba16ac61e0757355fbfca3e6bf4087417422c5e90eba9a6093060e0be0,PodSandboxId:7b21db87518f2d2c0acf5704263823bfc094b15d3020377262433e10bffe4642,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721179233972811787,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a800b9f41be042b50f12b352cf5787b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2af9c12a,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fefc78079ae534d983463192be87a831caf618e85aaa27deed9ecb982be3216,PodSandboxId:7b3b2d201398feb1445a2b65456e9754b2e66a4a4d0f666c7f08033312985f8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721179233945542834,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c9c7b99a7270f653fe2931cf5abd6c3,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:502dbe19e4152058e60707c6dfc875ca3187f80daa847ab208774f39d5df7037,PodSandboxId:148c1844263fa4291996fd06546ed76a4dc8bcd305ddec01f7a1e0ec8e8ef0d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721178909765522325,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mn98f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e227e80-de5e-4cc4-9c10-c4072dfb0ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 342c1059,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09725ffcca26633bf33d602cbf5f624ea598f95ce65d67180f6b62ccb6d063b8,PodSandboxId:08cf884b4c94ddd4e0ecf91bc231859f112239948e12c9c2c53f1a8afb0641dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721178850624937144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4xjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03f7291c-a0c7-42ce-b786-dc71e57b7792,},Annotations:map[string]string{io.kubernetes.container.hash: b78da48,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerP
ort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ebdfeb8ae5c7b85a35e497aa249c67f9cc5dc9efaf0e5f5826e727e739cd18d,PodSandboxId:45e8a55750e51f0a82a8688a98011d3bdad1a577d85b20f77c08255c77bb3080,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721178850558973294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: df859607-80ac-43ae-a91c-d10ef995b6dc,},Annotations:map[string]string{io.kubernetes.container.hash: 27c35851,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fee30035a5397ff83bf2c0b6a33399f0e7da88a1229f0a8fe067dbfb6a07b779,PodSandboxId:f4485605cb6b057913de77d2598ad7c1132d494b61d7be36429abb860f022e7e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_EXITED,CreatedAt:1721178838531620319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-97pxj,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: cf14e761-3074-4396-9730-f5dd63d79c1c,},Annotations:map[string]string{io.kubernetes.container.hash: 787a85a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02159611beb77b72f88f037018edb8c4dcba98bc873042b2348f49a974b4253e,PodSandboxId:609ddc457d2600a8a307931c800e315694698abe9654ba6fa096f7b1a59117f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721178833677729246,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4qbwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 0993395b-fc50-4564-b36e-83cc2a2113cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4d3642c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dac5d8c8d8c1661aaa39b63c3c2d92f556e2d0280a8cf364f7a3beff50a95e2,PodSandboxId:26b1f925c432f0dcfbb5bac0724a8d35261cfb567f21218c298fa532f93e4170,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721178814665534609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025900,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 0c9c7b99a7270f653fe2931cf5abd6c3,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7539247491f8361cb2a800e0adcc8d311eed5f77dc65eddeff7ffc89beffb31,PodSandboxId:80a245704d7e983897bf4d03765b9bec645a9821e01e6b46e4a4ab394b8c93d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721178814668736209,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: a800b9f41be042b50f12b352cf5787b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2af9c12a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d35d12e08dd5f99d8307485dda69263c87c063b3f3f6879c55b60bc5db183994,PodSandboxId:be47f38f9c9b81c7f8f1d028facde10189ab6f32400892697aba65b0c0dba416,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721178814660148682,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb3d56
003a29e1eb05a5107768912f8e,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1925767a2697d3fb947baaba421e367c09374b379e84ddb0b39bda48f06a2a71,PodSandboxId:36406e5752bc67f659c92e9244c3dc19aa05828ba343dba359963fd666e8bcc3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721178814663751962,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f23774afdb04c8dd10644dbbae2e078b,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 49104185,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6a11ca51-c15c-4273-816a-13c50a8c519f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:24:42 multinode-025900 crio[2861]: time="2024-07-17 01:24:42.577296155Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=66608497-9c97-4234-8488-304df9960e67 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:24:42 multinode-025900 crio[2861]: time="2024-07-17 01:24:42.577546076Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=66608497-9c97-4234-8488-304df9960e67 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:24:42 multinode-025900 crio[2861]: time="2024-07-17 01:24:42.579052584Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=57181aa2-25d5-47e9-8cc8-d9bfb3ac6a03 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:24:42 multinode-025900 crio[2861]: time="2024-07-17 01:24:42.579517328Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721179482579493542,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=57181aa2-25d5-47e9-8cc8-d9bfb3ac6a03 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:24:42 multinode-025900 crio[2861]: time="2024-07-17 01:24:42.580103925Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8aa6904-c789-4c91-8c71-d9b0540b7624 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:24:42 multinode-025900 crio[2861]: time="2024-07-17 01:24:42.580174438Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8aa6904-c789-4c91-8c71-d9b0540b7624 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:24:42 multinode-025900 crio[2861]: time="2024-07-17 01:24:42.580510499Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d4b91a7f92bd2fab7abc5dd20ef35cc488727485a19ef84870f125c6900538b1,PodSandboxId:8f4b5614ab6e635cea43b197b327cbb9231cb3d101d191e13116bbc0ee1d1114,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721179271593826847,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mn98f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e227e80-de5e-4cc4-9c10-c4072dfb0ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 342c1059,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:452c33ce626336d2cbcd0200ceeb7fb464edcbbe4d98a471a5ada1e15c5dbf7e,PodSandboxId:6331990a218a184e85a8322811b5e7d10082cecb13938a7db44012a4914969a5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_RUNNING,CreatedAt:1721179238016431494,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-97pxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf14e761-3074-4396-9730-f5dd63d79c1c,},Annotations:map[string]string{io.kubernetes.container.hash: 787a85a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1178a3606a57c649c0f34b3a539ae72cfdaced76f1baca5b6b08b8493427335c,PodSandboxId:6fad3987071e446c4850c541a9edbf67d3b7b00008e4a2393bc38a3bcf229748,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721179237928657323,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4xjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03f7291c-a0c7-42ce-b786-dc71e57b7792,},Annotations:map[string]string{io.kubernetes.container.hash: b78da48,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f7408f53ed54301d1ac909f70778685b3042b8f99f9ca891315130401239235,PodSandboxId:e4eff30722fe1c57b83b34507c6725ffd1c2065be392ce3e7e38d526ca3640ae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721179237851863073,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4qbwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0993395b-fc50-4564-b36e-83cc2a2113cf,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4d3642c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0b97863bc065363089a16f8ab329bc942107d351093a8a1cd48a377dbf1cf32,PodSandboxId:afd20e90b3502dd39dff3b56b6c4978650b72b92ab61a833b4806c4ccf616c4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721179237857532686,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df859607-80ac-43ae-a91c-d10ef995b6dc,},Annotations:map[string]string{io.kub
ernetes.container.hash: 27c35851,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a775f970094737d7af3e03852693d060924c7798b3121a5be1210cb187117137,PodSandboxId:12d81e4cf6ee75f7c3f501f74a6aeb46b9b4d78e3c598c79ac6a86f994817792,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721179234022727843,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f23774afdb04c8dd10644dbbae2e078b,},Annotations:map[string]string{io.kubernetes.container.hash: 49104185,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9cc3d795f24816fab224839ea0e4f603d36a19e8f3b7b4a6bff27f7f2e32ee5,PodSandboxId:c35c8141a83adc5ba5ec15c3bc351ca26767958334210cbe036ad5bcae2e16f3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721179234005390458,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb3d56003a29e1eb05a5107768912f8e,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a60bf9ba16ac61e0757355fbfca3e6bf4087417422c5e90eba9a6093060e0be0,PodSandboxId:7b21db87518f2d2c0acf5704263823bfc094b15d3020377262433e10bffe4642,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721179233972811787,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a800b9f41be042b50f12b352cf5787b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2af9c12a,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fefc78079ae534d983463192be87a831caf618e85aaa27deed9ecb982be3216,PodSandboxId:7b3b2d201398feb1445a2b65456e9754b2e66a4a4d0f666c7f08033312985f8a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721179233945542834,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c9c7b99a7270f653fe2931cf5abd6c3,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:502dbe19e4152058e60707c6dfc875ca3187f80daa847ab208774f39d5df7037,PodSandboxId:148c1844263fa4291996fd06546ed76a4dc8bcd305ddec01f7a1e0ec8e8ef0d0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721178909765522325,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mn98f,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e227e80-de5e-4cc4-9c10-c4072dfb0ca6,},Annotations:map[string]string{io.kubernetes.container.hash: 342c1059,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09725ffcca26633bf33d602cbf5f624ea598f95ce65d67180f6b62ccb6d063b8,PodSandboxId:08cf884b4c94ddd4e0ecf91bc231859f112239948e12c9c2c53f1a8afb0641dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721178850624937144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4xjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03f7291c-a0c7-42ce-b786-dc71e57b7792,},Annotations:map[string]string{io.kubernetes.container.hash: b78da48,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerP
ort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ebdfeb8ae5c7b85a35e497aa249c67f9cc5dc9efaf0e5f5826e727e739cd18d,PodSandboxId:45e8a55750e51f0a82a8688a98011d3bdad1a577d85b20f77c08255c77bb3080,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721178850558973294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: df859607-80ac-43ae-a91c-d10ef995b6dc,},Annotations:map[string]string{io.kubernetes.container.hash: 27c35851,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fee30035a5397ff83bf2c0b6a33399f0e7da88a1229f0a8fe067dbfb6a07b779,PodSandboxId:f4485605cb6b057913de77d2598ad7c1132d494b61d7be36429abb860f022e7e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda,State:CONTAINER_EXITED,CreatedAt:1721178838531620319,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-97pxj,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: cf14e761-3074-4396-9730-f5dd63d79c1c,},Annotations:map[string]string{io.kubernetes.container.hash: 787a85a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02159611beb77b72f88f037018edb8c4dcba98bc873042b2348f49a974b4253e,PodSandboxId:609ddc457d2600a8a307931c800e315694698abe9654ba6fa096f7b1a59117f1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721178833677729246,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4qbwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 0993395b-fc50-4564-b36e-83cc2a2113cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4d3642c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dac5d8c8d8c1661aaa39b63c3c2d92f556e2d0280a8cf364f7a3beff50a95e2,PodSandboxId:26b1f925c432f0dcfbb5bac0724a8d35261cfb567f21218c298fa532f93e4170,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721178814665534609,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-025900,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: 0c9c7b99a7270f653fe2931cf5abd6c3,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7539247491f8361cb2a800e0adcc8d311eed5f77dc65eddeff7ffc89beffb31,PodSandboxId:80a245704d7e983897bf4d03765b9bec645a9821e01e6b46e4a4ab394b8c93d5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721178814668736209,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: a800b9f41be042b50f12b352cf5787b6,},Annotations:map[string]string{io.kubernetes.container.hash: 2af9c12a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d35d12e08dd5f99d8307485dda69263c87c063b3f3f6879c55b60bc5db183994,PodSandboxId:be47f38f9c9b81c7f8f1d028facde10189ab6f32400892697aba65b0c0dba416,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721178814660148682,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb3d56
003a29e1eb05a5107768912f8e,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1925767a2697d3fb947baaba421e367c09374b379e84ddb0b39bda48f06a2a71,PodSandboxId:36406e5752bc67f659c92e9244c3dc19aa05828ba343dba359963fd666e8bcc3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721178814663751962,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-025900,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f23774afdb04c8dd10644dbbae2e078b,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 49104185,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c8aa6904-c789-4c91-8c71-d9b0540b7624 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:24:42 multinode-025900 crio[2861]: time="2024-07-17 01:24:42.605472445Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=92df9edf-d92d-4e1a-9418-872d14b79bf6 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:24:42 multinode-025900 crio[2861]: time="2024-07-17 01:24:42.605576706Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=92df9edf-d92d-4e1a-9418-872d14b79bf6 name=/runtime.v1.RuntimeService/Version
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d4b91a7f92bd2       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   8f4b5614ab6e6       busybox-fc5497c4f-mn98f
	452c33ce62633       a6add1e2d192f15a6223a6eb87ff0253f7eb36450c12b52340f4eb63fad7ceda                                      4 minutes ago       Running             kindnet-cni               1                   6331990a218a1       kindnet-97pxj
	1178a3606a57c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   6fad3987071e4       coredns-7db6d8ff4d-g4xjh
	c0b97863bc065       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   afd20e90b3502       storage-provisioner
	6f7408f53ed54       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      4 minutes ago       Running             kube-proxy                1                   e4eff30722fe1       kube-proxy-4qbwm
	a775f97009473       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   12d81e4cf6ee7       etcd-multinode-025900
	c9cc3d795f248       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      4 minutes ago       Running             kube-scheduler            1                   c35c8141a83ad       kube-scheduler-multinode-025900
	a60bf9ba16ac6       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      4 minutes ago       Running             kube-apiserver            1                   7b21db87518f2       kube-apiserver-multinode-025900
	8fefc78079ae5       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      4 minutes ago       Running             kube-controller-manager   1                   7b3b2d201398f       kube-controller-manager-multinode-025900
	502dbe19e4152       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   148c1844263fa       busybox-fc5497c4f-mn98f
	09725ffcca266       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   08cf884b4c94d       coredns-7db6d8ff4d-g4xjh
	3ebdfeb8ae5c7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   45e8a55750e51       storage-provisioner
	fee30035a5397       docker.io/kindest/kindnetd@sha256:a9acb91d619f824eef2ba4c546bfd3b11ce22ef74ab3749e7264799fa588b381    10 minutes ago      Exited              kindnet-cni               0                   f4485605cb6b0       kindnet-97pxj
	02159611beb77       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      10 minutes ago      Exited              kube-proxy                0                   609ddc457d260       kube-proxy-4qbwm
	f7539247491f8       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      11 minutes ago      Exited              kube-apiserver            0                   80a245704d7e9       kube-apiserver-multinode-025900
	1dac5d8c8d8c1       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      11 minutes ago      Exited              kube-controller-manager   0                   26b1f925c432f       kube-controller-manager-multinode-025900
	1925767a2697d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      11 minutes ago      Exited              etcd                      0                   36406e5752bc6       etcd-multinode-025900
	d35d12e08dd5f       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      11 minutes ago      Exited              kube-scheduler            0                   be47f38f9c9b8       kube-scheduler-multinode-025900
	
	
	==> coredns [09725ffcca26633bf33d602cbf5f624ea598f95ce65d67180f6b62ccb6d063b8] <==
	[INFO] 10.244.1.2:57488 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001825991s
	[INFO] 10.244.1.2:43947 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000212704s
	[INFO] 10.244.1.2:48553 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00006526s
	[INFO] 10.244.1.2:47808 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001206902s
	[INFO] 10.244.1.2:38784 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010314s
	[INFO] 10.244.1.2:49953 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101727s
	[INFO] 10.244.1.2:45251 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059416s
	[INFO] 10.244.0.3:51683 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000074265s
	[INFO] 10.244.0.3:42518 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084171s
	[INFO] 10.244.0.3:56977 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000032928s
	[INFO] 10.244.0.3:51607 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000033581s
	[INFO] 10.244.1.2:45479 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000180609s
	[INFO] 10.244.1.2:35269 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127074s
	[INFO] 10.244.1.2:46044 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116711s
	[INFO] 10.244.1.2:59620 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000081133s
	[INFO] 10.244.0.3:46330 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150543s
	[INFO] 10.244.0.3:34563 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132773s
	[INFO] 10.244.0.3:44980 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000125994s
	[INFO] 10.244.0.3:34736 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000142877s
	[INFO] 10.244.1.2:33226 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00018839s
	[INFO] 10.244.1.2:41501 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000082637s
	[INFO] 10.244.1.2:58921 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112066s
	[INFO] 10.244.1.2:47694 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000075927s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [1178a3606a57c649c0f34b3a539ae72cfdaced76f1baca5b6b08b8493427335c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:44723 - 5105 "HINFO IN 4128149899772464554.5178966456455149612. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010460164s
	
	
	==> describe nodes <==
	Name:               multinode-025900
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-025900
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=multinode-025900
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T01_13_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:13:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-025900
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:24:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:20:37 +0000   Wed, 17 Jul 2024 01:13:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:20:37 +0000   Wed, 17 Jul 2024 01:13:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:20:37 +0000   Wed, 17 Jul 2024 01:13:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:20:37 +0000   Wed, 17 Jul 2024 01:14:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.81
	  Hostname:    multinode-025900
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e3070bef5ea84fdf8ae62c5daaef29b1
	  System UUID:                e3070bef-5ea8-4fdf-8ae6-2c5daaef29b1
	  Boot ID:                    6fec25f7-991b-4a4b-ba54-36a13a7c7a24
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mn98f                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	  kube-system                 coredns-7db6d8ff4d-g4xjh                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-025900                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-97pxj                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-025900             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-multinode-025900    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-4qbwm                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-025900             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m4s                 kube-proxy       
	  Normal  Starting                 11m                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)    kubelet          Node multinode-025900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)    kubelet          Node multinode-025900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)    kubelet          Node multinode-025900 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    11m                  kubelet          Node multinode-025900 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  11m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                  kubelet          Node multinode-025900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     11m                  kubelet          Node multinode-025900 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-025900 event: Registered Node multinode-025900 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-025900 status is now: NodeReady
	  Normal  Starting                 4m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m9s (x8 over 4m9s)  kubelet          Node multinode-025900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x8 over 4m9s)  kubelet          Node multinode-025900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x7 over 4m9s)  kubelet          Node multinode-025900 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m52s                node-controller  Node multinode-025900 event: Registered Node multinode-025900 in Controller
	
	
	Name:               multinode-025900-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-025900-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=multinode-025900
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T01_21_17_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:21:17 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-025900-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:22:18 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Jul 2024 01:21:47 +0000   Wed, 17 Jul 2024 01:23:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Jul 2024 01:21:47 +0000   Wed, 17 Jul 2024 01:23:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Jul 2024 01:21:47 +0000   Wed, 17 Jul 2024 01:23:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Jul 2024 01:21:47 +0000   Wed, 17 Jul 2024 01:23:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.246
	  Hostname:    multinode-025900-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7ada8e85345f492e837685dd748ab793
	  System UUID:                7ada8e85-345f-492e-8376-85dd748ab793
	  Boot ID:                    0436973a-0d30-4ddd-b109-fbfc40815289
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-w4x47    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 kindnet-hj4p6              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-mhxlb           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m21s                  kube-proxy       
	  Normal  Starting                 9m55s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-025900-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-025900-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-025900-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m39s                  kubelet          Node multinode-025900-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m25s (x2 over 3m25s)  kubelet          Node multinode-025900-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m25s (x2 over 3m25s)  kubelet          Node multinode-025900-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m25s (x2 over 3m25s)  kubelet          Node multinode-025900-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m5s                   kubelet          Node multinode-025900-m02 status is now: NodeReady
	  Normal  NodeNotReady             102s                   node-controller  Node multinode-025900-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.059920] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068363] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.176641] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.144391] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.279478] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +4.097374] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +4.617116] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.069110] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.008456] systemd-fstab-generator[1285]: Ignoring "noauto" option for root device
	[  +0.088066] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.028616] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.124424] systemd-fstab-generator[1473]: Ignoring "noauto" option for root device
	[  +5.779757] kauditd_printk_skb: 51 callbacks suppressed
	[Jul17 01:15] kauditd_printk_skb: 14 callbacks suppressed
	[Jul17 01:20] systemd-fstab-generator[2781]: Ignoring "noauto" option for root device
	[  +0.149349] systemd-fstab-generator[2793]: Ignoring "noauto" option for root device
	[  +0.168334] systemd-fstab-generator[2807]: Ignoring "noauto" option for root device
	[  +0.147617] systemd-fstab-generator[2819]: Ignoring "noauto" option for root device
	[  +0.280321] systemd-fstab-generator[2847]: Ignoring "noauto" option for root device
	[  +3.528721] systemd-fstab-generator[2945]: Ignoring "noauto" option for root device
	[  +2.093347] systemd-fstab-generator[3069]: Ignoring "noauto" option for root device
	[  +0.083975] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.081982] kauditd_printk_skb: 87 callbacks suppressed
	[ +14.308454] systemd-fstab-generator[3894]: Ignoring "noauto" option for root device
	[Jul17 01:21] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [1925767a2697d3fb947baaba421e367c09374b379e84ddb0b39bda48f06a2a71] <==
	{"level":"info","ts":"2024-07-17T01:13:35.126307Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:13:35.12777Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.81:2379"}
	{"level":"info","ts":"2024-07-17T01:13:35.130681Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T01:13:35.143003Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T01:13:35.143118Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-07-17T01:14:41.889681Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.050743ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17368009634777542372 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:710790be407a2ee3>","response":"size:42"}
	{"level":"info","ts":"2024-07-17T01:14:41.889902Z","caller":"traceutil/trace.go:171","msg":"trace[1934674145] linearizableReadLoop","detail":"{readStateIndex:517; appliedIndex:515; }","duration":"177.500515ms","start":"2024-07-17T01:14:41.712383Z","end":"2024-07-17T01:14:41.889884Z","steps":["trace[1934674145] 'read index received'  (duration: 14.155138ms)","trace[1934674145] 'applied index is now lower than readState.Index'  (duration: 163.344814ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T01:14:41.890214Z","caller":"traceutil/trace.go:171","msg":"trace[469549090] transaction","detail":"{read_only:false; response_revision:495; number_of_response:1; }","duration":"177.885337ms","start":"2024-07-17T01:14:41.712319Z","end":"2024-07-17T01:14:41.890205Z","steps":["trace[469549090] 'process raft request'  (duration: 177.500159ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T01:14:41.890444Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.034491ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-025900-m02\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-07-17T01:14:41.890495Z","caller":"traceutil/trace.go:171","msg":"trace[432564414] range","detail":"{range_begin:/registry/minions/multinode-025900-m02; range_end:; response_count:1; response_revision:495; }","duration":"178.117195ms","start":"2024-07-17T01:14:41.712367Z","end":"2024-07-17T01:14:41.890484Z","steps":["trace[432564414] 'agreement among raft nodes before linearized reading'  (duration: 177.99274ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:15:39.173832Z","caller":"traceutil/trace.go:171","msg":"trace[1389313653] linearizableReadLoop","detail":"{readStateIndex:672; appliedIndex:671; }","duration":"214.88295ms","start":"2024-07-17T01:15:38.958905Z","end":"2024-07-17T01:15:39.173788Z","steps":["trace[1389313653] 'read index received'  (duration: 213.855902ms)","trace[1389313653] 'applied index is now lower than readState.Index'  (duration: 1.026355ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T01:15:39.17417Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.192276ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-025900-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T01:15:39.174253Z","caller":"traceutil/trace.go:171","msg":"trace[1103145786] range","detail":"{range_begin:/registry/minions/multinode-025900-m03; range_end:; response_count:0; response_revision:632; }","duration":"215.352856ms","start":"2024-07-17T01:15:38.95888Z","end":"2024-07-17T01:15:39.174233Z","steps":["trace[1103145786] 'agreement among raft nodes before linearized reading'  (duration: 215.159508ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:15:39.174493Z","caller":"traceutil/trace.go:171","msg":"trace[1831000673] transaction","detail":"{read_only:false; response_revision:632; number_of_response:1; }","duration":"239.131206ms","start":"2024-07-17T01:15:38.93535Z","end":"2024-07-17T01:15:39.174481Z","steps":["trace[1831000673] 'process raft request'  (duration: 237.40356ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:15:39.174873Z","caller":"traceutil/trace.go:171","msg":"trace[452073269] transaction","detail":"{read_only:false; response_revision:633; number_of_response:1; }","duration":"177.360496ms","start":"2024-07-17T01:15:38.997497Z","end":"2024-07-17T01:15:39.174858Z","steps":["trace[452073269] 'process raft request'  (duration: 176.653439ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T01:18:55.343157Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-17T01:18:55.343265Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-025900","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.81:2380"],"advertise-client-urls":["https://192.168.39.81:2379"]}
	{"level":"warn","ts":"2024-07-17T01:18:55.343356Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.81:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T01:18:55.343395Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.81:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T01:18:55.343468Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T01:18:55.343536Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-17T01:18:55.431648Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"81f5d9acb096f107","current-leader-member-id":"81f5d9acb096f107"}
	{"level":"info","ts":"2024-07-17T01:18:55.434166Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.81:2380"}
	{"level":"info","ts":"2024-07-17T01:18:55.434303Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.81:2380"}
	{"level":"info","ts":"2024-07-17T01:18:55.434333Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-025900","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.81:2380"],"advertise-client-urls":["https://192.168.39.81:2379"]}
	
	
	==> etcd [a775f970094737d7af3e03852693d060924c7798b3121a5be1210cb187117137] <==
	{"level":"info","ts":"2024-07-17T01:20:34.556462Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T01:20:34.556545Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T01:20:34.557137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 switched to configuration voters=(9364630335907098887)"}
	{"level":"info","ts":"2024-07-17T01:20:34.55728Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a77bf2d9a9fbb59e","local-member-id":"81f5d9acb096f107","added-peer-id":"81f5d9acb096f107","added-peer-peer-urls":["https://192.168.39.81:2380"]}
	{"level":"info","ts":"2024-07-17T01:20:34.557562Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a77bf2d9a9fbb59e","local-member-id":"81f5d9acb096f107","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:20:34.560039Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:20:34.587723Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T01:20:34.59118Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.81:2380"}
	{"level":"info","ts":"2024-07-17T01:20:34.591237Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.81:2380"}
	{"level":"info","ts":"2024-07-17T01:20:34.591416Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"81f5d9acb096f107","initial-advertise-peer-urls":["https://192.168.39.81:2380"],"listen-peer-urls":["https://192.168.39.81:2380"],"advertise-client-urls":["https://192.168.39.81:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.81:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T01:20:34.591458Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T01:20:35.865185Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T01:20:35.86529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T01:20:35.865331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 received MsgPreVoteResp from 81f5d9acb096f107 at term 2"}
	{"level":"info","ts":"2024-07-17T01:20:35.865361Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 became candidate at term 3"}
	{"level":"info","ts":"2024-07-17T01:20:35.865395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 received MsgVoteResp from 81f5d9acb096f107 at term 3"}
	{"level":"info","ts":"2024-07-17T01:20:35.865423Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 became leader at term 3"}
	{"level":"info","ts":"2024-07-17T01:20:35.865453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 81f5d9acb096f107 elected leader 81f5d9acb096f107 at term 3"}
	{"level":"info","ts":"2024-07-17T01:20:35.87059Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"81f5d9acb096f107","local-member-attributes":"{Name:multinode-025900 ClientURLs:[https://192.168.39.81:2379]}","request-path":"/0/members/81f5d9acb096f107/attributes","cluster-id":"a77bf2d9a9fbb59e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T01:20:35.870673Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:20:35.870746Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:20:35.871485Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T01:20:35.871524Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T01:20:35.873452Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.81:2379"}
	{"level":"info","ts":"2024-07-17T01:20:35.873457Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 01:24:43 up 11 min,  0 users,  load average: 0.21, 0.36, 0.21
	Linux multinode-025900 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [452c33ce626336d2cbcd0200ceeb7fb464edcbbe4d98a471a5ada1e15c5dbf7e] <==
	I0717 01:23:39.063895       1 main.go:326] Node multinode-025900-m02 has CIDR [10.244.1.0/24] 
	I0717 01:23:49.068559       1 main.go:299] Handling node with IPs: map[192.168.39.81:{}]
	I0717 01:23:49.068607       1 main.go:303] handling current node
	I0717 01:23:49.068623       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0717 01:23:49.068629       1 main.go:326] Node multinode-025900-m02 has CIDR [10.244.1.0/24] 
	I0717 01:23:59.069098       1 main.go:299] Handling node with IPs: map[192.168.39.81:{}]
	I0717 01:23:59.069218       1 main.go:303] handling current node
	I0717 01:23:59.069250       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0717 01:23:59.069269       1 main.go:326] Node multinode-025900-m02 has CIDR [10.244.1.0/24] 
	I0717 01:24:09.064615       1 main.go:299] Handling node with IPs: map[192.168.39.81:{}]
	I0717 01:24:09.064642       1 main.go:303] handling current node
	I0717 01:24:09.064655       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0717 01:24:09.064661       1 main.go:326] Node multinode-025900-m02 has CIDR [10.244.1.0/24] 
	I0717 01:24:19.063124       1 main.go:299] Handling node with IPs: map[192.168.39.81:{}]
	I0717 01:24:19.063172       1 main.go:303] handling current node
	I0717 01:24:19.063190       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0717 01:24:19.063196       1 main.go:326] Node multinode-025900-m02 has CIDR [10.244.1.0/24] 
	I0717 01:24:29.071625       1 main.go:299] Handling node with IPs: map[192.168.39.81:{}]
	I0717 01:24:29.071656       1 main.go:303] handling current node
	I0717 01:24:29.071670       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0717 01:24:29.071675       1 main.go:326] Node multinode-025900-m02 has CIDR [10.244.1.0/24] 
	I0717 01:24:39.063684       1 main.go:299] Handling node with IPs: map[192.168.39.81:{}]
	I0717 01:24:39.063817       1 main.go:303] handling current node
	I0717 01:24:39.063864       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0717 01:24:39.063883       1 main.go:326] Node multinode-025900-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [fee30035a5397ff83bf2c0b6a33399f0e7da88a1229f0a8fe067dbfb6a07b779] <==
	I0717 01:18:09.554501       1 main.go:326] Node multinode-025900-m03 has CIDR [10.244.3.0/24] 
	I0717 01:18:19.553118       1 main.go:299] Handling node with IPs: map[192.168.39.81:{}]
	I0717 01:18:19.553222       1 main.go:303] handling current node
	I0717 01:18:19.553242       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0717 01:18:19.553258       1 main.go:326] Node multinode-025900-m02 has CIDR [10.244.1.0/24] 
	I0717 01:18:19.553447       1 main.go:299] Handling node with IPs: map[192.168.39.68:{}]
	I0717 01:18:19.553482       1 main.go:326] Node multinode-025900-m03 has CIDR [10.244.3.0/24] 
	I0717 01:18:29.551734       1 main.go:299] Handling node with IPs: map[192.168.39.81:{}]
	I0717 01:18:29.551861       1 main.go:303] handling current node
	I0717 01:18:29.551894       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0717 01:18:29.551900       1 main.go:326] Node multinode-025900-m02 has CIDR [10.244.1.0/24] 
	I0717 01:18:29.552101       1 main.go:299] Handling node with IPs: map[192.168.39.68:{}]
	I0717 01:18:29.552123       1 main.go:326] Node multinode-025900-m03 has CIDR [10.244.3.0/24] 
	I0717 01:18:39.556859       1 main.go:299] Handling node with IPs: map[192.168.39.81:{}]
	I0717 01:18:39.557023       1 main.go:303] handling current node
	I0717 01:18:39.557057       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0717 01:18:39.557076       1 main.go:326] Node multinode-025900-m02 has CIDR [10.244.1.0/24] 
	I0717 01:18:39.557199       1 main.go:299] Handling node with IPs: map[192.168.39.68:{}]
	I0717 01:18:39.557223       1 main.go:326] Node multinode-025900-m03 has CIDR [10.244.3.0/24] 
	I0717 01:18:49.560044       1 main.go:299] Handling node with IPs: map[192.168.39.81:{}]
	I0717 01:18:49.560197       1 main.go:303] handling current node
	I0717 01:18:49.560234       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0717 01:18:49.560254       1 main.go:326] Node multinode-025900-m02 has CIDR [10.244.1.0/24] 
	I0717 01:18:49.560420       1 main.go:299] Handling node with IPs: map[192.168.39.68:{}]
	I0717 01:18:49.560445       1 main.go:326] Node multinode-025900-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [a60bf9ba16ac61e0757355fbfca3e6bf4087417422c5e90eba9a6093060e0be0] <==
	I0717 01:20:37.222209       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 01:20:37.222669       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0717 01:20:37.223215       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0717 01:20:37.222908       1 shared_informer.go:320] Caches are synced for configmaps
	I0717 01:20:37.223726       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0717 01:20:37.223796       1 aggregator.go:165] initial CRD sync complete...
	I0717 01:20:37.223817       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 01:20:37.223822       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 01:20:37.223827       1 cache.go:39] Caches are synced for autoregister controller
	I0717 01:20:37.228346       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0717 01:20:37.228593       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 01:20:37.229162       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 01:20:37.231270       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 01:20:37.236753       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 01:20:37.236792       1 policy_source.go:224] refreshing policies
	E0717 01:20:37.240117       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0717 01:20:37.248563       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 01:20:38.128651       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 01:20:39.163719       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 01:20:39.331691       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 01:20:39.351326       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 01:20:39.432750       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 01:20:39.439496       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 01:20:50.218793       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 01:20:50.260431       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [f7539247491f8361cb2a800e0adcc8d311eed5f77dc65eddeff7ffc89beffb31] <==
	W0717 01:18:55.367156       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.367211       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.367603       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.367720       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.367795       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.367869       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.367930       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.368554       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.368846       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.368916       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.369063       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.369126       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.369235       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.369379       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.369421       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.369486       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.369537       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.369567       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.369813       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.367625       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.369684       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.370071       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.370325       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.370399       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 01:18:55.371023       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [1dac5d8c8d8c1661aaa39b63c3c2d92f556e2d0280a8cf364f7a3beff50a95e2] <==
	I0717 01:14:41.892635       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025900-m02\" does not exist"
	I0717 01:14:41.951373       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025900-m02" podCIDRs=["10.244.1.0/24"]
	I0717 01:14:42.250279       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025900-m02"
	I0717 01:15:03.142876       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025900-m02"
	I0717 01:15:05.498398       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.013537ms"
	I0717 01:15:05.522407       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.882551ms"
	I0717 01:15:05.546545       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.879671ms"
	I0717 01:15:05.546633       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.189µs"
	I0717 01:15:10.288832       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.971202ms"
	I0717 01:15:10.288913       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.823µs"
	I0717 01:15:10.748660       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.467465ms"
	I0717 01:15:10.749417       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.091µs"
	I0717 01:15:39.178534       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025900-m02"
	I0717 01:15:39.181085       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025900-m03\" does not exist"
	I0717 01:15:39.209515       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025900-m03" podCIDRs=["10.244.2.0/24"]
	I0717 01:15:42.277033       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-025900-m03"
	I0717 01:16:00.350798       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025900-m02"
	I0717 01:16:28.931873       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025900-m02"
	I0717 01:16:29.903093       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025900-m03\" does not exist"
	I0717 01:16:29.903182       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025900-m02"
	I0717 01:16:29.912484       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025900-m03" podCIDRs=["10.244.3.0/24"]
	I0717 01:16:49.757064       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025900-m02"
	I0717 01:17:27.336519       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025900-m03"
	I0717 01:17:27.385920       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.57673ms"
	I0717 01:17:27.386357       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="182.934µs"
	
	
	==> kube-controller-manager [8fefc78079ae534d983463192be87a831caf618e85aaa27deed9ecb982be3216] <==
	I0717 01:21:17.140941       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025900-m02\" does not exist"
	I0717 01:21:17.149569       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025900-m02" podCIDRs=["10.244.1.0/24"]
	I0717 01:21:19.056540       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.844µs"
	I0717 01:21:19.068853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.082µs"
	I0717 01:21:19.106797       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.571µs"
	I0717 01:21:19.115830       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.152µs"
	I0717 01:21:19.120536       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.431µs"
	I0717 01:21:20.113567       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.482µs"
	I0717 01:21:37.489920       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025900-m02"
	I0717 01:21:37.518551       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="107.056µs"
	I0717 01:21:37.535574       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="95.066µs"
	I0717 01:21:42.460178       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.65861ms"
	I0717 01:21:42.460499       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.886µs"
	I0717 01:21:55.820610       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025900-m02"
	I0717 01:21:57.127577       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025900-m02"
	I0717 01:21:57.127701       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-025900-m03\" does not exist"
	I0717 01:21:57.145555       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-025900-m03" podCIDRs=["10.244.2.0/24"]
	I0717 01:22:16.067524       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025900-m02"
	I0717 01:22:21.608326       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-025900-m02"
	I0717 01:23:00.352642       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.372085ms"
	I0717 01:23:00.353322       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.248µs"
	I0717 01:23:10.300694       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-cft79"
	I0717 01:23:10.328224       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-cft79"
	I0717 01:23:10.328318       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-kspmt"
	I0717 01:23:10.348789       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-kspmt"
	
	
	==> kube-proxy [02159611beb77b72f88f037018edb8c4dcba98bc873042b2348f49a974b4253e] <==
	I0717 01:13:53.946142       1 server_linux.go:69] "Using iptables proxy"
	I0717 01:13:53.972503       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.81"]
	I0717 01:13:54.066058       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 01:13:54.066247       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 01:13:54.066273       1 server_linux.go:165] "Using iptables Proxier"
	I0717 01:13:54.069595       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 01:13:54.069925       1 server.go:872] "Version info" version="v1.30.2"
	I0717 01:13:54.070078       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:13:54.071122       1 config.go:192] "Starting service config controller"
	I0717 01:13:54.071152       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 01:13:54.071176       1 config.go:101] "Starting endpoint slice config controller"
	I0717 01:13:54.071180       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 01:13:54.071611       1 config.go:319] "Starting node config controller"
	I0717 01:13:54.071651       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 01:13:54.171323       1 shared_informer.go:320] Caches are synced for service config
	I0717 01:13:54.171380       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 01:13:54.171829       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [6f7408f53ed54301d1ac909f70778685b3042b8f99f9ca891315130401239235] <==
	I0717 01:20:38.169315       1 server_linux.go:69] "Using iptables proxy"
	I0717 01:20:38.187036       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.81"]
	I0717 01:20:38.258089       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 01:20:38.258122       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 01:20:38.258138       1 server_linux.go:165] "Using iptables Proxier"
	I0717 01:20:38.264713       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 01:20:38.272521       1 server.go:872] "Version info" version="v1.30.2"
	I0717 01:20:38.272538       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:20:38.281052       1 config.go:192] "Starting service config controller"
	I0717 01:20:38.284058       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 01:20:38.284275       1 config.go:101] "Starting endpoint slice config controller"
	I0717 01:20:38.284321       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 01:20:38.284861       1 config.go:319] "Starting node config controller"
	I0717 01:20:38.285657       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 01:20:38.384492       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 01:20:38.384668       1 shared_informer.go:320] Caches are synced for service config
	I0717 01:20:38.386395       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c9cc3d795f24816fab224839ea0e4f603d36a19e8f3b7b4a6bff27f7f2e32ee5] <==
	I0717 01:20:35.028848       1 serving.go:380] Generated self-signed cert in-memory
	W0717 01:20:37.184211       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 01:20:37.184439       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 01:20:37.184564       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 01:20:37.184593       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 01:20:37.201536       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0717 01:20:37.201576       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:20:37.203194       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0717 01:20:37.203417       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 01:20:37.203454       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 01:20:37.203487       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 01:20:37.303831       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d35d12e08dd5f99d8307485dda69263c87c063b3f3f6879c55b60bc5db183994] <==
	E0717 01:13:37.052609       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 01:13:37.053134       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 01:13:37.053176       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 01:13:37.962918       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 01:13:37.963043       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 01:13:37.970159       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 01:13:37.970202       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 01:13:38.040623       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 01:13:38.040710       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 01:13:38.143749       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 01:13:38.143844       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 01:13:38.186338       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 01:13:38.186424       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 01:13:38.203914       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 01:13:38.204098       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 01:13:38.206125       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 01:13:38.206216       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 01:13:38.241908       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 01:13:38.242053       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 01:13:38.259677       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 01:13:38.259903       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 01:13:38.271828       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 01:13:38.271918       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0717 01:13:39.940172       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0717 01:18:55.358356       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 17 01:20:37 multinode-025900 kubelet[3076]: I0717 01:20:37.291853    3076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/df859607-80ac-43ae-a91c-d10ef995b6dc-tmp\") pod \"storage-provisioner\" (UID: \"df859607-80ac-43ae-a91c-d10ef995b6dc\") " pod="kube-system/storage-provisioner"
	Jul 17 01:20:37 multinode-025900 kubelet[3076]: I0717 01:20:37.292230    3076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf14e761-3074-4396-9730-f5dd63d79c1c-lib-modules\") pod \"kindnet-97pxj\" (UID: \"cf14e761-3074-4396-9730-f5dd63d79c1c\") " pod="kube-system/kindnet-97pxj"
	Jul 17 01:20:37 multinode-025900 kubelet[3076]: I0717 01:20:37.292693    3076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0993395b-fc50-4564-b36e-83cc2a2113cf-xtables-lock\") pod \"kube-proxy-4qbwm\" (UID: \"0993395b-fc50-4564-b36e-83cc2a2113cf\") " pod="kube-system/kube-proxy-4qbwm"
	Jul 17 01:20:37 multinode-025900 kubelet[3076]: I0717 01:20:37.293087    3076 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf14e761-3074-4396-9730-f5dd63d79c1c-xtables-lock\") pod \"kindnet-97pxj\" (UID: \"cf14e761-3074-4396-9730-f5dd63d79c1c\") " pod="kube-system/kindnet-97pxj"
	Jul 17 01:20:44 multinode-025900 kubelet[3076]: I0717 01:20:44.293061    3076 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 17 01:21:33 multinode-025900 kubelet[3076]: E0717 01:21:33.340338    3076 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:21:33 multinode-025900 kubelet[3076]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:21:33 multinode-025900 kubelet[3076]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:21:33 multinode-025900 kubelet[3076]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:21:33 multinode-025900 kubelet[3076]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:22:33 multinode-025900 kubelet[3076]: E0717 01:22:33.340357    3076 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:22:33 multinode-025900 kubelet[3076]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:22:33 multinode-025900 kubelet[3076]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:22:33 multinode-025900 kubelet[3076]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:22:33 multinode-025900 kubelet[3076]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:23:33 multinode-025900 kubelet[3076]: E0717 01:23:33.338687    3076 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:23:33 multinode-025900 kubelet[3076]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:23:33 multinode-025900 kubelet[3076]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:23:33 multinode-025900 kubelet[3076]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:23:33 multinode-025900 kubelet[3076]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 01:24:33 multinode-025900 kubelet[3076]: E0717 01:24:33.340400    3076 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 01:24:33 multinode-025900 kubelet[3076]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 01:24:33 multinode-025900 kubelet[3076]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 01:24:33 multinode-025900 kubelet[3076]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 01:24:33 multinode-025900 kubelet[3076]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:24:42.157488   44135 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19264-3908/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
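Note on the stderr above: the "bufio.Scanner: token too long" message means a single line in lastStart.txt exceeds Go's default 64 KiB scanner buffer, so the post-mortem helper could not echo the last start log; it is cosmetic and separate from the StopMultiNode failure itself. A quick way to confirm the oversized line (a sketch, assuming shell access to the Jenkins workspace path shown in the error):

    # print the length of the longest line; anything above 65536 bytes
    # trips bufio.Scanner's default MaxScanTokenSize
    awk '{ if (length > max) max = length } END { print max }' \
      /home/jenkins/minikube-integration/19264-3908/.minikube/logs/lastStart.txt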
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-025900 -n multinode-025900
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-025900 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.22s)
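For context on the kubelet entries above: the recurring "Could not set up iptables canary" error means the guest kernel cannot provide an ip6tables "nat" table (the message itself suggests a missing module), and kubelet retries it once a minute; it is noise rather than the cause of the stop failure. A minimal way to poke at it from the host (a sketch; the profile name comes from the log above, and ip6table_nat as the module name is an assumption about the guest kernel):

    # open a shell in the affected node (hypothetical inspection, not part of the test)
    out/minikube-linux-amd64 ssh -p multinode-025900
    # inside the guest:
    sudo ip6tables -t nat -L -n     # fails while the nat table is unavailable
    lsmod | grep ip6table_nat       # check whether the module is loaded
    sudo modprobe ip6table_nat      # try loading it; it may not be built for the minikube ISO kernel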

                                                
                                    
x
+
TestPreload (298.92s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-055392 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0717 01:30:17.183289   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-055392 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (3m29.958937169s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-055392 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-055392 image pull gcr.io/k8s-minikube/busybox: (4.638985337s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-055392
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-055392: (7.268824161s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-055392 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0717 01:32:58.380047   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-055392 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m14.086293369s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-055392 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
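The image list above captures the failure: gcr.io/k8s-minikube/busybox, pulled before the stop, is gone after the restart, leaving only the v1.24.4 preload images. The sequence can be replayed by hand with the same commands and flags the test used (a sketch; the profile name and flags are copied from the log above):

    out/minikube-linux-amd64 start -p test-preload-055392 --memory=2200 --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4
    out/minikube-linux-amd64 -p test-preload-055392 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-amd64 stop -p test-preload-055392
    out/minikube-linux-amd64 start -p test-preload-055392 --memory=2200 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p test-preload-055392 image list    # busybox is expected here but missing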
panic.go:626: *** TestPreload FAILED at 2024-07-17 01:33:26.4248934 +0000 UTC m=+4305.472774519
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-055392 -n test-preload-055392
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-055392 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-055392 logs -n 25: (1.057573603s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-025900 ssh -n                                                                 | multinode-025900     | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | multinode-025900-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-025900 ssh -n multinode-025900 sudo cat                                       | multinode-025900     | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | /home/docker/cp-test_multinode-025900-m03_multinode-025900.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-025900 cp multinode-025900-m03:/home/docker/cp-test.txt                       | multinode-025900     | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | multinode-025900-m02:/home/docker/cp-test_multinode-025900-m03_multinode-025900-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-025900 ssh -n                                                                 | multinode-025900     | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | multinode-025900-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-025900 ssh -n multinode-025900-m02 sudo cat                                   | multinode-025900     | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | /home/docker/cp-test_multinode-025900-m03_multinode-025900-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-025900 node stop m03                                                          | multinode-025900     | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	| node    | multinode-025900 node start                                                             | multinode-025900     | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC | 17 Jul 24 01:16 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-025900                                                                | multinode-025900     | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC |                     |
	| stop    | -p multinode-025900                                                                     | multinode-025900     | jenkins | v1.33.1 | 17 Jul 24 01:16 UTC |                     |
	| start   | -p multinode-025900                                                                     | multinode-025900     | jenkins | v1.33.1 | 17 Jul 24 01:18 UTC | 17 Jul 24 01:22 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-025900                                                                | multinode-025900     | jenkins | v1.33.1 | 17 Jul 24 01:22 UTC |                     |
	| node    | multinode-025900 node delete                                                            | multinode-025900     | jenkins | v1.33.1 | 17 Jul 24 01:22 UTC | 17 Jul 24 01:22 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-025900 stop                                                                   | multinode-025900     | jenkins | v1.33.1 | 17 Jul 24 01:22 UTC |                     |
	| start   | -p multinode-025900                                                                     | multinode-025900     | jenkins | v1.33.1 | 17 Jul 24 01:24 UTC | 17 Jul 24 01:27 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-025900                                                                | multinode-025900     | jenkins | v1.33.1 | 17 Jul 24 01:27 UTC |                     |
	| start   | -p multinode-025900-m02                                                                 | multinode-025900-m02 | jenkins | v1.33.1 | 17 Jul 24 01:27 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-025900-m03                                                                 | multinode-025900-m03 | jenkins | v1.33.1 | 17 Jul 24 01:27 UTC | 17 Jul 24 01:28 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-025900                                                                 | multinode-025900     | jenkins | v1.33.1 | 17 Jul 24 01:28 UTC |                     |
	| delete  | -p multinode-025900-m03                                                                 | multinode-025900-m03 | jenkins | v1.33.1 | 17 Jul 24 01:28 UTC | 17 Jul 24 01:28 UTC |
	| delete  | -p multinode-025900                                                                     | multinode-025900     | jenkins | v1.33.1 | 17 Jul 24 01:28 UTC | 17 Jul 24 01:28 UTC |
	| start   | -p test-preload-055392                                                                  | test-preload-055392  | jenkins | v1.33.1 | 17 Jul 24 01:28 UTC | 17 Jul 24 01:32 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-055392 image pull                                                          | test-preload-055392  | jenkins | v1.33.1 | 17 Jul 24 01:32 UTC | 17 Jul 24 01:32 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-055392                                                                  | test-preload-055392  | jenkins | v1.33.1 | 17 Jul 24 01:32 UTC | 17 Jul 24 01:32 UTC |
	| start   | -p test-preload-055392                                                                  | test-preload-055392  | jenkins | v1.33.1 | 17 Jul 24 01:32 UTC | 17 Jul 24 01:33 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-055392 image list                                                          | test-preload-055392  | jenkins | v1.33.1 | 17 Jul 24 01:33 UTC | 17 Jul 24 01:33 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 01:32:12
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 01:32:12.170341   46933 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:32:12.170566   46933 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:32:12.170581   46933 out.go:304] Setting ErrFile to fd 2...
	I0717 01:32:12.170588   46933 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:32:12.171204   46933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 01:32:12.171886   46933 out.go:298] Setting JSON to false
	I0717 01:32:12.172730   46933 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4474,"bootTime":1721175458,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:32:12.172789   46933 start.go:139] virtualization: kvm guest
	I0717 01:32:12.174862   46933 out.go:177] * [test-preload-055392] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:32:12.176357   46933 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 01:32:12.176378   46933 notify.go:220] Checking for updates...
	I0717 01:32:12.178706   46933 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:32:12.179857   46933 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:32:12.180887   46933 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 01:32:12.181943   46933 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:32:12.182917   46933 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:32:12.184466   46933 config.go:182] Loaded profile config "test-preload-055392": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0717 01:32:12.185109   46933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:32:12.185188   46933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:32:12.200723   46933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32793
	I0717 01:32:12.201170   46933 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:32:12.201769   46933 main.go:141] libmachine: Using API Version  1
	I0717 01:32:12.201791   46933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:32:12.202168   46933 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:32:12.202359   46933 main.go:141] libmachine: (test-preload-055392) Calling .DriverName
	I0717 01:32:12.204094   46933 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0717 01:32:12.205306   46933 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:32:12.205608   46933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:32:12.205642   46933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:32:12.220963   46933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36849
	I0717 01:32:12.221389   46933 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:32:12.221829   46933 main.go:141] libmachine: Using API Version  1
	I0717 01:32:12.221852   46933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:32:12.222124   46933 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:32:12.222331   46933 main.go:141] libmachine: (test-preload-055392) Calling .DriverName
	I0717 01:32:12.256951   46933 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 01:32:12.258088   46933 start.go:297] selected driver: kvm2
	I0717 01:32:12.258108   46933 start.go:901] validating driver "kvm2" against &{Name:test-preload-055392 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.24.4 ClusterName:test-preload-055392 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.157 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:32:12.258233   46933 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:32:12.259147   46933 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:12.259227   46933 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19264-3908/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:32:12.273501   46933 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:32:12.273792   46933 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:32:12.273846   46933 cni.go:84] Creating CNI manager for ""
	I0717 01:32:12.273859   46933 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:32:12.273911   46933 start.go:340] cluster config:
	{Name:test-preload-055392 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-055392 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.157 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:32:12.274010   46933 iso.go:125] acquiring lock: {Name:mk1e382ea32906eee794c9b90164432d2562b456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:32:12.275516   46933 out.go:177] * Starting "test-preload-055392" primary control-plane node in "test-preload-055392" cluster
	I0717 01:32:12.276565   46933 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0717 01:32:12.429387   46933 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0717 01:32:12.429428   46933 cache.go:56] Caching tarball of preloaded images
	I0717 01:32:12.429581   46933 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0717 01:32:12.431398   46933 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0717 01:32:12.432571   46933 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0717 01:32:12.589745   46933 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0717 01:32:31.001742   46933 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0717 01:32:31.001834   46933 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0717 01:32:31.843369   46933 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0717 01:32:31.843483   46933 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/test-preload-055392/config.json ...
	I0717 01:32:31.843710   46933 start.go:360] acquireMachinesLock for test-preload-055392: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:32:31.843773   46933 start.go:364] duration metric: took 42.328µs to acquireMachinesLock for "test-preload-055392"
	I0717 01:32:31.843787   46933 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:32:31.843792   46933 fix.go:54] fixHost starting: 
	I0717 01:32:31.844113   46933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:32:31.844143   46933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:32:31.858480   46933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43395
	I0717 01:32:31.858956   46933 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:32:31.859439   46933 main.go:141] libmachine: Using API Version  1
	I0717 01:32:31.859483   46933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:32:31.859788   46933 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:32:31.859940   46933 main.go:141] libmachine: (test-preload-055392) Calling .DriverName
	I0717 01:32:31.860075   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetState
	I0717 01:32:31.861722   46933 fix.go:112] recreateIfNeeded on test-preload-055392: state=Stopped err=<nil>
	I0717 01:32:31.861748   46933 main.go:141] libmachine: (test-preload-055392) Calling .DriverName
	W0717 01:32:31.861912   46933 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:32:31.864185   46933 out.go:177] * Restarting existing kvm2 VM for "test-preload-055392" ...
	I0717 01:32:31.865811   46933 main.go:141] libmachine: (test-preload-055392) Calling .Start
	I0717 01:32:31.865964   46933 main.go:141] libmachine: (test-preload-055392) Ensuring networks are active...
	I0717 01:32:31.866700   46933 main.go:141] libmachine: (test-preload-055392) Ensuring network default is active
	I0717 01:32:31.867029   46933 main.go:141] libmachine: (test-preload-055392) Ensuring network mk-test-preload-055392 is active
	I0717 01:32:31.867337   46933 main.go:141] libmachine: (test-preload-055392) Getting domain xml...
	I0717 01:32:31.867968   46933 main.go:141] libmachine: (test-preload-055392) Creating domain...
	I0717 01:32:33.044225   46933 main.go:141] libmachine: (test-preload-055392) Waiting to get IP...
	I0717 01:32:33.045316   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:33.045657   46933 main.go:141] libmachine: (test-preload-055392) DBG | unable to find current IP address of domain test-preload-055392 in network mk-test-preload-055392
	I0717 01:32:33.045710   46933 main.go:141] libmachine: (test-preload-055392) DBG | I0717 01:32:33.045645   47033 retry.go:31] will retry after 193.43804ms: waiting for machine to come up
	I0717 01:32:33.241175   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:33.241546   46933 main.go:141] libmachine: (test-preload-055392) DBG | unable to find current IP address of domain test-preload-055392 in network mk-test-preload-055392
	I0717 01:32:33.241574   46933 main.go:141] libmachine: (test-preload-055392) DBG | I0717 01:32:33.241493   47033 retry.go:31] will retry after 305.628379ms: waiting for machine to come up
	I0717 01:32:33.548711   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:33.549148   46933 main.go:141] libmachine: (test-preload-055392) DBG | unable to find current IP address of domain test-preload-055392 in network mk-test-preload-055392
	I0717 01:32:33.549173   46933 main.go:141] libmachine: (test-preload-055392) DBG | I0717 01:32:33.549092   47033 retry.go:31] will retry after 313.10668ms: waiting for machine to come up
	I0717 01:32:33.863542   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:33.864006   46933 main.go:141] libmachine: (test-preload-055392) DBG | unable to find current IP address of domain test-preload-055392 in network mk-test-preload-055392
	I0717 01:32:33.864029   46933 main.go:141] libmachine: (test-preload-055392) DBG | I0717 01:32:33.863969   47033 retry.go:31] will retry after 398.686907ms: waiting for machine to come up
	I0717 01:32:34.264657   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:34.265059   46933 main.go:141] libmachine: (test-preload-055392) DBG | unable to find current IP address of domain test-preload-055392 in network mk-test-preload-055392
	I0717 01:32:34.265085   46933 main.go:141] libmachine: (test-preload-055392) DBG | I0717 01:32:34.265012   47033 retry.go:31] will retry after 630.449307ms: waiting for machine to come up
	I0717 01:32:34.896744   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:34.897128   46933 main.go:141] libmachine: (test-preload-055392) DBG | unable to find current IP address of domain test-preload-055392 in network mk-test-preload-055392
	I0717 01:32:34.897154   46933 main.go:141] libmachine: (test-preload-055392) DBG | I0717 01:32:34.897069   47033 retry.go:31] will retry after 605.21026ms: waiting for machine to come up
	I0717 01:32:35.503714   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:35.504066   46933 main.go:141] libmachine: (test-preload-055392) DBG | unable to find current IP address of domain test-preload-055392 in network mk-test-preload-055392
	I0717 01:32:35.504094   46933 main.go:141] libmachine: (test-preload-055392) DBG | I0717 01:32:35.504023   47033 retry.go:31] will retry after 901.491952ms: waiting for machine to come up
	I0717 01:32:36.406640   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:36.407047   46933 main.go:141] libmachine: (test-preload-055392) DBG | unable to find current IP address of domain test-preload-055392 in network mk-test-preload-055392
	I0717 01:32:36.407074   46933 main.go:141] libmachine: (test-preload-055392) DBG | I0717 01:32:36.407018   47033 retry.go:31] will retry after 1.294513075s: waiting for machine to come up
	I0717 01:32:37.702654   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:37.703014   46933 main.go:141] libmachine: (test-preload-055392) DBG | unable to find current IP address of domain test-preload-055392 in network mk-test-preload-055392
	I0717 01:32:37.703037   46933 main.go:141] libmachine: (test-preload-055392) DBG | I0717 01:32:37.702966   47033 retry.go:31] will retry after 1.501116132s: waiting for machine to come up
	I0717 01:32:39.206639   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:39.207117   46933 main.go:141] libmachine: (test-preload-055392) DBG | unable to find current IP address of domain test-preload-055392 in network mk-test-preload-055392
	I0717 01:32:39.207144   46933 main.go:141] libmachine: (test-preload-055392) DBG | I0717 01:32:39.207073   47033 retry.go:31] will retry after 1.92539725s: waiting for machine to come up
	I0717 01:32:41.135085   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:41.135465   46933 main.go:141] libmachine: (test-preload-055392) DBG | unable to find current IP address of domain test-preload-055392 in network mk-test-preload-055392
	I0717 01:32:41.135493   46933 main.go:141] libmachine: (test-preload-055392) DBG | I0717 01:32:41.135435   47033 retry.go:31] will retry after 2.1625685s: waiting for machine to come up
	I0717 01:32:43.300038   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:43.300393   46933 main.go:141] libmachine: (test-preload-055392) DBG | unable to find current IP address of domain test-preload-055392 in network mk-test-preload-055392
	I0717 01:32:43.300422   46933 main.go:141] libmachine: (test-preload-055392) DBG | I0717 01:32:43.300360   47033 retry.go:31] will retry after 2.651036761s: waiting for machine to come up
	I0717 01:32:45.953979   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:45.954475   46933 main.go:141] libmachine: (test-preload-055392) DBG | unable to find current IP address of domain test-preload-055392 in network mk-test-preload-055392
	I0717 01:32:45.954500   46933 main.go:141] libmachine: (test-preload-055392) DBG | I0717 01:32:45.954452   47033 retry.go:31] will retry after 4.04965258s: waiting for machine to come up
	I0717 01:32:50.007694   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:50.008088   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has current primary IP address 192.168.39.157 and MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:50.008101   46933 main.go:141] libmachine: (test-preload-055392) Found IP for machine: 192.168.39.157
	I0717 01:32:50.008110   46933 main.go:141] libmachine: (test-preload-055392) Reserving static IP address...
	I0717 01:32:50.008483   46933 main.go:141] libmachine: (test-preload-055392) DBG | found host DHCP lease matching {name: "test-preload-055392", mac: "52:54:00:0a:c5:d0", ip: "192.168.39.157"} in network mk-test-preload-055392: {Iface:virbr1 ExpiryTime:2024-07-17 02:32:42 +0000 UTC Type:0 Mac:52:54:00:0a:c5:d0 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-055392 Clientid:01:52:54:00:0a:c5:d0}
	I0717 01:32:50.008508   46933 main.go:141] libmachine: (test-preload-055392) DBG | skip adding static IP to network mk-test-preload-055392 - found existing host DHCP lease matching {name: "test-preload-055392", mac: "52:54:00:0a:c5:d0", ip: "192.168.39.157"}
	I0717 01:32:50.008517   46933 main.go:141] libmachine: (test-preload-055392) Reserved static IP address: 192.168.39.157
	I0717 01:32:50.008530   46933 main.go:141] libmachine: (test-preload-055392) Waiting for SSH to be available...
	I0717 01:32:50.008546   46933 main.go:141] libmachine: (test-preload-055392) DBG | Getting to WaitForSSH function...
	I0717 01:32:50.010213   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:50.010535   46933 main.go:141] libmachine: (test-preload-055392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:c5:d0", ip: ""} in network mk-test-preload-055392: {Iface:virbr1 ExpiryTime:2024-07-17 02:32:42 +0000 UTC Type:0 Mac:52:54:00:0a:c5:d0 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-055392 Clientid:01:52:54:00:0a:c5:d0}
	I0717 01:32:50.010585   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined IP address 192.168.39.157 and MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:50.010655   46933 main.go:141] libmachine: (test-preload-055392) DBG | Using SSH client type: external
	I0717 01:32:50.010701   46933 main.go:141] libmachine: (test-preload-055392) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/test-preload-055392/id_rsa (-rw-------)
	I0717 01:32:50.010747   46933 main.go:141] libmachine: (test-preload-055392) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.157 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/test-preload-055392/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:32:50.010765   46933 main.go:141] libmachine: (test-preload-055392) DBG | About to run SSH command:
	I0717 01:32:50.010774   46933 main.go:141] libmachine: (test-preload-055392) DBG | exit 0
	I0717 01:32:50.134344   46933 main.go:141] libmachine: (test-preload-055392) DBG | SSH cmd err, output: <nil>: 
	I0717 01:32:50.134738   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetConfigRaw
	I0717 01:32:50.135352   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetIP
	I0717 01:32:50.137939   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:50.138312   46933 main.go:141] libmachine: (test-preload-055392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:c5:d0", ip: ""} in network mk-test-preload-055392: {Iface:virbr1 ExpiryTime:2024-07-17 02:32:42 +0000 UTC Type:0 Mac:52:54:00:0a:c5:d0 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-055392 Clientid:01:52:54:00:0a:c5:d0}
	I0717 01:32:50.138339   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined IP address 192.168.39.157 and MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:50.138612   46933 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/test-preload-055392/config.json ...
	I0717 01:32:50.138824   46933 machine.go:94] provisionDockerMachine start ...
	I0717 01:32:50.138843   46933 main.go:141] libmachine: (test-preload-055392) Calling .DriverName
	I0717 01:32:50.139044   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHHostname
	I0717 01:32:50.141218   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:50.141470   46933 main.go:141] libmachine: (test-preload-055392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:c5:d0", ip: ""} in network mk-test-preload-055392: {Iface:virbr1 ExpiryTime:2024-07-17 02:32:42 +0000 UTC Type:0 Mac:52:54:00:0a:c5:d0 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-055392 Clientid:01:52:54:00:0a:c5:d0}
	I0717 01:32:50.141490   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined IP address 192.168.39.157 and MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:50.141665   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHPort
	I0717 01:32:50.141825   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHKeyPath
	I0717 01:32:50.141955   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHKeyPath
	I0717 01:32:50.142060   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHUsername
	I0717 01:32:50.142176   46933 main.go:141] libmachine: Using SSH client type: native
	I0717 01:32:50.142345   46933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I0717 01:32:50.142354   46933 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:32:50.246916   46933 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:32:50.246937   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetMachineName
	I0717 01:32:50.247147   46933 buildroot.go:166] provisioning hostname "test-preload-055392"
	I0717 01:32:50.247172   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetMachineName
	I0717 01:32:50.247365   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHHostname
	I0717 01:32:50.249731   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:50.250082   46933 main.go:141] libmachine: (test-preload-055392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:c5:d0", ip: ""} in network mk-test-preload-055392: {Iface:virbr1 ExpiryTime:2024-07-17 02:32:42 +0000 UTC Type:0 Mac:52:54:00:0a:c5:d0 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-055392 Clientid:01:52:54:00:0a:c5:d0}
	I0717 01:32:50.250110   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined IP address 192.168.39.157 and MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:50.250267   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHPort
	I0717 01:32:50.250446   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHKeyPath
	I0717 01:32:50.250614   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHKeyPath
	I0717 01:32:50.250729   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHUsername
	I0717 01:32:50.250913   46933 main.go:141] libmachine: Using SSH client type: native
	I0717 01:32:50.251072   46933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I0717 01:32:50.251085   46933 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-055392 && echo "test-preload-055392" | sudo tee /etc/hostname
	I0717 01:32:50.368773   46933 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-055392
	
	I0717 01:32:50.368800   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHHostname
	I0717 01:32:50.371255   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:50.371491   46933 main.go:141] libmachine: (test-preload-055392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:c5:d0", ip: ""} in network mk-test-preload-055392: {Iface:virbr1 ExpiryTime:2024-07-17 02:32:42 +0000 UTC Type:0 Mac:52:54:00:0a:c5:d0 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-055392 Clientid:01:52:54:00:0a:c5:d0}
	I0717 01:32:50.371512   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined IP address 192.168.39.157 and MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:50.371638   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHPort
	I0717 01:32:50.371838   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHKeyPath
	I0717 01:32:50.372018   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHKeyPath
	I0717 01:32:50.372156   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHUsername
	I0717 01:32:50.372285   46933 main.go:141] libmachine: Using SSH client type: native
	I0717 01:32:50.372481   46933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I0717 01:32:50.372500   46933 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-055392' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-055392/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-055392' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:32:50.483703   46933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:32:50.483728   46933 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:32:50.483752   46933 buildroot.go:174] setting up certificates
	I0717 01:32:50.483761   46933 provision.go:84] configureAuth start
	I0717 01:32:50.483769   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetMachineName
	I0717 01:32:50.484029   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetIP
	I0717 01:32:50.486595   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:50.486955   46933 main.go:141] libmachine: (test-preload-055392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:c5:d0", ip: ""} in network mk-test-preload-055392: {Iface:virbr1 ExpiryTime:2024-07-17 02:32:42 +0000 UTC Type:0 Mac:52:54:00:0a:c5:d0 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-055392 Clientid:01:52:54:00:0a:c5:d0}
	I0717 01:32:50.487003   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined IP address 192.168.39.157 and MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:50.487162   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHHostname
	I0717 01:32:50.489152   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:50.489384   46933 main.go:141] libmachine: (test-preload-055392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:c5:d0", ip: ""} in network mk-test-preload-055392: {Iface:virbr1 ExpiryTime:2024-07-17 02:32:42 +0000 UTC Type:0 Mac:52:54:00:0a:c5:d0 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-055392 Clientid:01:52:54:00:0a:c5:d0}
	I0717 01:32:50.489409   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined IP address 192.168.39.157 and MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:50.489520   46933 provision.go:143] copyHostCerts
	I0717 01:32:50.489590   46933 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:32:50.489600   46933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:32:50.489698   46933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:32:50.489805   46933 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:32:50.489816   46933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:32:50.489853   46933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:32:50.489927   46933 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:32:50.489937   46933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:32:50.489969   46933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:32:50.490040   46933 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.test-preload-055392 san=[127.0.0.1 192.168.39.157 localhost minikube test-preload-055392]
	I0717 01:32:50.584069   46933 provision.go:177] copyRemoteCerts
	I0717 01:32:50.584125   46933 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:32:50.584154   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHHostname
	I0717 01:32:50.586510   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:50.586888   46933 main.go:141] libmachine: (test-preload-055392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:c5:d0", ip: ""} in network mk-test-preload-055392: {Iface:virbr1 ExpiryTime:2024-07-17 02:32:42 +0000 UTC Type:0 Mac:52:54:00:0a:c5:d0 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-055392 Clientid:01:52:54:00:0a:c5:d0}
	I0717 01:32:50.586910   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined IP address 192.168.39.157 and MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:50.587119   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHPort
	I0717 01:32:50.587316   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHKeyPath
	I0717 01:32:50.587467   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHUsername
	I0717 01:32:50.587604   46933 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/test-preload-055392/id_rsa Username:docker}
	I0717 01:32:50.668269   46933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:32:50.692173   46933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0717 01:32:50.715660   46933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 01:32:50.738121   46933 provision.go:87] duration metric: took 254.347509ms to configureAuth
	I0717 01:32:50.738151   46933 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:32:50.738339   46933 config.go:182] Loaded profile config "test-preload-055392": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0717 01:32:50.738405   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHHostname
	I0717 01:32:50.740594   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:50.740968   46933 main.go:141] libmachine: (test-preload-055392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:c5:d0", ip: ""} in network mk-test-preload-055392: {Iface:virbr1 ExpiryTime:2024-07-17 02:32:42 +0000 UTC Type:0 Mac:52:54:00:0a:c5:d0 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-055392 Clientid:01:52:54:00:0a:c5:d0}
	I0717 01:32:50.740987   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined IP address 192.168.39.157 and MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:50.741178   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHPort
	I0717 01:32:50.741342   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHKeyPath
	I0717 01:32:50.741443   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHKeyPath
	I0717 01:32:50.741524   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHUsername
	I0717 01:32:50.741606   46933 main.go:141] libmachine: Using SSH client type: native
	I0717 01:32:50.741760   46933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I0717 01:32:50.741777   46933 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:32:50.994942   46933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:32:50.994971   46933 machine.go:97] duration metric: took 856.134125ms to provisionDockerMachine
	I0717 01:32:50.994985   46933 start.go:293] postStartSetup for "test-preload-055392" (driver="kvm2")
	I0717 01:32:50.994995   46933 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:32:50.995008   46933 main.go:141] libmachine: (test-preload-055392) Calling .DriverName
	I0717 01:32:50.995273   46933 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:32:50.995298   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHHostname
	I0717 01:32:50.997571   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:50.997944   46933 main.go:141] libmachine: (test-preload-055392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:c5:d0", ip: ""} in network mk-test-preload-055392: {Iface:virbr1 ExpiryTime:2024-07-17 02:32:42 +0000 UTC Type:0 Mac:52:54:00:0a:c5:d0 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-055392 Clientid:01:52:54:00:0a:c5:d0}
	I0717 01:32:50.997974   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined IP address 192.168.39.157 and MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:50.998099   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHPort
	I0717 01:32:50.998281   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHKeyPath
	I0717 01:32:50.998443   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHUsername
	I0717 01:32:50.998588   46933 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/test-preload-055392/id_rsa Username:docker}
	I0717 01:32:51.080718   46933 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:32:51.084842   46933 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:32:51.084863   46933 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:32:51.084929   46933 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:32:51.085051   46933 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:32:51.085170   46933 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:32:51.094263   46933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:32:51.118332   46933 start.go:296] duration metric: took 123.332531ms for postStartSetup
	I0717 01:32:51.118375   46933 fix.go:56] duration metric: took 19.27457886s for fixHost
	I0717 01:32:51.118393   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHHostname
	I0717 01:32:51.121023   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:51.121324   46933 main.go:141] libmachine: (test-preload-055392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:c5:d0", ip: ""} in network mk-test-preload-055392: {Iface:virbr1 ExpiryTime:2024-07-17 02:32:42 +0000 UTC Type:0 Mac:52:54:00:0a:c5:d0 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-055392 Clientid:01:52:54:00:0a:c5:d0}
	I0717 01:32:51.121353   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined IP address 192.168.39.157 and MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:51.121491   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHPort
	I0717 01:32:51.121693   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHKeyPath
	I0717 01:32:51.121864   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHKeyPath
	I0717 01:32:51.122002   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHUsername
	I0717 01:32:51.122162   46933 main.go:141] libmachine: Using SSH client type: native
	I0717 01:32:51.122315   46933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I0717 01:32:51.122326   46933 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:32:51.223473   46933 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721179971.198789760
	
	I0717 01:32:51.223516   46933 fix.go:216] guest clock: 1721179971.198789760
	I0717 01:32:51.223527   46933 fix.go:229] Guest: 2024-07-17 01:32:51.19878976 +0000 UTC Remote: 2024-07-17 01:32:51.118378413 +0000 UTC m=+38.981314429 (delta=80.411347ms)
	I0717 01:32:51.223566   46933 fix.go:200] guest clock delta is within tolerance: 80.411347ms
	I0717 01:32:51.223573   46933 start.go:83] releasing machines lock for "test-preload-055392", held for 19.379790069s
	I0717 01:32:51.223598   46933 main.go:141] libmachine: (test-preload-055392) Calling .DriverName
	I0717 01:32:51.223850   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetIP
	I0717 01:32:51.226579   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:51.226869   46933 main.go:141] libmachine: (test-preload-055392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:c5:d0", ip: ""} in network mk-test-preload-055392: {Iface:virbr1 ExpiryTime:2024-07-17 02:32:42 +0000 UTC Type:0 Mac:52:54:00:0a:c5:d0 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-055392 Clientid:01:52:54:00:0a:c5:d0}
	I0717 01:32:51.226898   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined IP address 192.168.39.157 and MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:51.226994   46933 main.go:141] libmachine: (test-preload-055392) Calling .DriverName
	I0717 01:32:51.227465   46933 main.go:141] libmachine: (test-preload-055392) Calling .DriverName
	I0717 01:32:51.227658   46933 main.go:141] libmachine: (test-preload-055392) Calling .DriverName
	I0717 01:32:51.227767   46933 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:32:51.227818   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHHostname
	I0717 01:32:51.227852   46933 ssh_runner.go:195] Run: cat /version.json
	I0717 01:32:51.227876   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHHostname
	I0717 01:32:51.230351   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:51.230568   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:51.230699   46933 main.go:141] libmachine: (test-preload-055392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:c5:d0", ip: ""} in network mk-test-preload-055392: {Iface:virbr1 ExpiryTime:2024-07-17 02:32:42 +0000 UTC Type:0 Mac:52:54:00:0a:c5:d0 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-055392 Clientid:01:52:54:00:0a:c5:d0}
	I0717 01:32:51.230728   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined IP address 192.168.39.157 and MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:51.230871   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHPort
	I0717 01:32:51.231116   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHKeyPath
	I0717 01:32:51.231121   46933 main.go:141] libmachine: (test-preload-055392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:c5:d0", ip: ""} in network mk-test-preload-055392: {Iface:virbr1 ExpiryTime:2024-07-17 02:32:42 +0000 UTC Type:0 Mac:52:54:00:0a:c5:d0 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-055392 Clientid:01:52:54:00:0a:c5:d0}
	I0717 01:32:51.231154   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined IP address 192.168.39.157 and MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:51.231245   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHUsername
	I0717 01:32:51.231337   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHPort
	I0717 01:32:51.231437   46933 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/test-preload-055392/id_rsa Username:docker}
	I0717 01:32:51.231491   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHKeyPath
	I0717 01:32:51.231617   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHUsername
	I0717 01:32:51.231784   46933 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/test-preload-055392/id_rsa Username:docker}
	I0717 01:32:51.344714   46933 ssh_runner.go:195] Run: systemctl --version
	I0717 01:32:51.350684   46933 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:32:51.496512   46933 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:32:51.502949   46933 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:32:51.503014   46933 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:32:51.518208   46933 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:32:51.518227   46933 start.go:495] detecting cgroup driver to use...
	I0717 01:32:51.518278   46933 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:32:51.536402   46933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:32:51.550989   46933 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:32:51.551042   46933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:32:51.564815   46933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:32:51.578865   46933 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:32:51.700406   46933 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:32:51.834232   46933 docker.go:233] disabling docker service ...
	I0717 01:32:51.834303   46933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:32:51.848095   46933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:32:51.861093   46933 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:32:51.992077   46933 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:32:52.106303   46933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:32:52.120727   46933 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:32:52.138468   46933 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0717 01:32:52.138612   46933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:32:52.149161   46933 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:32:52.149230   46933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:32:52.159703   46933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:32:52.170111   46933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:32:52.180946   46933 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:32:52.192178   46933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:32:52.203186   46933 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:32:52.221329   46933 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:32:52.232505   46933 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:32:52.242728   46933 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:32:52.242808   46933 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:32:52.258685   46933 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:32:52.270811   46933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:32:52.397398   46933 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:32:52.531851   46933 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:32:52.531928   46933 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:32:52.537261   46933 start.go:563] Will wait 60s for crictl version
	I0717 01:32:52.537319   46933 ssh_runner.go:195] Run: which crictl
	I0717 01:32:52.541021   46933 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:32:52.581115   46933 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:32:52.581185   46933 ssh_runner.go:195] Run: crio --version
	I0717 01:32:52.612135   46933 ssh_runner.go:195] Run: crio --version
	I0717 01:32:52.642200   46933 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0717 01:32:52.643584   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetIP
	I0717 01:32:52.646111   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:52.646403   46933 main.go:141] libmachine: (test-preload-055392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:c5:d0", ip: ""} in network mk-test-preload-055392: {Iface:virbr1 ExpiryTime:2024-07-17 02:32:42 +0000 UTC Type:0 Mac:52:54:00:0a:c5:d0 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-055392 Clientid:01:52:54:00:0a:c5:d0}
	I0717 01:32:52.646432   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined IP address 192.168.39.157 and MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:32:52.646659   46933 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 01:32:52.650886   46933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:32:52.663784   46933 kubeadm.go:883] updating cluster {Name:test-preload-055392 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-055392 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.157 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:32:52.663890   46933 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0717 01:32:52.663944   46933 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:32:52.700868   46933 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0717 01:32:52.700938   46933 ssh_runner.go:195] Run: which lz4
	I0717 01:32:52.705062   46933 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 01:32:52.709391   46933 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:32:52.709411   46933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0717 01:32:54.261081   46933 crio.go:462] duration metric: took 1.556040752s to copy over tarball
	I0717 01:32:54.261171   46933 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:32:56.629255   46933 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.368051587s)
	I0717 01:32:56.629294   46933 crio.go:469] duration metric: took 2.368181151s to extract the tarball
	I0717 01:32:56.629303   46933 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 01:32:56.670311   46933 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:32:56.712958   46933 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0717 01:32:56.712982   46933 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 01:32:56.713035   46933 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:32:56.713060   46933 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0717 01:32:56.713112   46933 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0717 01:32:56.713139   46933 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0717 01:32:56.713159   46933 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0717 01:32:56.713188   46933 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0717 01:32:56.713218   46933 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0717 01:32:56.713118   46933 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0717 01:32:56.714447   46933 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0717 01:32:56.714459   46933 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0717 01:32:56.714462   46933 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:32:56.714485   46933 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0717 01:32:56.714485   46933 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0717 01:32:56.714460   46933 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0717 01:32:56.714507   46933 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0717 01:32:56.714448   46933 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0717 01:32:56.920741   46933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0717 01:32:56.937808   46933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0717 01:32:56.962251   46933 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0717 01:32:56.962294   46933 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0717 01:32:56.962346   46933 ssh_runner.go:195] Run: which crictl
	I0717 01:32:56.989850   46933 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0717 01:32:56.989896   46933 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0717 01:32:56.989926   46933 ssh_runner.go:195] Run: which crictl
	I0717 01:32:56.989928   46933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0717 01:32:57.005611   46933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0717 01:32:57.007353   46933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0717 01:32:57.020174   46933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0717 01:32:57.021330   46933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0717 01:32:57.040319   46933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0717 01:32:57.040367   46933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0717 01:32:57.040475   46933 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0717 01:32:57.060756   46933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0717 01:32:57.121863   46933 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0717 01:32:57.121912   46933 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0717 01:32:57.121985   46933 ssh_runner.go:195] Run: which crictl
	I0717 01:32:57.132759   46933 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0717 01:32:57.132800   46933 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0717 01:32:57.132832   46933 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0717 01:32:57.132865   46933 ssh_runner.go:195] Run: which crictl
	I0717 01:32:57.132865   46933 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0717 01:32:57.132970   46933 ssh_runner.go:195] Run: which crictl
	I0717 01:32:57.178529   46933 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0717 01:32:57.178586   46933 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0717 01:32:57.178638   46933 ssh_runner.go:195] Run: which crictl
	I0717 01:32:57.190633   46933 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0717 01:32:57.190674   46933 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0717 01:32:57.190723   46933 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0717 01:32:57.190791   46933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0717 01:32:57.190894   46933 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0717 01:32:57.196651   46933 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0717 01:32:57.196688   46933 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0717 01:32:57.196718   46933 ssh_runner.go:195] Run: which crictl
	I0717 01:32:57.196743   46933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0717 01:32:57.196804   46933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0717 01:32:57.196814   46933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0717 01:32:57.196848   46933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0717 01:32:57.903902   46933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:33:00.097317   46933 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6: (2.906569521s)
	I0717 01:33:00.097347   46933 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0717 01:33:00.097390   46933 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.24.4: (2.906474165s)
	I0717 01:33:00.097436   46933 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0717 01:33:00.097445   46933 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0717 01:33:00.097448   46933 ssh_runner.go:235] Completed: which crictl: (2.900705197s)
	I0717 01:33:00.097493   46933 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0717 01:33:00.097519   46933 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7: (2.900749931s)
	I0717 01:33:00.097495   46933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0717 01:33:00.097567   46933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0717 01:33:00.097581   46933 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4: (2.90075545s)
	I0717 01:33:00.097615   46933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0717 01:33:00.097613   46933 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4: (2.900747037s)
	I0717 01:33:00.097650   46933 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0717 01:33:00.097653   46933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0717 01:33:00.097678   46933 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0717 01:33:00.097678   46933 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4: (2.900838449s)
	I0717 01:33:00.097707   46933 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.193774832s)
	I0717 01:33:00.097714   46933 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0717 01:33:00.097726   46933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0717 01:33:00.097799   46933 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0717 01:33:00.962199   46933 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0717 01:33:00.962196   46933 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0717 01:33:00.962241   46933 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0717 01:33:00.962242   46933 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0717 01:33:00.962288   46933 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0717 01:33:00.962299   46933 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0717 01:33:00.962383   46933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0717 01:33:00.962392   46933 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0717 01:33:00.962461   46933 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0717 01:33:01.704911   46933 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0717 01:33:01.704950   46933 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0717 01:33:01.704985   46933 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0717 01:33:01.705000   46933 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0717 01:33:01.845339   46933 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0717 01:33:01.845382   46933 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0717 01:33:01.845430   46933 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0717 01:33:02.596269   46933 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0717 01:33:02.596327   46933 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0717 01:33:02.596394   46933 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0717 01:33:03.037440   46933 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0717 01:33:03.037494   46933 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0717 01:33:03.037570   46933 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0717 01:33:05.186224   46933 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.148625992s)
	I0717 01:33:05.186258   46933 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0717 01:33:05.186291   46933 cache_images.go:123] Successfully loaded all cached images
	I0717 01:33:05.186298   46933 cache_images.go:92] duration metric: took 8.473303489s to LoadCachedImages
	I0717 01:33:05.186308   46933 kubeadm.go:934] updating node { 192.168.39.157 8443 v1.24.4 crio true true} ...
	I0717 01:33:05.186426   46933 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-055392 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.157
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-055392 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:33:05.186524   46933 ssh_runner.go:195] Run: crio config
	I0717 01:33:05.229800   46933 cni.go:84] Creating CNI manager for ""
	I0717 01:33:05.229826   46933 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:33:05.229842   46933 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:33:05.229867   46933 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.157 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-055392 NodeName:test-preload-055392 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.157"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.157 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:33:05.230040   46933 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.157
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-055392"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.157
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.157"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:33:05.230108   46933 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0717 01:33:05.240187   46933 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:33:05.240257   46933 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:33:05.249633   46933 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0717 01:33:05.265945   46933 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:33:05.281808   46933 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0717 01:33:05.298519   46933 ssh_runner.go:195] Run: grep 192.168.39.157	control-plane.minikube.internal$ /etc/hosts
	I0717 01:33:05.302454   46933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.157	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
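
The two commands above first check whether control-plane.minikube.internal already resolves in /etc/hosts and, if not, rewrite the file with any stale mapping dropped and the current one appended. A rough Go equivalent of that idempotent update follows; the file path, IP and hostname are taken from the log, while doing the edit in-process rather than via the bash one-liner is an assumption for illustration.

    package main

    import (
        "log"
        "os"
        "strings"
    )

    // ensureHostsEntry removes any existing line ending in "\thost" and appends
    // "ip\thost", which is what the grep -v + echo one-liner in the log does.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // drop the old mapping, if any
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.39.157", "control-plane.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }
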
	I0717 01:33:05.314350   46933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:33:05.441662   46933 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:33:05.458393   46933 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/test-preload-055392 for IP: 192.168.39.157
	I0717 01:33:05.458416   46933 certs.go:194] generating shared ca certs ...
	I0717 01:33:05.458431   46933 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:33:05.458608   46933 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:33:05.458649   46933 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:33:05.458659   46933 certs.go:256] generating profile certs ...
	I0717 01:33:05.458739   46933 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/test-preload-055392/client.key
	I0717 01:33:05.458798   46933 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/test-preload-055392/apiserver.key.3708c330
	I0717 01:33:05.458833   46933 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/test-preload-055392/proxy-client.key
	I0717 01:33:05.458941   46933 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:33:05.458965   46933 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:33:05.458974   46933 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:33:05.458997   46933 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:33:05.459022   46933 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:33:05.459042   46933 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:33:05.459080   46933 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:33:05.459694   46933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:33:05.499790   46933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:33:05.525707   46933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:33:05.558906   46933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:33:05.582431   46933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/test-preload-055392/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0717 01:33:05.604680   46933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/test-preload-055392/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 01:33:05.629367   46933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/test-preload-055392/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:33:05.661950   46933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/test-preload-055392/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 01:33:05.700269   46933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:33:05.723184   46933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:33:05.746205   46933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:33:05.769106   46933 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:33:05.785590   46933 ssh_runner.go:195] Run: openssl version
	I0717 01:33:05.791637   46933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:33:05.802077   46933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:33:05.806457   46933 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:33:05.806504   46933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:33:05.812260   46933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:33:05.822330   46933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:33:05.832354   46933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:33:05.836723   46933 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:33:05.836758   46933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:33:05.842286   46933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:33:05.852454   46933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:33:05.862614   46933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:33:05.867074   46933 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:33:05.867134   46933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:33:05.872557   46933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:33:05.882973   46933 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:33:05.887612   46933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:33:05.893498   46933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:33:05.899310   46933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:33:05.905269   46933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:33:05.910874   46933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:33:05.916438   46933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
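
Each `openssl x509 -checkend 86400` run above asks whether the given certificate expires within the next 24 hours. The same question can be answered directly with crypto/x509; this is a small illustrative sketch, and the certificate path in main is just one of the files checked above, not a claim about how minikube implements the check.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // expires within d, i.e. the same check `openssl x509 -checkend` performs.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return cert.NotAfter.Before(time.Now().Add(d)), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
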
	I0717 01:33:05.922143   46933 kubeadm.go:392] StartCluster: {Name:test-preload-055392 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-055392 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.157 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:33:05.922214   46933 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:33:05.922261   46933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:33:05.959639   46933 cri.go:89] found id: ""
	I0717 01:33:05.959708   46933 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:33:05.969561   46933 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:33:05.969577   46933 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:33:05.969633   46933 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:33:05.979052   46933 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:33:05.979494   46933 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-055392" does not appear in /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:33:05.979596   46933 kubeconfig.go:62] /home/jenkins/minikube-integration/19264-3908/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-055392" cluster setting kubeconfig missing "test-preload-055392" context setting]
	I0717 01:33:05.979899   46933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:33:05.980444   46933 kapi.go:59] client config for test-preload-055392: &rest.Config{Host:"https://192.168.39.157:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19264-3908/.minikube/profiles/test-preload-055392/client.crt", KeyFile:"/home/jenkins/minikube-integration/19264-3908/.minikube/profiles/test-preload-055392/client.key", CAFile:"/home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d01f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 01:33:05.981058   46933 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:33:05.991089   46933 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.157
	I0717 01:33:05.991123   46933 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:33:05.991138   46933 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:33:05.991191   46933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:33:06.032646   46933 cri.go:89] found id: ""
	I0717 01:33:06.032727   46933 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:33:06.048616   46933 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:33:06.058221   46933 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:33:06.058241   46933 kubeadm.go:157] found existing configuration files:
	
	I0717 01:33:06.058289   46933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:33:06.067051   46933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:33:06.067101   46933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:33:06.076120   46933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:33:06.085074   46933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:33:06.085123   46933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:33:06.094474   46933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:33:06.103840   46933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:33:06.103910   46933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:33:06.114066   46933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:33:06.124000   46933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:33:06.124053   46933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:33:06.134142   46933 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:33:06.144360   46933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:33:06.238530   46933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:33:06.895408   46933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:33:07.156647   46933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:33:07.231147   46933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
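
The restart path above regenerates certificates, kubeconfigs, the kubelet bootstrap, the static control-plane manifests and local etcd by running individual `kubeadm init phase` subcommands against the generated /var/tmp/minikube/kubeadm.yaml. The sketch below compresses that sequence into a local loop; it is illustrative only, since minikube runs each command over SSH with the versioned binaries directory prepended to PATH.

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
    )

    func main() {
        const cfg = "/var/tmp/minikube/kubeadm.yaml" // path used in the log above
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", cfg)
            cmd := exec.Command("kubeadm", args...)
            // The log prefixes PATH with /var/lib/minikube/binaries/v1.24.4 so the
            // matching kubeadm version is used; here kubeadm is assumed to be on PATH.
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            fmt.Println("running:", cmd.Args)
            if err := cmd.Run(); err != nil {
                log.Fatalf("kubeadm %v: %v", p, err)
            }
        }
    }
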
	I0717 01:33:07.347056   46933 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:33:07.347142   46933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:33:07.848220   46933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:33:08.347484   46933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:33:08.372057   46933 api_server.go:72] duration metric: took 1.025001877s to wait for apiserver process to appear ...
	I0717 01:33:08.372087   46933 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:33:08.372106   46933 api_server.go:253] Checking apiserver healthz at https://192.168.39.157:8443/healthz ...
	I0717 01:33:08.372513   46933 api_server.go:269] stopped: https://192.168.39.157:8443/healthz: Get "https://192.168.39.157:8443/healthz": dial tcp 192.168.39.157:8443: connect: connection refused
	I0717 01:33:08.872304   46933 api_server.go:253] Checking apiserver healthz at https://192.168.39.157:8443/healthz ...
	I0717 01:33:12.885418   46933 api_server.go:279] https://192.168.39.157:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:33:12.885449   46933 api_server.go:103] status: https://192.168.39.157:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:33:12.885464   46933 api_server.go:253] Checking apiserver healthz at https://192.168.39.157:8443/healthz ...
	I0717 01:33:12.898135   46933 api_server.go:279] https://192.168.39.157:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:33:12.898168   46933 api_server.go:103] status: https://192.168.39.157:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:33:13.372733   46933 api_server.go:253] Checking apiserver healthz at https://192.168.39.157:8443/healthz ...
	I0717 01:33:13.380502   46933 api_server.go:279] https://192.168.39.157:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:33:13.380532   46933 api_server.go:103] status: https://192.168.39.157:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:33:13.873148   46933 api_server.go:253] Checking apiserver healthz at https://192.168.39.157:8443/healthz ...
	I0717 01:33:13.881245   46933 api_server.go:279] https://192.168.39.157:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:33:13.881280   46933 api_server.go:103] status: https://192.168.39.157:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:33:14.372856   46933 api_server.go:253] Checking apiserver healthz at https://192.168.39.157:8443/healthz ...
	I0717 01:33:14.378212   46933 api_server.go:279] https://192.168.39.157:8443/healthz returned 200:
	ok
	I0717 01:33:14.384609   46933 api_server.go:141] control plane version: v1.24.4
	I0717 01:33:14.384639   46933 api_server.go:131] duration metric: took 6.012544887s to wait for apiserver health ...
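
The healthz wait above polls https://192.168.39.157:8443/healthz every 500ms, treating connection refused, 403 (the anonymous probe before RBAC bootstrap finishes) and 500 (post-start hooks still failing) as "not ready yet" until a 200/ok comes back. A bare-bones version of that wait loop is sketched below; InsecureSkipVerify is a simplification, since the client config in the log presents the cluster CA and client certificate instead.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns 200 or the deadline passes.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Simplification: skip TLS verification instead of loading the
            // cluster CA and client certs the way the kapi client config does.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("%s returned 200: %s\n", url, body)
                    return nil
                }
                fmt.Printf("%s returned %d, retrying\n", url, resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond) // same cadence as the checks above
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.157:8443/healthz", 2*time.Minute); err != nil {
            log.Fatal(err)
        }
    }
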
	I0717 01:33:14.384650   46933 cni.go:84] Creating CNI manager for ""
	I0717 01:33:14.384659   46933 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:33:14.386598   46933 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:33:14.387919   46933 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:33:14.398838   46933 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 01:33:14.419097   46933 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:33:14.429421   46933 system_pods.go:59] 8 kube-system pods found
	I0717 01:33:14.429448   46933 system_pods.go:61] "coredns-6d4b75cb6d-9l7n2" [45e63a88-4fa7-47f2-b15d-b519d7fe20bc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:33:14.429454   46933 system_pods.go:61] "coredns-6d4b75cb6d-sl9mg" [ae49babf-6cf8-45c8-ab2a-57365b4d0507] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:33:14.429460   46933 system_pods.go:61] "etcd-test-preload-055392" [13069266-220e-4d22-8e5a-06f12dab3fb2] Running
	I0717 01:33:14.429466   46933 system_pods.go:61] "kube-apiserver-test-preload-055392" [9f52d172-e8e0-4a74-9da2-e71613477e9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 01:33:14.429470   46933 system_pods.go:61] "kube-controller-manager-test-preload-055392" [d695da31-67a1-4590-9311-94bb001deee7] Running
	I0717 01:33:14.429475   46933 system_pods.go:61] "kube-proxy-zwgsj" [cb875949-29e3-4e64-9c07-2a43ec728033] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 01:33:14.429482   46933 system_pods.go:61] "kube-scheduler-test-preload-055392" [ac090b18-b59e-4cec-b239-5d85e4325abb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 01:33:14.429487   46933 system_pods.go:61] "storage-provisioner" [543ba4ac-7726-49bb-aeb6-060946fac737] Running
	I0717 01:33:14.429492   46933 system_pods.go:74] duration metric: took 10.379281ms to wait for pod list to return data ...
	I0717 01:33:14.429498   46933 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:33:14.433013   46933 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:33:14.433037   46933 node_conditions.go:123] node cpu capacity is 2
	I0717 01:33:14.433056   46933 node_conditions.go:105] duration metric: took 3.545961ms to run NodePressure ...
	I0717 01:33:14.433073   46933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:33:14.707939   46933 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 01:33:14.719614   46933 kubeadm.go:739] kubelet initialised
	I0717 01:33:14.719635   46933 kubeadm.go:740] duration metric: took 11.672063ms waiting for restarted kubelet to initialise ...
	I0717 01:33:14.719642   46933 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:33:14.726912   46933 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-9l7n2" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:14.732697   46933 pod_ready.go:97] node "test-preload-055392" hosting pod "coredns-6d4b75cb6d-9l7n2" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-055392" has status "Ready":"False"
	I0717 01:33:14.732718   46933 pod_ready.go:81] duration metric: took 5.785377ms for pod "coredns-6d4b75cb6d-9l7n2" in "kube-system" namespace to be "Ready" ...
	E0717 01:33:14.732726   46933 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-055392" hosting pod "coredns-6d4b75cb6d-9l7n2" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-055392" has status "Ready":"False"
	I0717 01:33:14.732732   46933 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-sl9mg" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:14.739422   46933 pod_ready.go:97] node "test-preload-055392" hosting pod "coredns-6d4b75cb6d-sl9mg" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-055392" has status "Ready":"False"
	I0717 01:33:14.739440   46933 pod_ready.go:81] duration metric: took 6.694619ms for pod "coredns-6d4b75cb6d-sl9mg" in "kube-system" namespace to be "Ready" ...
	E0717 01:33:14.739448   46933 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-055392" hosting pod "coredns-6d4b75cb6d-sl9mg" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-055392" has status "Ready":"False"
	I0717 01:33:14.739453   46933 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-055392" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:14.743878   46933 pod_ready.go:97] node "test-preload-055392" hosting pod "etcd-test-preload-055392" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-055392" has status "Ready":"False"
	I0717 01:33:14.743896   46933 pod_ready.go:81] duration metric: took 4.435406ms for pod "etcd-test-preload-055392" in "kube-system" namespace to be "Ready" ...
	E0717 01:33:14.743908   46933 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-055392" hosting pod "etcd-test-preload-055392" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-055392" has status "Ready":"False"
	I0717 01:33:14.743916   46933 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-055392" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:14.822896   46933 pod_ready.go:97] node "test-preload-055392" hosting pod "kube-apiserver-test-preload-055392" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-055392" has status "Ready":"False"
	I0717 01:33:14.822927   46933 pod_ready.go:81] duration metric: took 79.003146ms for pod "kube-apiserver-test-preload-055392" in "kube-system" namespace to be "Ready" ...
	E0717 01:33:14.822939   46933 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-055392" hosting pod "kube-apiserver-test-preload-055392" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-055392" has status "Ready":"False"
	I0717 01:33:14.822947   46933 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-055392" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:15.223701   46933 pod_ready.go:97] node "test-preload-055392" hosting pod "kube-controller-manager-test-preload-055392" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-055392" has status "Ready":"False"
	I0717 01:33:15.223729   46933 pod_ready.go:81] duration metric: took 400.770723ms for pod "kube-controller-manager-test-preload-055392" in "kube-system" namespace to be "Ready" ...
	E0717 01:33:15.223738   46933 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-055392" hosting pod "kube-controller-manager-test-preload-055392" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-055392" has status "Ready":"False"
	I0717 01:33:15.223744   46933 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zwgsj" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:15.623711   46933 pod_ready.go:97] node "test-preload-055392" hosting pod "kube-proxy-zwgsj" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-055392" has status "Ready":"False"
	I0717 01:33:15.623740   46933 pod_ready.go:81] duration metric: took 399.987104ms for pod "kube-proxy-zwgsj" in "kube-system" namespace to be "Ready" ...
	E0717 01:33:15.623748   46933 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-055392" hosting pod "kube-proxy-zwgsj" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-055392" has status "Ready":"False"
	I0717 01:33:15.623754   46933 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-055392" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:16.022337   46933 pod_ready.go:97] node "test-preload-055392" hosting pod "kube-scheduler-test-preload-055392" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-055392" has status "Ready":"False"
	I0717 01:33:16.022362   46933 pod_ready.go:81] duration metric: took 398.602508ms for pod "kube-scheduler-test-preload-055392" in "kube-system" namespace to be "Ready" ...
	E0717 01:33:16.022371   46933 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-055392" hosting pod "kube-scheduler-test-preload-055392" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-055392" has status "Ready":"False"
	I0717 01:33:16.022377   46933 pod_ready.go:38] duration metric: took 1.302728296s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:33:16.022395   46933 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 01:33:16.034513   46933 ops.go:34] apiserver oom_adj: -16
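
The `cat /proc/$(pgrep kube-apiserver)/oom_adj` step confirms the apiserver carries a strongly negative OOM adjustment (-16), so the kernel avoids killing it under memory pressure. Roughly the same lookup in Go, simplified to match on the process name only (the log's pgrep also matches the full command line, and modern kernels additionally expose oom_score_adj):

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // pgrep -xn kube-apiserver: newest process whose name is exactly kube-apiserver.
        out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
        if err != nil {
            log.Fatalf("kube-apiserver not running: %v", err)
        }
        pid := strings.TrimSpace(string(out))
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
    }
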
	I0717 01:33:16.034530   46933 kubeadm.go:597] duration metric: took 10.064947296s to restartPrimaryControlPlane
	I0717 01:33:16.034540   46933 kubeadm.go:394] duration metric: took 10.112399599s to StartCluster
	I0717 01:33:16.034570   46933 settings.go:142] acquiring lock: {Name:mk284e98550e98148329d8d8958a44beebf5635c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:33:16.034640   46933 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:33:16.035459   46933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:33:16.035735   46933 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.157 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:33:16.035812   46933 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 01:33:16.035875   46933 addons.go:69] Setting storage-provisioner=true in profile "test-preload-055392"
	I0717 01:33:16.035886   46933 config.go:182] Loaded profile config "test-preload-055392": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0717 01:33:16.035908   46933 addons.go:234] Setting addon storage-provisioner=true in "test-preload-055392"
	I0717 01:33:16.035906   46933 addons.go:69] Setting default-storageclass=true in profile "test-preload-055392"
	W0717 01:33:16.035920   46933 addons.go:243] addon storage-provisioner should already be in state true
	I0717 01:33:16.035951   46933 host.go:66] Checking if "test-preload-055392" exists ...
	I0717 01:33:16.035965   46933 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-055392"
	I0717 01:33:16.036237   46933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:33:16.036244   46933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:33:16.036277   46933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:16.036359   46933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:16.037520   46933 out.go:177] * Verifying Kubernetes components...
	I0717 01:33:16.039287   46933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:33:16.051034   46933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40985
	I0717 01:33:16.051106   46933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39961
	I0717 01:33:16.051484   46933 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:16.051511   46933 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:16.051936   46933 main.go:141] libmachine: Using API Version  1
	I0717 01:33:16.051953   46933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:16.052086   46933 main.go:141] libmachine: Using API Version  1
	I0717 01:33:16.052114   46933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:16.052273   46933 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:16.052405   46933 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:16.052443   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetState
	I0717 01:33:16.052939   46933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:33:16.052991   46933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:16.054900   46933 kapi.go:59] client config for test-preload-055392: &rest.Config{Host:"https://192.168.39.157:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19264-3908/.minikube/profiles/test-preload-055392/client.crt", KeyFile:"/home/jenkins/minikube-integration/19264-3908/.minikube/profiles/test-preload-055392/client.key", CAFile:"/home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d01f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 01:33:16.055129   46933 addons.go:234] Setting addon default-storageclass=true in "test-preload-055392"
	W0717 01:33:16.055144   46933 addons.go:243] addon default-storageclass should already be in state true
	I0717 01:33:16.055171   46933 host.go:66] Checking if "test-preload-055392" exists ...
	I0717 01:33:16.055415   46933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:33:16.055451   46933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:16.067954   46933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46791
	I0717 01:33:16.068377   46933 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:16.068947   46933 main.go:141] libmachine: Using API Version  1
	I0717 01:33:16.068974   46933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:16.069294   46933 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:16.069478   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetState
	I0717 01:33:16.069702   46933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33021
	I0717 01:33:16.070130   46933 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:16.070644   46933 main.go:141] libmachine: Using API Version  1
	I0717 01:33:16.070667   46933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:16.071034   46933 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:16.071112   46933 main.go:141] libmachine: (test-preload-055392) Calling .DriverName
	I0717 01:33:16.071484   46933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:33:16.071525   46933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:33:16.073190   46933 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:33:16.074685   46933 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:33:16.074705   46933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 01:33:16.074723   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHHostname
	I0717 01:33:16.077776   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:33:16.078227   46933 main.go:141] libmachine: (test-preload-055392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:c5:d0", ip: ""} in network mk-test-preload-055392: {Iface:virbr1 ExpiryTime:2024-07-17 02:32:42 +0000 UTC Type:0 Mac:52:54:00:0a:c5:d0 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-055392 Clientid:01:52:54:00:0a:c5:d0}
	I0717 01:33:16.078263   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined IP address 192.168.39.157 and MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:33:16.078383   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHPort
	I0717 01:33:16.078571   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHKeyPath
	I0717 01:33:16.078736   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHUsername
	I0717 01:33:16.078883   46933 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/test-preload-055392/id_rsa Username:docker}
	I0717 01:33:16.086883   46933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45543
	I0717 01:33:16.087283   46933 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:33:16.087741   46933 main.go:141] libmachine: Using API Version  1
	I0717 01:33:16.087757   46933 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:33:16.088104   46933 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:33:16.088260   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetState
	I0717 01:33:16.089830   46933 main.go:141] libmachine: (test-preload-055392) Calling .DriverName
	I0717 01:33:16.090042   46933 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 01:33:16.090059   46933 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 01:33:16.090088   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHHostname
	I0717 01:33:16.092637   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:33:16.093098   46933 main.go:141] libmachine: (test-preload-055392) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:c5:d0", ip: ""} in network mk-test-preload-055392: {Iface:virbr1 ExpiryTime:2024-07-17 02:32:42 +0000 UTC Type:0 Mac:52:54:00:0a:c5:d0 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-055392 Clientid:01:52:54:00:0a:c5:d0}
	I0717 01:33:16.093127   46933 main.go:141] libmachine: (test-preload-055392) DBG | domain test-preload-055392 has defined IP address 192.168.39.157 and MAC address 52:54:00:0a:c5:d0 in network mk-test-preload-055392
	I0717 01:33:16.093283   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHPort
	I0717 01:33:16.093462   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHKeyPath
	I0717 01:33:16.093637   46933 main.go:141] libmachine: (test-preload-055392) Calling .GetSSHUsername
	I0717 01:33:16.093795   46933 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/test-preload-055392/id_rsa Username:docker}
	I0717 01:33:16.214085   46933 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:33:16.233480   46933 node_ready.go:35] waiting up to 6m0s for node "test-preload-055392" to be "Ready" ...
	I0717 01:33:16.297555   46933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:33:16.300298   46933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 01:33:17.238105   46933 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:17.238128   46933 main.go:141] libmachine: (test-preload-055392) Calling .Close
	I0717 01:33:17.238170   46933 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:17.238193   46933 main.go:141] libmachine: (test-preload-055392) Calling .Close
	I0717 01:33:17.238401   46933 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:17.238418   46933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:17.238428   46933 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:17.238436   46933 main.go:141] libmachine: (test-preload-055392) Calling .Close
	I0717 01:33:17.238473   46933 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:17.238486   46933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:17.238495   46933 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:17.238500   46933 main.go:141] libmachine: (test-preload-055392) DBG | Closing plugin on server side
	I0717 01:33:17.238503   46933 main.go:141] libmachine: (test-preload-055392) Calling .Close
	I0717 01:33:17.238602   46933 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:17.238611   46933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:17.238616   46933 main.go:141] libmachine: (test-preload-055392) DBG | Closing plugin on server side
	I0717 01:33:17.238803   46933 main.go:141] libmachine: (test-preload-055392) DBG | Closing plugin on server side
	I0717 01:33:17.238814   46933 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:17.238838   46933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:17.248318   46933 main.go:141] libmachine: Making call to close driver server
	I0717 01:33:17.248340   46933 main.go:141] libmachine: (test-preload-055392) Calling .Close
	I0717 01:33:17.248589   46933 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:33:17.248612   46933 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:33:17.251150   46933 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 01:33:17.252549   46933 addons.go:510] duration metric: took 1.216745s for enable addons: enabled=[storage-provisioner default-storageclass]
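
Enabling the two addons above comes down to copying their manifests into /etc/kubernetes/addons on the node and applying them with the versioned kubectl against the node-local kubeconfig, exactly as the two Run lines show. A stripped-down sketch of the apply step (paths copied from the log; running the commands locally instead of through ssh_runner is an assumption):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    // applyAddon mirrors the logged command:
    // sudo KUBECONFIG=/var/lib/minikube/kubeconfig <kubectl> apply -f <manifest>
    func applyAddon(manifest string) error {
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.24.4/kubectl",
            "apply", "-f", manifest)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        for _, m := range []string{
            "/etc/kubernetes/addons/storage-provisioner.yaml",
            "/etc/kubernetes/addons/storageclass.yaml",
        } {
            if err := applyAddon(m); err != nil {
                log.Fatalf("apply %s: %v", m, err)
            }
        }
    }
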
	I0717 01:33:18.238062   46933 node_ready.go:53] node "test-preload-055392" has status "Ready":"False"
	I0717 01:33:20.737918   46933 node_ready.go:53] node "test-preload-055392" has status "Ready":"False"
	I0717 01:33:23.238273   46933 node_ready.go:53] node "test-preload-055392" has status "Ready":"False"
	I0717 01:33:23.737994   46933 node_ready.go:49] node "test-preload-055392" has status "Ready":"True"
	I0717 01:33:23.738023   46933 node_ready.go:38] duration metric: took 7.504506985s for node "test-preload-055392" to be "Ready" ...
	I0717 01:33:23.738032   46933 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:33:23.742709   46933 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-9l7n2" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:23.747343   46933 pod_ready.go:92] pod "coredns-6d4b75cb6d-9l7n2" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:23.747369   46933 pod_ready.go:81] duration metric: took 4.634396ms for pod "coredns-6d4b75cb6d-9l7n2" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:23.747380   46933 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-055392" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:23.751212   46933 pod_ready.go:92] pod "etcd-test-preload-055392" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:23.751233   46933 pod_ready.go:81] duration metric: took 3.845478ms for pod "etcd-test-preload-055392" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:23.751243   46933 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-055392" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:23.755598   46933 pod_ready.go:92] pod "kube-apiserver-test-preload-055392" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:23.755624   46933 pod_ready.go:81] duration metric: took 4.373367ms for pod "kube-apiserver-test-preload-055392" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:23.755635   46933 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-055392" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:24.763645   46933 pod_ready.go:92] pod "kube-controller-manager-test-preload-055392" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:24.763689   46933 pod_ready.go:81] duration metric: took 1.008045398s for pod "kube-controller-manager-test-preload-055392" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:24.763709   46933 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zwgsj" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:24.939305   46933 pod_ready.go:92] pod "kube-proxy-zwgsj" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:24.939330   46933 pod_ready.go:81] duration metric: took 175.613945ms for pod "kube-proxy-zwgsj" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:24.939338   46933 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-055392" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:25.344830   46933 pod_ready.go:92] pod "kube-scheduler-test-preload-055392" in "kube-system" namespace has status "Ready":"True"
	I0717 01:33:25.344851   46933 pod_ready.go:81] duration metric: took 405.506207ms for pod "kube-scheduler-test-preload-055392" in "kube-system" namespace to be "Ready" ...
	I0717 01:33:25.344865   46933 pod_ready.go:38] duration metric: took 1.60682352s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:33:25.344880   46933 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:33:25.344945   46933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:33:25.362688   46933 api_server.go:72] duration metric: took 9.326917552s to wait for apiserver process to appear ...
	I0717 01:33:25.362713   46933 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:33:25.362740   46933 api_server.go:253] Checking apiserver healthz at https://192.168.39.157:8443/healthz ...
	I0717 01:33:25.368507   46933 api_server.go:279] https://192.168.39.157:8443/healthz returned 200:
	ok
	I0717 01:33:25.369591   46933 api_server.go:141] control plane version: v1.24.4
	I0717 01:33:25.369614   46933 api_server.go:131] duration metric: took 6.893517ms to wait for apiserver health ...
	I0717 01:33:25.369624   46933 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:33:25.542263   46933 system_pods.go:59] 7 kube-system pods found
	I0717 01:33:25.542286   46933 system_pods.go:61] "coredns-6d4b75cb6d-9l7n2" [45e63a88-4fa7-47f2-b15d-b519d7fe20bc] Running
	I0717 01:33:25.542291   46933 system_pods.go:61] "etcd-test-preload-055392" [13069266-220e-4d22-8e5a-06f12dab3fb2] Running
	I0717 01:33:25.542296   46933 system_pods.go:61] "kube-apiserver-test-preload-055392" [9f52d172-e8e0-4a74-9da2-e71613477e9f] Running
	I0717 01:33:25.542299   46933 system_pods.go:61] "kube-controller-manager-test-preload-055392" [d695da31-67a1-4590-9311-94bb001deee7] Running
	I0717 01:33:25.542302   46933 system_pods.go:61] "kube-proxy-zwgsj" [cb875949-29e3-4e64-9c07-2a43ec728033] Running
	I0717 01:33:25.542305   46933 system_pods.go:61] "kube-scheduler-test-preload-055392" [ac090b18-b59e-4cec-b239-5d85e4325abb] Running
	I0717 01:33:25.542308   46933 system_pods.go:61] "storage-provisioner" [543ba4ac-7726-49bb-aeb6-060946fac737] Running
	I0717 01:33:25.542313   46933 system_pods.go:74] duration metric: took 172.683867ms to wait for pod list to return data ...
	I0717 01:33:25.542320   46933 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:33:25.737268   46933 default_sa.go:45] found service account: "default"
	I0717 01:33:25.737291   46933 default_sa.go:55] duration metric: took 194.966161ms for default service account to be created ...
	I0717 01:33:25.737299   46933 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:33:25.939417   46933 system_pods.go:86] 7 kube-system pods found
	I0717 01:33:25.939441   46933 system_pods.go:89] "coredns-6d4b75cb6d-9l7n2" [45e63a88-4fa7-47f2-b15d-b519d7fe20bc] Running
	I0717 01:33:25.939446   46933 system_pods.go:89] "etcd-test-preload-055392" [13069266-220e-4d22-8e5a-06f12dab3fb2] Running
	I0717 01:33:25.939450   46933 system_pods.go:89] "kube-apiserver-test-preload-055392" [9f52d172-e8e0-4a74-9da2-e71613477e9f] Running
	I0717 01:33:25.939454   46933 system_pods.go:89] "kube-controller-manager-test-preload-055392" [d695da31-67a1-4590-9311-94bb001deee7] Running
	I0717 01:33:25.939457   46933 system_pods.go:89] "kube-proxy-zwgsj" [cb875949-29e3-4e64-9c07-2a43ec728033] Running
	I0717 01:33:25.939461   46933 system_pods.go:89] "kube-scheduler-test-preload-055392" [ac090b18-b59e-4cec-b239-5d85e4325abb] Running
	I0717 01:33:25.939464   46933 system_pods.go:89] "storage-provisioner" [543ba4ac-7726-49bb-aeb6-060946fac737] Running
	I0717 01:33:25.939470   46933 system_pods.go:126] duration metric: took 202.165474ms to wait for k8s-apps to be running ...
	I0717 01:33:25.939477   46933 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:33:25.939518   46933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:33:25.955502   46933 system_svc.go:56] duration metric: took 16.015616ms WaitForService to wait for kubelet
	I0717 01:33:25.955536   46933 kubeadm.go:582] duration metric: took 9.919768469s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:33:25.955576   46933 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:33:26.139530   46933 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:33:26.139562   46933 node_conditions.go:123] node cpu capacity is 2
	I0717 01:33:26.139575   46933 node_conditions.go:105] duration metric: took 183.99043ms to run NodePressure ...
	I0717 01:33:26.139589   46933 start.go:241] waiting for startup goroutines ...
	I0717 01:33:26.139601   46933 start.go:246] waiting for cluster config update ...
	I0717 01:33:26.139617   46933 start.go:255] writing updated cluster config ...
	I0717 01:33:26.139967   46933 ssh_runner.go:195] Run: rm -f paused
	I0717 01:33:26.185841   46933 start.go:600] kubectl: 1.30.2, cluster: 1.24.4 (minor skew: 6)
	I0717 01:33:26.187867   46933 out.go:177] 
	W0717 01:33:26.189272   46933 out.go:239] ! /usr/local/bin/kubectl is version 1.30.2, which may have incompatibilities with Kubernetes 1.24.4.
	I0717 01:33:26.190669   46933 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0717 01:33:26.192013   46933 out.go:177] * Done! kubectl is now configured to use "test-preload-055392" cluster and "default" namespace by default
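	Note: the start output above reports a client/server skew (kubectl 1.30.2 against a v1.24.4 control plane). A minimal way to follow the suggestion in the log, assuming the same profile name as this run, is to invoke the cluster-matched kubectl that minikube downloads:
	
	  # hypothetical reproduction step; not part of the captured log
	  minikube -p test-preload-055392 kubectl -- get pods -A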
	
	
	==> CRI-O <==
	Jul 17 01:33:27 test-preload-055392 crio[712]: time="2024-07-17 01:33:27.029440268Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180007029416587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=88958cfd-3893-4cea-b4a8-6bbd8b8ce484 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:33:27 test-preload-055392 crio[712]: time="2024-07-17 01:33:27.030117317Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8363e07c-ad0a-4b4b-892e-04b4f8e65c2d name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:33:27 test-preload-055392 crio[712]: time="2024-07-17 01:33:27.030167778Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8363e07c-ad0a-4b4b-892e-04b4f8e65c2d name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:33:27 test-preload-055392 crio[712]: time="2024-07-17 01:33:27.030593790Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:18ce39ec39cbe36886f780916784759dc2548b080b075034b73bcdd075e53933,PodSandboxId:568b8403be0cc477ec2a0b691403555f2687d2b2195a095693eaeb473b0f98bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1721180001793081751,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9l7n2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45e63a88-4fa7-47f2-b15d-b519d7fe20bc,},Annotations:map[string]string{io.kubernetes.container.hash: 572308b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5735a8855d55fa0c0ae84bed0aa2fd64f9489471fe16a951c07a7331c235500b,PodSandboxId:81d61aa653b0f30d70556780b573952b5a174c4b8d4b2556ea1730bdf817e0f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1721179994877057661,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zwgsj,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: cb875949-29e3-4e64-9c07-2a43ec728033,},Annotations:map[string]string{io.kubernetes.container.hash: bc005b11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f9307150708fc8b72af6e3b257683bd36bc76a20883072b9a1631f65d090643,PodSandboxId:41add08a64209d88772a163dfac01b67e846dd1e34b5a1574144acea96ab23c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721179994612222992,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54
3ba4ac-7726-49bb-aeb6-060946fac737,},Annotations:map[string]string{io.kubernetes.container.hash: 9a3aafb9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d39bef602ceed15d8097a2c4aa39eb1daaba6e2dbc68fea67a9ae0fcb753e0,PodSandboxId:646dc32a078dab467e9405df8159d19ea0114dc3611db7432e1273a488f8091f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1721179988089070255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-055392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b07057ad08bc3da320aaf96943d24ec6,},Anno
tations:map[string]string{io.kubernetes.container.hash: 4193f6fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5006f06d22536ba3414743f50c308f2b4bb4c6225672c34b838fc33a49561be6,PodSandboxId:c4553d110451bcac67a982091ba48fd143a6c245dbf2e9f322616461aa62bbb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1721179988114632795,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-055392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c2fed60af575485108bd23e0ad2e7c5,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9286aa64e69345b4525101835df5b1ac65493c4e5a40e7e4acc2e5202da6f9db,PodSandboxId:91ae006d744091c80a0ecac8aac4f15e0cb325b2cf117c8007b0f5034c4bdd75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1721179988042059340,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-055392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bb5aad59923ba2c416de9fc77ce2ccd,},Annotations:map[string]str
ing{io.kubernetes.container.hash: ac6db463,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3842658ec821761a6fc9fcda40ff8486b54b1dd0d993b400e3334e79318d0628,PodSandboxId:69fbd2000fdf1d878e180d9754288e65de37d0463e502772d93c709994d8c794,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1721179987986696636,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-055392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f021cb781357ceac8e2b16502ef071,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8363e07c-ad0a-4b4b-892e-04b4f8e65c2d name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:33:27 test-preload-055392 crio[712]: time="2024-07-17 01:33:27.070351831Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3a5e71bd-f95f-4d45-9534-ab85e561ee5d name=/runtime.v1.RuntimeService/Version
	Jul 17 01:33:27 test-preload-055392 crio[712]: time="2024-07-17 01:33:27.070448249Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3a5e71bd-f95f-4d45-9534-ab85e561ee5d name=/runtime.v1.RuntimeService/Version
	Jul 17 01:33:27 test-preload-055392 crio[712]: time="2024-07-17 01:33:27.071907213Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=da66df9c-77b4-4057-b630-8dfb090323a7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:33:27 test-preload-055392 crio[712]: time="2024-07-17 01:33:27.072561780Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180007072527599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=da66df9c-77b4-4057-b630-8dfb090323a7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:33:27 test-preload-055392 crio[712]: time="2024-07-17 01:33:27.073058280Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e98ed096-9f3b-441b-9b30-6f0d929174bb name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:33:27 test-preload-055392 crio[712]: time="2024-07-17 01:33:27.073117547Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e98ed096-9f3b-441b-9b30-6f0d929174bb name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:33:27 test-preload-055392 crio[712]: time="2024-07-17 01:33:27.073517380Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:18ce39ec39cbe36886f780916784759dc2548b080b075034b73bcdd075e53933,PodSandboxId:568b8403be0cc477ec2a0b691403555f2687d2b2195a095693eaeb473b0f98bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1721180001793081751,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9l7n2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45e63a88-4fa7-47f2-b15d-b519d7fe20bc,},Annotations:map[string]string{io.kubernetes.container.hash: 572308b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5735a8855d55fa0c0ae84bed0aa2fd64f9489471fe16a951c07a7331c235500b,PodSandboxId:81d61aa653b0f30d70556780b573952b5a174c4b8d4b2556ea1730bdf817e0f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1721179994877057661,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zwgsj,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: cb875949-29e3-4e64-9c07-2a43ec728033,},Annotations:map[string]string{io.kubernetes.container.hash: bc005b11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f9307150708fc8b72af6e3b257683bd36bc76a20883072b9a1631f65d090643,PodSandboxId:41add08a64209d88772a163dfac01b67e846dd1e34b5a1574144acea96ab23c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721179994612222992,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54
3ba4ac-7726-49bb-aeb6-060946fac737,},Annotations:map[string]string{io.kubernetes.container.hash: 9a3aafb9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d39bef602ceed15d8097a2c4aa39eb1daaba6e2dbc68fea67a9ae0fcb753e0,PodSandboxId:646dc32a078dab467e9405df8159d19ea0114dc3611db7432e1273a488f8091f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1721179988089070255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-055392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b07057ad08bc3da320aaf96943d24ec6,},Anno
tations:map[string]string{io.kubernetes.container.hash: 4193f6fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5006f06d22536ba3414743f50c308f2b4bb4c6225672c34b838fc33a49561be6,PodSandboxId:c4553d110451bcac67a982091ba48fd143a6c245dbf2e9f322616461aa62bbb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1721179988114632795,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-055392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c2fed60af575485108bd23e0ad2e7c5,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9286aa64e69345b4525101835df5b1ac65493c4e5a40e7e4acc2e5202da6f9db,PodSandboxId:91ae006d744091c80a0ecac8aac4f15e0cb325b2cf117c8007b0f5034c4bdd75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1721179988042059340,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-055392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bb5aad59923ba2c416de9fc77ce2ccd,},Annotations:map[string]str
ing{io.kubernetes.container.hash: ac6db463,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3842658ec821761a6fc9fcda40ff8486b54b1dd0d993b400e3334e79318d0628,PodSandboxId:69fbd2000fdf1d878e180d9754288e65de37d0463e502772d93c709994d8c794,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1721179987986696636,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-055392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f021cb781357ceac8e2b16502ef071,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e98ed096-9f3b-441b-9b30-6f0d929174bb name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:33:27 test-preload-055392 crio[712]: time="2024-07-17 01:33:27.116871801Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ed0ef27c-6cb7-4a5b-a096-9932239f8e37 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:33:27 test-preload-055392 crio[712]: time="2024-07-17 01:33:27.116990527Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ed0ef27c-6cb7-4a5b-a096-9932239f8e37 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:33:27 test-preload-055392 crio[712]: time="2024-07-17 01:33:27.118493288Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c8f5b986-73d9-4005-a80e-ffbe9e0f5aa9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:33:27 test-preload-055392 crio[712]: time="2024-07-17 01:33:27.119052501Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180007119029962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8f5b986-73d9-4005-a80e-ffbe9e0f5aa9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:33:27 test-preload-055392 crio[712]: time="2024-07-17 01:33:27.119842898Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68376db7-7edd-4041-a439-cd2a3d46abc8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:33:27 test-preload-055392 crio[712]: time="2024-07-17 01:33:27.119927878Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68376db7-7edd-4041-a439-cd2a3d46abc8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:33:27 test-preload-055392 crio[712]: time="2024-07-17 01:33:27.120157106Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:18ce39ec39cbe36886f780916784759dc2548b080b075034b73bcdd075e53933,PodSandboxId:568b8403be0cc477ec2a0b691403555f2687d2b2195a095693eaeb473b0f98bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1721180001793081751,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9l7n2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45e63a88-4fa7-47f2-b15d-b519d7fe20bc,},Annotations:map[string]string{io.kubernetes.container.hash: 572308b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5735a8855d55fa0c0ae84bed0aa2fd64f9489471fe16a951c07a7331c235500b,PodSandboxId:81d61aa653b0f30d70556780b573952b5a174c4b8d4b2556ea1730bdf817e0f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1721179994877057661,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zwgsj,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: cb875949-29e3-4e64-9c07-2a43ec728033,},Annotations:map[string]string{io.kubernetes.container.hash: bc005b11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f9307150708fc8b72af6e3b257683bd36bc76a20883072b9a1631f65d090643,PodSandboxId:41add08a64209d88772a163dfac01b67e846dd1e34b5a1574144acea96ab23c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721179994612222992,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54
3ba4ac-7726-49bb-aeb6-060946fac737,},Annotations:map[string]string{io.kubernetes.container.hash: 9a3aafb9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d39bef602ceed15d8097a2c4aa39eb1daaba6e2dbc68fea67a9ae0fcb753e0,PodSandboxId:646dc32a078dab467e9405df8159d19ea0114dc3611db7432e1273a488f8091f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1721179988089070255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-055392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b07057ad08bc3da320aaf96943d24ec6,},Anno
tations:map[string]string{io.kubernetes.container.hash: 4193f6fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5006f06d22536ba3414743f50c308f2b4bb4c6225672c34b838fc33a49561be6,PodSandboxId:c4553d110451bcac67a982091ba48fd143a6c245dbf2e9f322616461aa62bbb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1721179988114632795,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-055392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c2fed60af575485108bd23e0ad2e7c5,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9286aa64e69345b4525101835df5b1ac65493c4e5a40e7e4acc2e5202da6f9db,PodSandboxId:91ae006d744091c80a0ecac8aac4f15e0cb325b2cf117c8007b0f5034c4bdd75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1721179988042059340,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-055392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bb5aad59923ba2c416de9fc77ce2ccd,},Annotations:map[string]str
ing{io.kubernetes.container.hash: ac6db463,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3842658ec821761a6fc9fcda40ff8486b54b1dd0d993b400e3334e79318d0628,PodSandboxId:69fbd2000fdf1d878e180d9754288e65de37d0463e502772d93c709994d8c794,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1721179987986696636,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-055392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f021cb781357ceac8e2b16502ef071,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=68376db7-7edd-4041-a439-cd2a3d46abc8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:33:27 test-preload-055392 crio[712]: time="2024-07-17 01:33:27.162660366Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e9581bec-7e00-43a2-a60e-3bbdcd89f6ac name=/runtime.v1.RuntimeService/Version
	Jul 17 01:33:27 test-preload-055392 crio[712]: time="2024-07-17 01:33:27.162828212Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e9581bec-7e00-43a2-a60e-3bbdcd89f6ac name=/runtime.v1.RuntimeService/Version
	Jul 17 01:33:27 test-preload-055392 crio[712]: time="2024-07-17 01:33:27.164395579Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=714ca81b-2d6b-460b-afea-bc577fef24c3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:33:27 test-preload-055392 crio[712]: time="2024-07-17 01:33:27.165059987Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180007165037960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=714ca81b-2d6b-460b-afea-bc577fef24c3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:33:27 test-preload-055392 crio[712]: time="2024-07-17 01:33:27.165687112Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=838f8868-351f-4181-a116-89db619cc556 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:33:27 test-preload-055392 crio[712]: time="2024-07-17 01:33:27.166330992Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=838f8868-351f-4181-a116-89db619cc556 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:33:27 test-preload-055392 crio[712]: time="2024-07-17 01:33:27.166517638Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:18ce39ec39cbe36886f780916784759dc2548b080b075034b73bcdd075e53933,PodSandboxId:568b8403be0cc477ec2a0b691403555f2687d2b2195a095693eaeb473b0f98bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1721180001793081751,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9l7n2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45e63a88-4fa7-47f2-b15d-b519d7fe20bc,},Annotations:map[string]string{io.kubernetes.container.hash: 572308b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5735a8855d55fa0c0ae84bed0aa2fd64f9489471fe16a951c07a7331c235500b,PodSandboxId:81d61aa653b0f30d70556780b573952b5a174c4b8d4b2556ea1730bdf817e0f2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1721179994877057661,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zwgsj,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: cb875949-29e3-4e64-9c07-2a43ec728033,},Annotations:map[string]string{io.kubernetes.container.hash: bc005b11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f9307150708fc8b72af6e3b257683bd36bc76a20883072b9a1631f65d090643,PodSandboxId:41add08a64209d88772a163dfac01b67e846dd1e34b5a1574144acea96ab23c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721179994612222992,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54
3ba4ac-7726-49bb-aeb6-060946fac737,},Annotations:map[string]string{io.kubernetes.container.hash: 9a3aafb9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9d39bef602ceed15d8097a2c4aa39eb1daaba6e2dbc68fea67a9ae0fcb753e0,PodSandboxId:646dc32a078dab467e9405df8159d19ea0114dc3611db7432e1273a488f8091f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1721179988089070255,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-055392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b07057ad08bc3da320aaf96943d24ec6,},Anno
tations:map[string]string{io.kubernetes.container.hash: 4193f6fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5006f06d22536ba3414743f50c308f2b4bb4c6225672c34b838fc33a49561be6,PodSandboxId:c4553d110451bcac67a982091ba48fd143a6c245dbf2e9f322616461aa62bbb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1721179988114632795,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-055392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c2fed60af575485108bd23e0ad2e7c5,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9286aa64e69345b4525101835df5b1ac65493c4e5a40e7e4acc2e5202da6f9db,PodSandboxId:91ae006d744091c80a0ecac8aac4f15e0cb325b2cf117c8007b0f5034c4bdd75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1721179988042059340,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-055392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bb5aad59923ba2c416de9fc77ce2ccd,},Annotations:map[string]str
ing{io.kubernetes.container.hash: ac6db463,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3842658ec821761a6fc9fcda40ff8486b54b1dd0d993b400e3334e79318d0628,PodSandboxId:69fbd2000fdf1d878e180d9754288e65de37d0463e502772d93c709994d8c794,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1721179987986696636,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-055392,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9f021cb781357ceac8e2b16502ef071,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=838f8868-351f-4181-a116-89db619cc556 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	18ce39ec39cbe       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   5 seconds ago       Running             coredns                   1                   568b8403be0cc       coredns-6d4b75cb6d-9l7n2
	5735a8855d55f       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   12 seconds ago      Running             kube-proxy                1                   81d61aa653b0f       kube-proxy-zwgsj
	0f9307150708f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Running             storage-provisioner       1                   41add08a64209       storage-provisioner
	5006f06d22536       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   19 seconds ago      Running             kube-scheduler            1                   c4553d110451b       kube-scheduler-test-preload-055392
	f9d39bef602ce       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   19 seconds ago      Running             etcd                      1                   646dc32a078da       etcd-test-preload-055392
	9286aa64e6934       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   19 seconds ago      Running             kube-apiserver            1                   91ae006d74409       kube-apiserver-test-preload-055392
	3842658ec8217       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   19 seconds ago      Running             kube-controller-manager   1                   69fbd2000fdf1       kube-controller-manager-test-preload-055392
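	The container status table above reflects the CRI state on the node at collection time. A rough sketch of querying the same listing by hand, assuming crictl is available on the node via minikube ssh:
	
	  # assumption: crictl on the node; lists all containers, matching the columns above
	  minikube -p test-preload-055392 ssh -- sudo crictl ps -a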
	
	
	==> coredns [18ce39ec39cbe36886f780916784759dc2548b080b075034b73bcdd075e53933] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:37083 - 23707 "HINFO IN 1115286767104487630.2897917540629318145. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.007490138s
	
	
	==> describe nodes <==
	Name:               test-preload-055392
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-055392
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=test-preload-055392
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T01_31_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:31:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-055392
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:33:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:33:23 +0000   Wed, 17 Jul 2024 01:31:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:33:23 +0000   Wed, 17 Jul 2024 01:31:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:33:23 +0000   Wed, 17 Jul 2024 01:31:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:33:23 +0000   Wed, 17 Jul 2024 01:33:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.157
	  Hostname:    test-preload-055392
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0feadf00c1e44220b7c54387b0cf5bbf
	  System UUID:                0feadf00-c1e4-4220-b7c5-4387b0cf5bbf
	  Boot ID:                    0f4fa0b3-4d6c-435e-a3ba-9e4aea1aa7df
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-9l7n2                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     90s
	  kube-system                 etcd-test-preload-055392                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         103s
	  kube-system                 kube-apiserver-test-preload-055392             250m (12%)    0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-controller-manager-test-preload-055392    200m (10%)    0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-proxy-zwgsj                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-scheduler-test-preload-055392             100m (5%)     0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12s                kube-proxy       
	  Normal  Starting                 88s                kube-proxy       
	  Normal  Starting                 103s               kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  103s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  103s               kubelet          Node test-preload-055392 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s               kubelet          Node test-preload-055392 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s               kubelet          Node test-preload-055392 status is now: NodeHasSufficientPID
	  Normal  NodeReady                92s                kubelet          Node test-preload-055392 status is now: NodeReady
	  Normal  RegisteredNode           91s                node-controller  Node test-preload-055392 event: Registered Node test-preload-055392 in Controller
	  Normal  Starting                 20s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x8 over 20s)  kubelet          Node test-preload-055392 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 20s)  kubelet          Node test-preload-055392 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 20s)  kubelet          Node test-preload-055392 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                 node-controller  Node test-preload-055392 event: Registered Node test-preload-055392 in Controller
	
	
	==> dmesg <==
	[Jul17 01:32] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050013] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039151] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.490015] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.286460] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.605959] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.221541] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.065797] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056836] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.160988] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.131931] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.286895] systemd-fstab-generator[695]: Ignoring "noauto" option for root device
	[Jul17 01:33] systemd-fstab-generator[974]: Ignoring "noauto" option for root device
	[  +0.060092] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.648319] systemd-fstab-generator[1102]: Ignoring "noauto" option for root device
	[  +5.592213] kauditd_printk_skb: 105 callbacks suppressed
	[  +3.438501] systemd-fstab-generator[1736]: Ignoring "noauto" option for root device
	[  +5.501251] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [f9d39bef602ceed15d8097a2c4aa39eb1daaba6e2dbc68fea67a9ae0fcb753e0] <==
	{"level":"info","ts":"2024-07-17T01:33:08.608Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"678d6d65e7bf3019","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-17T01:33:08.608Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-17T01:33:08.611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678d6d65e7bf3019 switched to configuration voters=(7461740442069970969)"}
	{"level":"info","ts":"2024-07-17T01:33:08.612Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"56d140a2e4073e49","local-member-id":"678d6d65e7bf3019","added-peer-id":"678d6d65e7bf3019","added-peer-peer-urls":["https://192.168.39.157:2380"]}
	{"level":"info","ts":"2024-07-17T01:33:08.613Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"56d140a2e4073e49","local-member-id":"678d6d65e7bf3019","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:33:08.614Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:33:08.633Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T01:33:08.636Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.157:2380"}
	{"level":"info","ts":"2024-07-17T01:33:08.636Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.157:2380"}
	{"level":"info","ts":"2024-07-17T01:33:08.637Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"678d6d65e7bf3019","initial-advertise-peer-urls":["https://192.168.39.157:2380"],"listen-peer-urls":["https://192.168.39.157:2380"],"advertise-client-urls":["https://192.168.39.157:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.157:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T01:33:08.637Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T01:33:10.444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678d6d65e7bf3019 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T01:33:10.444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678d6d65e7bf3019 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T01:33:10.444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678d6d65e7bf3019 received MsgPreVoteResp from 678d6d65e7bf3019 at term 2"}
	{"level":"info","ts":"2024-07-17T01:33:10.444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678d6d65e7bf3019 became candidate at term 3"}
	{"level":"info","ts":"2024-07-17T01:33:10.444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678d6d65e7bf3019 received MsgVoteResp from 678d6d65e7bf3019 at term 3"}
	{"level":"info","ts":"2024-07-17T01:33:10.444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678d6d65e7bf3019 became leader at term 3"}
	{"level":"info","ts":"2024-07-17T01:33:10.444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 678d6d65e7bf3019 elected leader 678d6d65e7bf3019 at term 3"}
	{"level":"info","ts":"2024-07-17T01:33:10.446Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"678d6d65e7bf3019","local-member-attributes":"{Name:test-preload-055392 ClientURLs:[https://192.168.39.157:2379]}","request-path":"/0/members/678d6d65e7bf3019/attributes","cluster-id":"56d140a2e4073e49","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T01:33:10.446Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:33:10.448Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T01:33:10.448Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T01:33:10.448Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:33:10.449Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T01:33:10.449Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.157:2379"}
	
	
	==> kernel <==
	 01:33:27 up 0 min,  0 users,  load average: 0.61, 0.17, 0.06
	Linux test-preload-055392 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9286aa64e69345b4525101835df5b1ac65493c4e5a40e7e4acc2e5202da6f9db] <==
	I0717 01:33:12.855348       1 controller.go:85] Starting OpenAPI V3 controller
	I0717 01:33:12.855395       1 naming_controller.go:291] Starting NamingConditionController
	I0717 01:33:12.855893       1 establishing_controller.go:76] Starting EstablishingController
	I0717 01:33:12.855955       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0717 01:33:12.855993       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0717 01:33:12.856028       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0717 01:33:12.918372       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0717 01:33:12.919071       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 01:33:12.933788       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 01:33:12.934478       1 cache.go:39] Caches are synced for autoregister controller
	I0717 01:33:12.953474       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0717 01:33:12.954568       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0717 01:33:12.962316       1 apf_controller.go:322] Running API Priority and Fairness config worker
	E0717 01:33:12.966102       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0717 01:33:13.015381       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 01:33:13.506348       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0717 01:33:13.823925       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 01:33:14.579136       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0717 01:33:14.614487       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0717 01:33:14.652404       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0717 01:33:14.671873       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 01:33:14.683766       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 01:33:15.185790       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0717 01:33:25.368863       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 01:33:25.548110       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [3842658ec821761a6fc9fcda40ff8486b54b1dd0d993b400e3334e79318d0628] <==
	I0717 01:33:25.370149       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0717 01:33:25.375153       1 shared_informer.go:262] Caches are synced for attach detach
	I0717 01:33:25.377419       1 shared_informer.go:262] Caches are synced for PVC protection
	I0717 01:33:25.380265       1 shared_informer.go:262] Caches are synced for deployment
	I0717 01:33:25.383073       1 shared_informer.go:262] Caches are synced for cronjob
	I0717 01:33:25.395617       1 shared_informer.go:262] Caches are synced for GC
	I0717 01:33:25.399807       1 shared_informer.go:262] Caches are synced for node
	I0717 01:33:25.399848       1 range_allocator.go:173] Starting range CIDR allocator
	I0717 01:33:25.399853       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0717 01:33:25.399860       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0717 01:33:25.401771       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0717 01:33:25.405429       1 shared_informer.go:262] Caches are synced for daemon sets
	I0717 01:33:25.408624       1 shared_informer.go:262] Caches are synced for ephemeral
	I0717 01:33:25.425845       1 shared_informer.go:262] Caches are synced for persistent volume
	I0717 01:33:25.427343       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0717 01:33:25.538370       1 shared_informer.go:262] Caches are synced for endpoint
	I0717 01:33:25.540785       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0717 01:33:25.569051       1 shared_informer.go:262] Caches are synced for disruption
	I0717 01:33:25.569337       1 disruption.go:371] Sending events to api server.
	I0717 01:33:25.571375       1 shared_informer.go:262] Caches are synced for stateful set
	I0717 01:33:25.593946       1 shared_informer.go:262] Caches are synced for resource quota
	I0717 01:33:25.625016       1 shared_informer.go:262] Caches are synced for resource quota
	I0717 01:33:26.063553       1 shared_informer.go:262] Caches are synced for garbage collector
	I0717 01:33:26.065681       1 shared_informer.go:262] Caches are synced for garbage collector
	I0717 01:33:26.065732       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [5735a8855d55fa0c0ae84bed0aa2fd64f9489471fe16a951c07a7331c235500b] <==
	I0717 01:33:15.102557       1 node.go:163] Successfully retrieved node IP: 192.168.39.157
	I0717 01:33:15.102667       1 server_others.go:138] "Detected node IP" address="192.168.39.157"
	I0717 01:33:15.102746       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0717 01:33:15.174476       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0717 01:33:15.174514       1 server_others.go:206] "Using iptables Proxier"
	I0717 01:33:15.174956       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0717 01:33:15.175630       1 server.go:661] "Version info" version="v1.24.4"
	I0717 01:33:15.175692       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:33:15.177321       1 config.go:317] "Starting service config controller"
	I0717 01:33:15.177616       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0717 01:33:15.177702       1 config.go:226] "Starting endpoint slice config controller"
	I0717 01:33:15.177723       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0717 01:33:15.181807       1 config.go:444] "Starting node config controller"
	I0717 01:33:15.181872       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0717 01:33:15.278480       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0717 01:33:15.278534       1 shared_informer.go:262] Caches are synced for service config
	I0717 01:33:15.282728       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [5006f06d22536ba3414743f50c308f2b4bb4c6225672c34b838fc33a49561be6] <==
	I0717 01:33:09.473146       1 serving.go:348] Generated self-signed cert in-memory
	W0717 01:33:12.891338       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 01:33:12.891688       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 01:33:12.891784       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 01:33:12.891808       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 01:33:12.967123       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0717 01:33:12.967160       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:33:12.970212       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0717 01:33:12.970437       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 01:33:12.970473       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 01:33:12.970507       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 01:33:13.070624       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 01:33:13 test-preload-055392 kubelet[1109]: I0717 01:33:13.332611    1109 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cb875949-29e3-4e64-9c07-2a43ec728033-kube-proxy\") pod \"kube-proxy-zwgsj\" (UID: \"cb875949-29e3-4e64-9c07-2a43ec728033\") " pod="kube-system/kube-proxy-zwgsj"
	Jul 17 01:33:13 test-preload-055392 kubelet[1109]: I0717 01:33:13.332709    1109 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrcm2\" (UniqueName: \"kubernetes.io/projected/cb875949-29e3-4e64-9c07-2a43ec728033-kube-api-access-wrcm2\") pod \"kube-proxy-zwgsj\" (UID: \"cb875949-29e3-4e64-9c07-2a43ec728033\") " pod="kube-system/kube-proxy-zwgsj"
	Jul 17 01:33:13 test-preload-055392 kubelet[1109]: I0717 01:33:13.332809    1109 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/45e63a88-4fa7-47f2-b15d-b519d7fe20bc-config-volume\") pod \"coredns-6d4b75cb6d-9l7n2\" (UID: \"45e63a88-4fa7-47f2-b15d-b519d7fe20bc\") " pod="kube-system/coredns-6d4b75cb6d-9l7n2"
	Jul 17 01:33:13 test-preload-055392 kubelet[1109]: I0717 01:33:13.332911    1109 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb875949-29e3-4e64-9c07-2a43ec728033-xtables-lock\") pod \"kube-proxy-zwgsj\" (UID: \"cb875949-29e3-4e64-9c07-2a43ec728033\") " pod="kube-system/kube-proxy-zwgsj"
	Jul 17 01:33:13 test-preload-055392 kubelet[1109]: I0717 01:33:13.333013    1109 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb875949-29e3-4e64-9c07-2a43ec728033-lib-modules\") pod \"kube-proxy-zwgsj\" (UID: \"cb875949-29e3-4e64-9c07-2a43ec728033\") " pod="kube-system/kube-proxy-zwgsj"
	Jul 17 01:33:13 test-preload-055392 kubelet[1109]: I0717 01:33:13.333140    1109 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/543ba4ac-7726-49bb-aeb6-060946fac737-tmp\") pod \"storage-provisioner\" (UID: \"543ba4ac-7726-49bb-aeb6-060946fac737\") " pod="kube-system/storage-provisioner"
	Jul 17 01:33:13 test-preload-055392 kubelet[1109]: I0717 01:33:13.333296    1109 reconciler.go:159] "Reconciler: start to sync state"
	Jul 17 01:33:13 test-preload-055392 kubelet[1109]: I0717 01:33:13.799598    1109 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bczhx\" (UniqueName: \"kubernetes.io/projected/ae49babf-6cf8-45c8-ab2a-57365b4d0507-kube-api-access-bczhx\") pod \"ae49babf-6cf8-45c8-ab2a-57365b4d0507\" (UID: \"ae49babf-6cf8-45c8-ab2a-57365b4d0507\") "
	Jul 17 01:33:13 test-preload-055392 kubelet[1109]: I0717 01:33:13.799995    1109 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae49babf-6cf8-45c8-ab2a-57365b4d0507-config-volume\") pod \"ae49babf-6cf8-45c8-ab2a-57365b4d0507\" (UID: \"ae49babf-6cf8-45c8-ab2a-57365b4d0507\") "
	Jul 17 01:33:13 test-preload-055392 kubelet[1109]: W0717 01:33:13.801534    1109 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/ae49babf-6cf8-45c8-ab2a-57365b4d0507/volumes/kubernetes.io~projected/kube-api-access-bczhx: clearQuota called, but quotas disabled
	Jul 17 01:33:13 test-preload-055392 kubelet[1109]: W0717 01:33:13.801956    1109 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/ae49babf-6cf8-45c8-ab2a-57365b4d0507/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Jul 17 01:33:13 test-preload-055392 kubelet[1109]: I0717 01:33:13.802168    1109 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae49babf-6cf8-45c8-ab2a-57365b4d0507-kube-api-access-bczhx" (OuterVolumeSpecName: "kube-api-access-bczhx") pod "ae49babf-6cf8-45c8-ab2a-57365b4d0507" (UID: "ae49babf-6cf8-45c8-ab2a-57365b4d0507"). InnerVolumeSpecName "kube-api-access-bczhx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 01:33:13 test-preload-055392 kubelet[1109]: E0717 01:33:13.802807    1109 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 17 01:33:13 test-preload-055392 kubelet[1109]: E0717 01:33:13.802879    1109 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/45e63a88-4fa7-47f2-b15d-b519d7fe20bc-config-volume podName:45e63a88-4fa7-47f2-b15d-b519d7fe20bc nodeName:}" failed. No retries permitted until 2024-07-17 01:33:14.302848694 +0000 UTC m=+7.154286448 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/45e63a88-4fa7-47f2-b15d-b519d7fe20bc-config-volume") pod "coredns-6d4b75cb6d-9l7n2" (UID: "45e63a88-4fa7-47f2-b15d-b519d7fe20bc") : object "kube-system"/"coredns" not registered
	Jul 17 01:33:13 test-preload-055392 kubelet[1109]: I0717 01:33:13.802977    1109 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae49babf-6cf8-45c8-ab2a-57365b4d0507-config-volume" (OuterVolumeSpecName: "config-volume") pod "ae49babf-6cf8-45c8-ab2a-57365b4d0507" (UID: "ae49babf-6cf8-45c8-ab2a-57365b4d0507"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Jul 17 01:33:13 test-preload-055392 kubelet[1109]: I0717 01:33:13.901282    1109 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae49babf-6cf8-45c8-ab2a-57365b4d0507-config-volume\") on node \"test-preload-055392\" DevicePath \"\""
	Jul 17 01:33:13 test-preload-055392 kubelet[1109]: I0717 01:33:13.901327    1109 reconciler.go:384] "Volume detached for volume \"kube-api-access-bczhx\" (UniqueName: \"kubernetes.io/projected/ae49babf-6cf8-45c8-ab2a-57365b4d0507-kube-api-access-bczhx\") on node \"test-preload-055392\" DevicePath \"\""
	Jul 17 01:33:14 test-preload-055392 kubelet[1109]: E0717 01:33:14.304119    1109 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 17 01:33:14 test-preload-055392 kubelet[1109]: E0717 01:33:14.304192    1109 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/45e63a88-4fa7-47f2-b15d-b519d7fe20bc-config-volume podName:45e63a88-4fa7-47f2-b15d-b519d7fe20bc nodeName:}" failed. No retries permitted until 2024-07-17 01:33:15.304176695 +0000 UTC m=+8.155614437 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/45e63a88-4fa7-47f2-b15d-b519d7fe20bc-config-volume") pod "coredns-6d4b75cb6d-9l7n2" (UID: "45e63a88-4fa7-47f2-b15d-b519d7fe20bc") : object "kube-system"/"coredns" not registered
	Jul 17 01:33:15 test-preload-055392 kubelet[1109]: E0717 01:33:15.311988    1109 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 17 01:33:15 test-preload-055392 kubelet[1109]: E0717 01:33:15.312073    1109 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/45e63a88-4fa7-47f2-b15d-b519d7fe20bc-config-volume podName:45e63a88-4fa7-47f2-b15d-b519d7fe20bc nodeName:}" failed. No retries permitted until 2024-07-17 01:33:17.312057093 +0000 UTC m=+10.163494837 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/45e63a88-4fa7-47f2-b15d-b519d7fe20bc-config-volume") pod "coredns-6d4b75cb6d-9l7n2" (UID: "45e63a88-4fa7-47f2-b15d-b519d7fe20bc") : object "kube-system"/"coredns" not registered
	Jul 17 01:33:15 test-preload-055392 kubelet[1109]: E0717 01:33:15.382574    1109 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-9l7n2" podUID=45e63a88-4fa7-47f2-b15d-b519d7fe20bc
	Jul 17 01:33:17 test-preload-055392 kubelet[1109]: E0717 01:33:17.325689    1109 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 17 01:33:17 test-preload-055392 kubelet[1109]: E0717 01:33:17.328447    1109 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/45e63a88-4fa7-47f2-b15d-b519d7fe20bc-config-volume podName:45e63a88-4fa7-47f2-b15d-b519d7fe20bc nodeName:}" failed. No retries permitted until 2024-07-17 01:33:21.328421011 +0000 UTC m=+14.179858770 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/45e63a88-4fa7-47f2-b15d-b519d7fe20bc-config-volume") pod "coredns-6d4b75cb6d-9l7n2" (UID: "45e63a88-4fa7-47f2-b15d-b519d7fe20bc") : object "kube-system"/"coredns" not registered
	Jul 17 01:33:17 test-preload-055392 kubelet[1109]: I0717 01:33:17.389449    1109 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=ae49babf-6cf8-45c8-ab2a-57365b4d0507 path="/var/lib/kubelet/pods/ae49babf-6cf8-45c8-ab2a-57365b4d0507/volumes"
	
	
	==> storage-provisioner [0f9307150708fc8b72af6e3b257683bd36bc76a20883072b9a1631f65d090643] <==
	I0717 01:33:14.736689       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-055392 -n test-preload-055392
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-055392 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-055392" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-055392
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-055392: (1.086937569s)
--- FAIL: TestPreload (298.92s)

                                                
                                    
x
+
TestKubernetesUpgrade (350.1s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-572332 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-572332 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m34.899717687s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-572332] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19264
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-572332" primary control-plane node in "kubernetes-upgrade-572332" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 01:38:10.518851   53310 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:38:10.519081   53310 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:38:10.519090   53310 out.go:304] Setting ErrFile to fd 2...
	I0717 01:38:10.519094   53310 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:38:10.519275   53310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 01:38:10.519870   53310 out.go:298] Setting JSON to false
	I0717 01:38:10.520730   53310 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4832,"bootTime":1721175458,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:38:10.520780   53310 start.go:139] virtualization: kvm guest
	I0717 01:38:10.523089   53310 out.go:177] * [kubernetes-upgrade-572332] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:38:10.524488   53310 notify.go:220] Checking for updates...
	I0717 01:38:10.524497   53310 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 01:38:10.525818   53310 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:38:10.527121   53310 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:38:10.528294   53310 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 01:38:10.529501   53310 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:38:10.530745   53310 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:38:10.532457   53310 config.go:182] Loaded profile config "NoKubernetes-130517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0717 01:38:10.532543   53310 config.go:182] Loaded profile config "cert-expiration-733994": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:38:10.532664   53310 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:38:10.569228   53310 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 01:38:10.570671   53310 start.go:297] selected driver: kvm2
	I0717 01:38:10.570685   53310 start.go:901] validating driver "kvm2" against <nil>
	I0717 01:38:10.570696   53310 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:38:10.571369   53310 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:38:10.571443   53310 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19264-3908/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:38:10.587492   53310 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:38:10.587530   53310 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 01:38:10.587714   53310 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 01:38:10.587767   53310 cni.go:84] Creating CNI manager for ""
	I0717 01:38:10.587779   53310 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:38:10.587793   53310 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 01:38:10.587871   53310 start.go:340] cluster config:
	{Name:kubernetes-upgrade-572332 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-572332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:38:10.588009   53310 iso.go:125] acquiring lock: {Name:mk1e382ea32906eee794c9b90164432d2562b456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:38:10.590980   53310 out.go:177] * Starting "kubernetes-upgrade-572332" primary control-plane node in "kubernetes-upgrade-572332" cluster
	I0717 01:38:10.592193   53310 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 01:38:10.592232   53310 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 01:38:10.592241   53310 cache.go:56] Caching tarball of preloaded images
	I0717 01:38:10.592318   53310 preload.go:172] Found /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 01:38:10.592329   53310 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0717 01:38:10.592432   53310 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/config.json ...
	I0717 01:38:10.592457   53310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/config.json: {Name:mkf9571f48a3465d0baba9a9b82f6e82c87b18b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:38:10.592586   53310 start.go:360] acquireMachinesLock for kubernetes-upgrade-572332: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:38:15.307561   53310 start.go:364] duration metric: took 4.714949421s to acquireMachinesLock for "kubernetes-upgrade-572332"
	I0717 01:38:15.307648   53310 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-572332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-572332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:38:15.307739   53310 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 01:38:15.309975   53310 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 01:38:15.310147   53310 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:38:15.310193   53310 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:38:15.327272   53310 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45129
	I0717 01:38:15.327680   53310 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:38:15.328382   53310 main.go:141] libmachine: Using API Version  1
	I0717 01:38:15.328403   53310 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:38:15.328760   53310 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:38:15.328958   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetMachineName
	I0717 01:38:15.329130   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .DriverName
	I0717 01:38:15.329304   53310 start.go:159] libmachine.API.Create for "kubernetes-upgrade-572332" (driver="kvm2")
	I0717 01:38:15.329333   53310 client.go:168] LocalClient.Create starting
	I0717 01:38:15.329364   53310 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem
	I0717 01:38:15.329424   53310 main.go:141] libmachine: Decoding PEM data...
	I0717 01:38:15.329448   53310 main.go:141] libmachine: Parsing certificate...
	I0717 01:38:15.329519   53310 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem
	I0717 01:38:15.329547   53310 main.go:141] libmachine: Decoding PEM data...
	I0717 01:38:15.329564   53310 main.go:141] libmachine: Parsing certificate...
	I0717 01:38:15.329588   53310 main.go:141] libmachine: Running pre-create checks...
	I0717 01:38:15.329609   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .PreCreateCheck
	I0717 01:38:15.330017   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetConfigRaw
	I0717 01:38:15.330446   53310 main.go:141] libmachine: Creating machine...
	I0717 01:38:15.330466   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .Create
	I0717 01:38:15.330605   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Creating KVM machine...
	I0717 01:38:15.331757   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found existing default KVM network
	I0717 01:38:15.332730   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:38:15.332583   53376 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:cd:d9:cd} reservation:<nil>}
	I0717 01:38:15.334804   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:38:15.334728   53376 network.go:209] skipping subnet 192.168.50.0/24 that is reserved: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0717 01:38:15.335611   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:38:15.335460   53376 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:e2:2f:76} reservation:<nil>}
	I0717 01:38:15.336379   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:38:15.336299   53376 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001a79b0}
	I0717 01:38:15.336423   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | created network xml: 
	I0717 01:38:15.336433   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | <network>
	I0717 01:38:15.336444   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG |   <name>mk-kubernetes-upgrade-572332</name>
	I0717 01:38:15.336451   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG |   <dns enable='no'/>
	I0717 01:38:15.336460   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG |   
	I0717 01:38:15.336469   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0717 01:38:15.336478   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG |     <dhcp>
	I0717 01:38:15.336487   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0717 01:38:15.336493   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG |     </dhcp>
	I0717 01:38:15.336498   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG |   </ip>
	I0717 01:38:15.336503   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG |   
	I0717 01:38:15.336510   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | </network>
	I0717 01:38:15.336519   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | 
	I0717 01:38:15.342016   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | trying to create private KVM network mk-kubernetes-upgrade-572332 192.168.72.0/24...
	I0717 01:38:15.412187   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | private KVM network mk-kubernetes-upgrade-572332 192.168.72.0/24 created
	I0717 01:38:15.412222   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Setting up store path in /home/jenkins/minikube-integration/19264-3908/.minikube/machines/kubernetes-upgrade-572332 ...
	I0717 01:38:15.412241   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:38:15.412116   53376 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 01:38:15.412253   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Building disk image from file:///home/jenkins/minikube-integration/19264-3908/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 01:38:15.412283   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Downloading /home/jenkins/minikube-integration/19264-3908/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19264-3908/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 01:38:15.633367   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:38:15.633245   53376 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/kubernetes-upgrade-572332/id_rsa...
	I0717 01:38:15.807101   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:38:15.806968   53376 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/kubernetes-upgrade-572332/kubernetes-upgrade-572332.rawdisk...
	I0717 01:38:15.807124   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | Writing magic tar header
	I0717 01:38:15.807140   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | Writing SSH key tar header
	I0717 01:38:15.807153   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:38:15.807085   53376 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19264-3908/.minikube/machines/kubernetes-upgrade-572332 ...
	I0717 01:38:15.807203   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/kubernetes-upgrade-572332
	I0717 01:38:15.807220   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube/machines/kubernetes-upgrade-572332 (perms=drwx------)
	I0717 01:38:15.807231   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube/machines
	I0717 01:38:15.807285   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 01:38:15.807312   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908
	I0717 01:38:15.807329   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube/machines (perms=drwxr-xr-x)
	I0717 01:38:15.807345   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube (perms=drwxr-xr-x)
	I0717 01:38:15.807360   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908 (perms=drwxrwxr-x)
	I0717 01:38:15.807377   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 01:38:15.807389   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 01:38:15.807403   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 01:38:15.807415   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Creating domain...
	I0717 01:38:15.807430   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | Checking permissions on dir: /home/jenkins
	I0717 01:38:15.807442   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | Checking permissions on dir: /home
	I0717 01:38:15.807457   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | Skipping /home - not owner
	I0717 01:38:15.808788   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) define libvirt domain using xml: 
	I0717 01:38:15.808808   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) <domain type='kvm'>
	I0717 01:38:15.808819   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)   <name>kubernetes-upgrade-572332</name>
	I0717 01:38:15.808828   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)   <memory unit='MiB'>2200</memory>
	I0717 01:38:15.808836   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)   <vcpu>2</vcpu>
	I0717 01:38:15.808844   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)   <features>
	I0717 01:38:15.808861   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)     <acpi/>
	I0717 01:38:15.808873   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)     <apic/>
	I0717 01:38:15.808882   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)     <pae/>
	I0717 01:38:15.808891   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)     
	I0717 01:38:15.808900   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)   </features>
	I0717 01:38:15.808911   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)   <cpu mode='host-passthrough'>
	I0717 01:38:15.808923   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)   
	I0717 01:38:15.808931   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)   </cpu>
	I0717 01:38:15.808937   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)   <os>
	I0717 01:38:15.808946   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)     <type>hvm</type>
	I0717 01:38:15.808955   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)     <boot dev='cdrom'/>
	I0717 01:38:15.808966   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)     <boot dev='hd'/>
	I0717 01:38:15.808978   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)     <bootmenu enable='no'/>
	I0717 01:38:15.808988   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)   </os>
	I0717 01:38:15.808996   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)   <devices>
	I0717 01:38:15.809014   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)     <disk type='file' device='cdrom'>
	I0717 01:38:15.809026   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)       <source file='/home/jenkins/minikube-integration/19264-3908/.minikube/machines/kubernetes-upgrade-572332/boot2docker.iso'/>
	I0717 01:38:15.809037   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)       <target dev='hdc' bus='scsi'/>
	I0717 01:38:15.809049   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)       <readonly/>
	I0717 01:38:15.809057   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)     </disk>
	I0717 01:38:15.809069   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)     <disk type='file' device='disk'>
	I0717 01:38:15.809082   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 01:38:15.809098   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)       <source file='/home/jenkins/minikube-integration/19264-3908/.minikube/machines/kubernetes-upgrade-572332/kubernetes-upgrade-572332.rawdisk'/>
	I0717 01:38:15.809107   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)       <target dev='hda' bus='virtio'/>
	I0717 01:38:15.809113   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)     </disk>
	I0717 01:38:15.809124   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)     <interface type='network'>
	I0717 01:38:15.809137   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)       <source network='mk-kubernetes-upgrade-572332'/>
	I0717 01:38:15.809148   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)       <model type='virtio'/>
	I0717 01:38:15.809160   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)     </interface>
	I0717 01:38:15.809170   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)     <interface type='network'>
	I0717 01:38:15.809182   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)       <source network='default'/>
	I0717 01:38:15.809191   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)       <model type='virtio'/>
	I0717 01:38:15.809196   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)     </interface>
	I0717 01:38:15.809206   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)     <serial type='pty'>
	I0717 01:38:15.809228   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)       <target port='0'/>
	I0717 01:38:15.809245   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)     </serial>
	I0717 01:38:15.809257   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)     <console type='pty'>
	I0717 01:38:15.809268   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)       <target type='serial' port='0'/>
	I0717 01:38:15.809276   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)     </console>
	I0717 01:38:15.809281   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)     <rng model='virtio'>
	I0717 01:38:15.809294   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)       <backend model='random'>/dev/random</backend>
	I0717 01:38:15.809305   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)     </rng>
	I0717 01:38:15.809316   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)     
	I0717 01:38:15.809324   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)     
	I0717 01:38:15.809335   53310 main.go:141] libmachine: (kubernetes-upgrade-572332)   </devices>
	I0717 01:38:15.809345   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) </domain>
	I0717 01:38:15.809358   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) 
	I0717 01:38:15.814428   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:39:19:8b in network default
	I0717 01:38:15.815115   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Ensuring networks are active...
	I0717 01:38:15.815140   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:15.815765   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Ensuring network default is active
	I0717 01:38:15.816004   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Ensuring network mk-kubernetes-upgrade-572332 is active
	I0717 01:38:15.816593   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Getting domain xml...
	I0717 01:38:15.817384   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Creating domain...
	I0717 01:38:17.205326   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Waiting to get IP...
	I0717 01:38:17.206279   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:17.206780   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:38:17.206823   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:38:17.206761   53376 retry.go:31] will retry after 247.725184ms: waiting for machine to come up
	I0717 01:38:17.456143   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:17.456731   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:38:17.456758   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:38:17.456695   53376 retry.go:31] will retry after 313.6077ms: waiting for machine to come up
	I0717 01:38:17.772370   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:17.772865   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:38:17.772892   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:38:17.772833   53376 retry.go:31] will retry after 337.272005ms: waiting for machine to come up
	I0717 01:38:18.111775   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:18.112139   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:38:18.112164   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:38:18.112091   53376 retry.go:31] will retry after 475.895335ms: waiting for machine to come up
	I0717 01:38:18.589465   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:18.589856   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:38:18.589884   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:38:18.589805   53376 retry.go:31] will retry after 537.4261ms: waiting for machine to come up
	I0717 01:38:19.128666   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:19.129190   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:38:19.129230   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:38:19.129140   53376 retry.go:31] will retry after 850.016464ms: waiting for machine to come up
	I0717 01:38:19.980798   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:19.981246   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:38:19.981273   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:38:19.981222   53376 retry.go:31] will retry after 1.030959254s: waiting for machine to come up
	I0717 01:38:21.013779   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:21.014398   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:38:21.014435   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:38:21.014347   53376 retry.go:31] will retry after 1.458245719s: waiting for machine to come up
	I0717 01:38:22.474063   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:22.474467   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:38:22.474498   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:38:22.474421   53376 retry.go:31] will retry after 1.415667515s: waiting for machine to come up
	I0717 01:38:23.891962   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:23.892394   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:38:23.892420   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:38:23.892357   53376 retry.go:31] will retry after 1.864770473s: waiting for machine to come up
	I0717 01:38:25.758852   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:25.759314   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:38:25.759343   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:38:25.759215   53376 retry.go:31] will retry after 2.270019504s: waiting for machine to come up
	I0717 01:38:28.030828   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:28.031297   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:38:28.031324   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:38:28.031245   53376 retry.go:31] will retry after 3.125048236s: waiting for machine to come up
	I0717 01:38:31.157587   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:31.158043   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:38:31.158069   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:38:31.157972   53376 retry.go:31] will retry after 3.736230927s: waiting for machine to come up
	I0717 01:38:34.896451   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:34.896851   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Found IP for machine: 192.168.72.73
	I0717 01:38:34.896883   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has current primary IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:34.896893   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Reserving static IP address...
	I0717 01:38:34.897154   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-572332", mac: "52:54:00:e2:36:51", ip: "192.168.72.73"} in network mk-kubernetes-upgrade-572332
	I0717 01:38:34.969194   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | Getting to WaitForSSH function...
	I0717 01:38:34.969224   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Reserved static IP address: 192.168.72.73
	I0717 01:38:34.969254   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Waiting for SSH to be available...
	I0717 01:38:34.971584   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:34.971893   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332
	I0717 01:38:34.971912   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find defined IP address of network mk-kubernetes-upgrade-572332 interface with MAC address 52:54:00:e2:36:51
	I0717 01:38:34.972074   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | Using SSH client type: external
	I0717 01:38:34.972115   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/kubernetes-upgrade-572332/id_rsa (-rw-------)
	I0717 01:38:34.972150   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/kubernetes-upgrade-572332/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:38:34.972166   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | About to run SSH command:
	I0717 01:38:34.972182   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | exit 0
	I0717 01:38:34.975628   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | SSH cmd err, output: exit status 255: 
	I0717 01:38:34.975654   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0717 01:38:34.975664   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | command : exit 0
	I0717 01:38:34.975672   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | err     : exit status 255
	I0717 01:38:34.975683   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | output  : 
	I0717 01:38:37.978204   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | Getting to WaitForSSH function...
	I0717 01:38:37.980275   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:37.980699   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:38:29 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:38:37.980730   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:37.980891   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | Using SSH client type: external
	I0717 01:38:37.980919   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/kubernetes-upgrade-572332/id_rsa (-rw-------)
	I0717 01:38:37.980959   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.73 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/kubernetes-upgrade-572332/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:38:37.980977   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | About to run SSH command:
	I0717 01:38:37.980988   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | exit 0
	I0717 01:38:38.110633   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | SSH cmd err, output: <nil>: 
	I0717 01:38:38.110904   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) KVM machine creation complete!
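For reference, the "exit 0" reachability probe logged above can be reproduced by hand. This is only an illustrative sketch assembled from the options recorded in this run; the guest IP and key path are specific to this log and will differ on other hosts.

	# Probe SSH on the freshly created guest the same way the logged command does.
	ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	    -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/kubernetes-upgrade-572332/id_rsa \
	    docker@192.168.72.73 'exit 0' && echo "guest SSH is up"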
	I0717 01:38:38.111212   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetConfigRaw
	I0717 01:38:38.111816   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .DriverName
	I0717 01:38:38.111997   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .DriverName
	I0717 01:38:38.112135   53310 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 01:38:38.112150   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetState
	I0717 01:38:38.113355   53310 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 01:38:38.113367   53310 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 01:38:38.113372   53310 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 01:38:38.113378   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHHostname
	I0717 01:38:38.115294   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:38.115692   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:38:29 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:38:38.115711   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:38.115872   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHPort
	I0717 01:38:38.116042   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:38:38.116206   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:38:38.116358   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHUsername
	I0717 01:38:38.116537   53310 main.go:141] libmachine: Using SSH client type: native
	I0717 01:38:38.116745   53310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.73 22 <nil> <nil>}
	I0717 01:38:38.116757   53310 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 01:38:38.229958   53310 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:38:38.229983   53310 main.go:141] libmachine: Detecting the provisioner...
	I0717 01:38:38.230001   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHHostname
	I0717 01:38:38.232848   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:38.233223   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:38:29 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:38:38.233250   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:38.233398   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHPort
	I0717 01:38:38.233608   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:38:38.233802   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:38:38.233974   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHUsername
	I0717 01:38:38.234177   53310 main.go:141] libmachine: Using SSH client type: native
	I0717 01:38:38.234357   53310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.73 22 <nil> <nil>}
	I0717 01:38:38.234371   53310 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 01:38:38.343157   53310 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 01:38:38.343230   53310 main.go:141] libmachine: found compatible host: buildroot
	I0717 01:38:38.343236   53310 main.go:141] libmachine: Provisioning with buildroot...
	I0717 01:38:38.343244   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetMachineName
	I0717 01:38:38.343482   53310 buildroot.go:166] provisioning hostname "kubernetes-upgrade-572332"
	I0717 01:38:38.343506   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetMachineName
	I0717 01:38:38.343692   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHHostname
	I0717 01:38:38.346313   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:38.346694   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:38:29 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:38:38.346732   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:38.346873   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHPort
	I0717 01:38:38.347051   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:38:38.347204   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:38:38.347356   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHUsername
	I0717 01:38:38.347494   53310 main.go:141] libmachine: Using SSH client type: native
	I0717 01:38:38.347672   53310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.73 22 <nil> <nil>}
	I0717 01:38:38.347688   53310 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-572332 && echo "kubernetes-upgrade-572332" | sudo tee /etc/hostname
	I0717 01:38:38.473069   53310 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-572332
	
	I0717 01:38:38.473096   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHHostname
	I0717 01:38:38.475806   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:38.476168   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:38:29 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:38:38.476202   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:38.476326   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHPort
	I0717 01:38:38.476508   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:38:38.476672   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:38:38.476827   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHUsername
	I0717 01:38:38.476992   53310 main.go:141] libmachine: Using SSH client type: native
	I0717 01:38:38.477148   53310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.73 22 <nil> <nil>}
	I0717 01:38:38.477164   53310 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-572332' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-572332/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-572332' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:38:38.596177   53310 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:38:38.596221   53310 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:38:38.596261   53310 buildroot.go:174] setting up certificates
	I0717 01:38:38.596270   53310 provision.go:84] configureAuth start
	I0717 01:38:38.596279   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetMachineName
	I0717 01:38:38.596569   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetIP
	I0717 01:38:38.599250   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:38.599584   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:38:29 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:38:38.599612   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:38.599757   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHHostname
	I0717 01:38:38.601921   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:38.602228   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:38:29 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:38:38.602268   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:38.602379   53310 provision.go:143] copyHostCerts
	I0717 01:38:38.602448   53310 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:38:38.602465   53310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:38:38.602532   53310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:38:38.602654   53310 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:38:38.602665   53310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:38:38.602695   53310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:38:38.602780   53310 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:38:38.602789   53310 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:38:38.602817   53310 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:38:38.602881   53310 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-572332 san=[127.0.0.1 192.168.72.73 kubernetes-upgrade-572332 localhost minikube]
	I0717 01:38:38.762318   53310 provision.go:177] copyRemoteCerts
	I0717 01:38:38.762385   53310 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:38:38.762420   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHHostname
	I0717 01:38:38.765456   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:38.765834   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:38:29 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:38:38.765853   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:38.766004   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHPort
	I0717 01:38:38.766155   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:38:38.766307   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHUsername
	I0717 01:38:38.766406   53310 sshutil.go:53] new ssh client: &{IP:192.168.72.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/kubernetes-upgrade-572332/id_rsa Username:docker}
	I0717 01:38:38.853085   53310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 01:38:38.876316   53310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:38:38.899251   53310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 01:38:38.923464   53310 provision.go:87] duration metric: took 327.18355ms to configureAuth
	I0717 01:38:38.923496   53310 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:38:38.923688   53310 config.go:182] Loaded profile config "kubernetes-upgrade-572332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 01:38:38.923757   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHHostname
	I0717 01:38:38.926231   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:38.926543   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:38:29 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:38:38.926587   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:38.926720   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHPort
	I0717 01:38:38.926926   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:38:38.927114   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:38:38.927269   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHUsername
	I0717 01:38:38.927420   53310 main.go:141] libmachine: Using SSH client type: native
	I0717 01:38:38.927572   53310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.73 22 <nil> <nil>}
	I0717 01:38:38.927586   53310 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:38:39.206521   53310 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:38:39.206592   53310 main.go:141] libmachine: Checking connection to Docker...
	I0717 01:38:39.206605   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetURL
	I0717 01:38:39.207933   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | Using libvirt version 6000000
	I0717 01:38:39.209935   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:39.210246   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:38:29 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:38:39.210278   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:39.210403   53310 main.go:141] libmachine: Docker is up and running!
	I0717 01:38:39.210418   53310 main.go:141] libmachine: Reticulating splines...
	I0717 01:38:39.210426   53310 client.go:171] duration metric: took 23.881085743s to LocalClient.Create
	I0717 01:38:39.210448   53310 start.go:167] duration metric: took 23.881145552s to libmachine.API.Create "kubernetes-upgrade-572332"
	I0717 01:38:39.210459   53310 start.go:293] postStartSetup for "kubernetes-upgrade-572332" (driver="kvm2")
	I0717 01:38:39.210475   53310 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:38:39.210496   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .DriverName
	I0717 01:38:39.210743   53310 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:38:39.210768   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHHostname
	I0717 01:38:39.212850   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:39.213137   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:38:29 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:38:39.213162   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:39.213301   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHPort
	I0717 01:38:39.213465   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:38:39.213645   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHUsername
	I0717 01:38:39.213776   53310 sshutil.go:53] new ssh client: &{IP:192.168.72.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/kubernetes-upgrade-572332/id_rsa Username:docker}
	I0717 01:38:39.300497   53310 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:38:39.304991   53310 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:38:39.305009   53310 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:38:39.305080   53310 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:38:39.305154   53310 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:38:39.305237   53310 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:38:39.315371   53310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:38:39.338800   53310 start.go:296] duration metric: took 128.327054ms for postStartSetup
	I0717 01:38:39.338841   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetConfigRaw
	I0717 01:38:39.339423   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetIP
	I0717 01:38:39.342022   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:39.342346   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:38:29 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:38:39.342377   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:39.342533   53310 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/config.json ...
	I0717 01:38:39.342754   53310 start.go:128] duration metric: took 24.035000641s to createHost
	I0717 01:38:39.342781   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHHostname
	I0717 01:38:39.344876   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:39.345176   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:38:29 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:38:39.345203   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:39.345347   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHPort
	I0717 01:38:39.345518   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:38:39.345661   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:38:39.345829   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHUsername
	I0717 01:38:39.345993   53310 main.go:141] libmachine: Using SSH client type: native
	I0717 01:38:39.346185   53310 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.73 22 <nil> <nil>}
	I0717 01:38:39.346198   53310 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 01:38:39.459587   53310 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721180319.434085826
	
	I0717 01:38:39.459619   53310 fix.go:216] guest clock: 1721180319.434085826
	I0717 01:38:39.459631   53310 fix.go:229] Guest: 2024-07-17 01:38:39.434085826 +0000 UTC Remote: 2024-07-17 01:38:39.34276795 +0000 UTC m=+28.856730464 (delta=91.317876ms)
	I0717 01:38:39.459690   53310 fix.go:200] guest clock delta is within tolerance: 91.317876ms
	I0717 01:38:39.459699   53310 start.go:83] releasing machines lock for "kubernetes-upgrade-572332", held for 24.152098968s
	I0717 01:38:39.459739   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .DriverName
	I0717 01:38:39.459994   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetIP
	I0717 01:38:39.462888   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:39.463440   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:38:29 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:38:39.463441   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .DriverName
	I0717 01:38:39.463465   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:39.464034   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .DriverName
	I0717 01:38:39.464239   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .DriverName
	I0717 01:38:39.464329   53310 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:38:39.464372   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHHostname
	I0717 01:38:39.464454   53310 ssh_runner.go:195] Run: cat /version.json
	I0717 01:38:39.464481   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHHostname
	I0717 01:38:39.467255   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:39.467596   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:38:29 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:38:39.467626   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:39.467651   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:39.467829   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHPort
	I0717 01:38:39.468020   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:38:39.468081   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:38:29 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:38:39.468141   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:39.468182   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHUsername
	I0717 01:38:39.468319   53310 sshutil.go:53] new ssh client: &{IP:192.168.72.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/kubernetes-upgrade-572332/id_rsa Username:docker}
	I0717 01:38:39.468386   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHPort
	I0717 01:38:39.468529   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:38:39.468678   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHUsername
	I0717 01:38:39.468834   53310 sshutil.go:53] new ssh client: &{IP:192.168.72.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/kubernetes-upgrade-572332/id_rsa Username:docker}
	I0717 01:38:39.576362   53310 ssh_runner.go:195] Run: systemctl --version
	I0717 01:38:39.583188   53310 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:38:39.745620   53310 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:38:39.751897   53310 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:38:39.751963   53310 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:38:39.767945   53310 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:38:39.767968   53310 start.go:495] detecting cgroup driver to use...
	I0717 01:38:39.768041   53310 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:38:39.788678   53310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:38:39.806423   53310 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:38:39.806471   53310 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:38:39.823692   53310 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:38:39.840205   53310 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:38:39.960091   53310 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:38:40.108854   53310 docker.go:233] disabling docker service ...
	I0717 01:38:40.108947   53310 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:38:40.123250   53310 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:38:40.135667   53310 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:38:40.282509   53310 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:38:40.405213   53310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:38:40.419474   53310 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:38:40.438017   53310 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 01:38:40.438102   53310 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:38:40.448818   53310 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:38:40.448903   53310 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:38:40.459159   53310 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:38:40.469482   53310 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:38:40.479756   53310 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:38:40.491177   53310 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:38:40.501271   53310 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:38:40.501330   53310 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:38:40.514764   53310 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:38:40.525234   53310 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:38:40.666677   53310 ssh_runner.go:195] Run: sudo systemctl restart crio
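Condensed, the CRI-O provisioning performed above amounts to the following shell steps. This is a sketch of the commands recorded in the log for this v1.20.0/crio run, not a general recipe; the pause image tag and config path are the ones logged.

	# Point crictl at the CRI-O socket.
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# Align CRI-O with the pause image and cgroup driver used in this run.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# Kernel prerequisites for the bridge CNI, then restart the runtime.
	sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio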
	I0717 01:38:40.820367   53310 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:38:40.820469   53310 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:38:40.826665   53310 start.go:563] Will wait 60s for crictl version
	I0717 01:38:40.826724   53310 ssh_runner.go:195] Run: which crictl
	I0717 01:38:40.831097   53310 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:38:40.874630   53310 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:38:40.874698   53310 ssh_runner.go:195] Run: crio --version
	I0717 01:38:40.904622   53310 ssh_runner.go:195] Run: crio --version
	I0717 01:38:40.934108   53310 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0717 01:38:40.935342   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetIP
	I0717 01:38:40.938566   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:40.939007   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:38:29 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:38:40.939048   53310 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:38:40.939208   53310 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 01:38:40.943459   53310 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:38:40.963356   53310 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-572332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-572332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.73 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:38:40.963449   53310 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 01:38:40.963490   53310 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:38:41.013208   53310 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 01:38:41.013299   53310 ssh_runner.go:195] Run: which lz4
	I0717 01:38:41.017502   53310 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 01:38:41.021860   53310 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:38:41.021911   53310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 01:38:42.751299   53310 crio.go:462] duration metric: took 1.733821434s to copy over tarball
	I0717 01:38:42.751380   53310 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:38:45.246645   53310 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.495231728s)
	I0717 01:38:45.246678   53310 crio.go:469] duration metric: took 2.495349867s to extract the tarball
	I0717 01:38:45.246688   53310 ssh_runner.go:146] rm: /preloaded.tar.lz4
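The preload step above can be spot-checked on the guest with the same commands the log records; a brief sketch, assuming the tarball has already been copied to /preloaded.tar.lz4:

	# Confirm the tarball arrived, unpack it into /var, then list what the runtime now sees.
	stat -c "%s %y" /preloaded.tar.lz4
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo crictl images --output json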
	I0717 01:38:45.289087   53310 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:38:45.342422   53310 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 01:38:45.342445   53310 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 01:38:45.342499   53310 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:38:45.342561   53310 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:38:45.342568   53310 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:38:45.342508   53310 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:38:45.342598   53310 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 01:38:45.342571   53310 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:38:45.342626   53310 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 01:38:45.342534   53310 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:38:45.344155   53310 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:38:45.344171   53310 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:38:45.344180   53310 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 01:38:45.344189   53310 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:38:45.344216   53310 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:38:45.344217   53310 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:38:45.344246   53310 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 01:38:45.344284   53310 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:38:45.561591   53310 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:38:45.586162   53310 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 01:38:45.604839   53310 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 01:38:45.604894   53310 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:38:45.604953   53310 ssh_runner.go:195] Run: which crictl
	I0717 01:38:45.632819   53310 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 01:38:45.632870   53310 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 01:38:45.632901   53310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:38:45.632908   53310 ssh_runner.go:195] Run: which crictl
	I0717 01:38:45.669265   53310 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:38:45.670336   53310 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 01:38:45.670427   53310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 01:38:45.683175   53310 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 01:38:45.693872   53310 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 01:38:45.695826   53310 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:38:45.696697   53310 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:38:45.752020   53310 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 01:38:45.752066   53310 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:38:45.752113   53310 ssh_runner.go:195] Run: which crictl
	I0717 01:38:45.760627   53310 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 01:38:45.811705   53310 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 01:38:45.811750   53310 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:38:45.811795   53310 ssh_runner.go:195] Run: which crictl
	I0717 01:38:45.825230   53310 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 01:38:45.825276   53310 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 01:38:45.825308   53310 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 01:38:45.825323   53310 ssh_runner.go:195] Run: which crictl
	I0717 01:38:45.825345   53310 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:38:45.825347   53310 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 01:38:45.825385   53310 ssh_runner.go:195] Run: which crictl
	I0717 01:38:45.825386   53310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:38:45.825398   53310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 01:38:45.825402   53310 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:38:45.825432   53310 ssh_runner.go:195] Run: which crictl
	I0717 01:38:45.841327   53310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:38:45.841346   53310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 01:38:45.896787   53310 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 01:38:45.905270   53310 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:38:45.905391   53310 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 01:38:45.940805   53310 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 01:38:45.940812   53310 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 01:38:45.958169   53310 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 01:38:47.100320   53310 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:38:47.241062   53310 cache_images.go:92] duration metric: took 1.898601384s to LoadCachedImages
	W0717 01:38:47.241213   53310 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0717 01:38:47.241233   53310 kubeadm.go:934] updating node { 192.168.72.73 8443 v1.20.0 crio true true} ...
	I0717 01:38:47.241351   53310 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-572332 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.73
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-572332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
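For context, the drop-in above is what minikube writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp step further down in this log). A minimal sketch of how the effective unit could be checked by hand on the node, using only standard systemd commands:

    # Show the kubelet unit together with its drop-in overrides (10-kubeadm.conf).
    systemctl cat kubelet
    # Re-read unit files and start the service, mirroring the daemon-reload/start steps below.
    sudo systemctl daemon-reload
    sudo systemctl start kubelet
    # Confirm which ExecStart flags are actually in effect.
    systemctl show kubelet -p ExecStart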
	I0717 01:38:47.241422   53310 ssh_runner.go:195] Run: crio config
	I0717 01:38:47.289485   53310 cni.go:84] Creating CNI manager for ""
	I0717 01:38:47.289514   53310 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:38:47.289528   53310 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:38:47.289547   53310 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.73 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-572332 NodeName:kubernetes-upgrade-572332 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.73"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.73 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 01:38:47.289696   53310 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.73
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-572332"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.73
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.73"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
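As a sanity check, a kubeadm config like the one above can be exercised without mutating node state. A minimal sketch, assuming the file has been written to /var/tmp/minikube/kubeadm.yaml (the path used later in this log) and the pinned kubeadm binary under /var/lib/minikube/binaries/v1.20.0:

    # Render what kubeadm would do with this config without applying it.
    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
    # List the images the config implies; relevant here because the cache-load step above reported missing images.
    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml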
	
	I0717 01:38:47.289762   53310 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 01:38:47.300206   53310 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:38:47.300277   53310 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:38:47.310059   53310 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0717 01:38:47.328821   53310 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:38:47.345502   53310 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0717 01:38:47.362946   53310 ssh_runner.go:195] Run: grep 192.168.72.73	control-plane.minikube.internal$ /etc/hosts
	I0717 01:38:47.367165   53310 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.73	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:38:47.380056   53310 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:38:47.510136   53310 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:38:47.529299   53310 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332 for IP: 192.168.72.73
	I0717 01:38:47.529323   53310 certs.go:194] generating shared ca certs ...
	I0717 01:38:47.529343   53310 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:38:47.529514   53310 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:38:47.529574   53310 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:38:47.529588   53310 certs.go:256] generating profile certs ...
	I0717 01:38:47.529664   53310 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/client.key
	I0717 01:38:47.529681   53310 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/client.crt with IP's: []
	I0717 01:38:47.770097   53310 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/client.crt ...
	I0717 01:38:47.770135   53310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/client.crt: {Name:mka3c342f128515877044d95a5b5b6b0e424cc6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:38:47.770341   53310 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/client.key ...
	I0717 01:38:47.770361   53310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/client.key: {Name:mk85e40398bbb53ea17230c17458f3d16b9ebafc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:38:47.770468   53310 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/apiserver.key.e1dd5c49
	I0717 01:38:47.770491   53310 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/apiserver.crt.e1dd5c49 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.73]
	I0717 01:38:47.832143   53310 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/apiserver.crt.e1dd5c49 ...
	I0717 01:38:47.832169   53310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/apiserver.crt.e1dd5c49: {Name:mk023f6be9b290907b6d57013fd33f506ce37c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:38:47.832332   53310 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/apiserver.key.e1dd5c49 ...
	I0717 01:38:47.832349   53310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/apiserver.key.e1dd5c49: {Name:mk46e6b999b8ef2c67cc7a3de27b954b9351787a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:38:47.832439   53310 certs.go:381] copying /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/apiserver.crt.e1dd5c49 -> /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/apiserver.crt
	I0717 01:38:47.832546   53310 certs.go:385] copying /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/apiserver.key.e1dd5c49 -> /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/apiserver.key
	I0717 01:38:47.832628   53310 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/proxy-client.key
	I0717 01:38:47.832648   53310 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/proxy-client.crt with IP's: []
	I0717 01:38:47.970528   53310 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/proxy-client.crt ...
	I0717 01:38:47.970580   53310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/proxy-client.crt: {Name:mk9aaf8991406b5eb3338ad67a82e88539cd3fa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:38:47.970785   53310 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/proxy-client.key ...
	I0717 01:38:47.970806   53310 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/proxy-client.key: {Name:mka369dc24af836e24a035d10d38a702ac32bb90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:38:47.971004   53310 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:38:47.971051   53310 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:38:47.971065   53310 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:38:47.971099   53310 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:38:47.971134   53310 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:38:47.971169   53310 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:38:47.971222   53310 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
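Before these files are copied onto the node, the SANs minikube baked into the apiserver certificate (10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.72.73 per the crypto.go line above) can be double-checked locally. A minimal sketch with openssl:

    # Print the subject alternative names of the freshly generated apiserver certificate.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/apiserver.crt \
      | grep -A1 "Subject Alternative Name"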
	I0717 01:38:47.971777   53310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:38:48.000818   53310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:38:48.028763   53310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:38:48.056605   53310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:38:48.084858   53310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0717 01:38:48.115369   53310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 01:38:48.142297   53310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:38:48.172317   53310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 01:38:48.199937   53310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:38:48.224213   53310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:38:48.250025   53310 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:38:48.278337   53310 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:38:48.300061   53310 ssh_runner.go:195] Run: openssl version
	I0717 01:38:48.306449   53310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:38:48.317208   53310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:38:48.321951   53310 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:38:48.322023   53310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:38:48.328027   53310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:38:48.339123   53310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:38:48.350892   53310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:38:48.355293   53310 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:38:48.355372   53310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:38:48.363087   53310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:38:48.381718   53310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:38:48.398500   53310 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:38:48.403395   53310 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:38:48.403457   53310 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:38:48.409645   53310 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
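The b5213941.0 / 51391683.0 / 3ec20f2e.0 names above follow OpenSSL's hashed-directory convention: the link name is the certificate's subject hash plus a .0 suffix. A minimal sketch reproducing the minikubeCA step by hand:

    # Compute the subject hash and create the lookup symlink OpenSSL expects.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
    # Verify the CA resolves through the hashed directory.
    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem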
	I0717 01:38:48.421723   53310 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:38:48.429041   53310 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 01:38:48.429105   53310 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-572332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-572332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.73 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:38:48.429193   53310 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:38:48.429247   53310 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:38:48.483626   53310 cri.go:89] found id: ""
	I0717 01:38:48.483693   53310 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:38:48.498938   53310 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:38:48.511481   53310 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:38:48.524610   53310 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:38:48.524631   53310 kubeadm.go:157] found existing configuration files:
	
	I0717 01:38:48.524680   53310 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:38:48.534655   53310 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:38:48.534722   53310 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:38:48.545472   53310 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:38:48.555193   53310 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:38:48.555258   53310 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:38:48.565637   53310 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:38:48.577842   53310 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:38:48.577915   53310 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:38:48.588024   53310 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:38:48.598904   53310 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:38:48.598987   53310 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:38:48.609159   53310 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 01:38:48.735329   53310 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 01:38:48.735495   53310 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 01:38:48.920775   53310 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 01:38:48.920901   53310 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 01:38:48.921026   53310 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 01:38:49.122577   53310 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 01:38:49.125628   53310 out.go:204]   - Generating certificates and keys ...
	I0717 01:38:49.125734   53310 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 01:38:49.125843   53310 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 01:38:49.362580   53310 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 01:38:49.808986   53310 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 01:38:49.965730   53310 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 01:38:50.216693   53310 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 01:38:50.310811   53310 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 01:38:50.311009   53310 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-572332 localhost] and IPs [192.168.72.73 127.0.0.1 ::1]
	I0717 01:38:50.422910   53310 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 01:38:50.423154   53310 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-572332 localhost] and IPs [192.168.72.73 127.0.0.1 ::1]
	I0717 01:38:50.709836   53310 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 01:38:50.864317   53310 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 01:38:50.944109   53310 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 01:38:50.944407   53310 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 01:38:51.046731   53310 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 01:38:51.147177   53310 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 01:38:51.320854   53310 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 01:38:51.479937   53310 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 01:38:51.496064   53310 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 01:38:51.496845   53310 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 01:38:51.496909   53310 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 01:38:51.623772   53310 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 01:38:51.625812   53310 out.go:204]   - Booting up control plane ...
	I0717 01:38:51.625923   53310 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 01:38:51.637079   53310 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 01:38:51.638826   53310 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 01:38:51.639997   53310 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 01:38:51.644476   53310 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 01:39:31.639642   53310 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 01:39:31.640257   53310 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:39:31.640501   53310 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:39:36.640906   53310 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:39:36.641240   53310 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:39:46.640248   53310 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:39:46.640512   53310 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:40:06.640046   53310 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:40:06.640305   53310 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:40:46.642201   53310 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:40:46.642504   53310 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:40:46.642523   53310 kubeadm.go:310] 
	I0717 01:40:46.642584   53310 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 01:40:46.642665   53310 kubeadm.go:310] 		timed out waiting for the condition
	I0717 01:40:46.642682   53310 kubeadm.go:310] 
	I0717 01:40:46.642722   53310 kubeadm.go:310] 	This error is likely caused by:
	I0717 01:40:46.642760   53310 kubeadm.go:310] 		- The kubelet is not running
	I0717 01:40:46.642934   53310 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 01:40:46.642949   53310 kubeadm.go:310] 
	I0717 01:40:46.643076   53310 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 01:40:46.643129   53310 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 01:40:46.643191   53310 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 01:40:46.643210   53310 kubeadm.go:310] 
	I0717 01:40:46.643398   53310 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 01:40:46.643552   53310 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 01:40:46.643566   53310 kubeadm.go:310] 
	I0717 01:40:46.643714   53310 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 01:40:46.643847   53310 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 01:40:46.643952   53310 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 01:40:46.644060   53310 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 01:40:46.644097   53310 kubeadm.go:310] 
	I0717 01:40:46.644259   53310 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 01:40:46.644369   53310 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 01:40:46.644470   53310 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0717 01:40:46.644769   53310 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-572332 localhost] and IPs [192.168.72.73 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-572332 localhost] and IPs [192.168.72.73 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-572332 localhost] and IPs [192.168.72.73 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-572332 localhost] and IPs [192.168.72.73 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
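The failure mode above is the kubelet never answering on 127.0.0.1:10248, so kubeadm times out waiting for the control plane. A minimal sketch of the follow-up kubeadm itself suggests, run over an SSH session into the profile this test created (kubernetes-upgrade-572332):

    # Open a shell on the test VM, then follow kubeadm's own hints.
    minikube ssh -p kubernetes-upgrade-572332
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    # Look for control-plane containers that crashed on start (cri-o is the runtime configured above).
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause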
	
	I0717 01:40:46.644824   53310 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 01:40:47.680759   53310 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.035907448s)
	I0717 01:40:47.680847   53310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:40:47.695521   53310 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:40:47.705347   53310 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:40:47.705369   53310 kubeadm.go:157] found existing configuration files:
	
	I0717 01:40:47.705422   53310 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:40:47.715387   53310 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:40:47.715458   53310 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:40:47.725343   53310 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:40:47.734560   53310 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:40:47.734617   53310 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:40:47.743788   53310 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:40:47.752796   53310 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:40:47.752853   53310 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:40:47.761938   53310 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:40:47.770588   53310 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:40:47.770647   53310 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:40:47.779753   53310 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 01:40:47.855322   53310 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 01:40:47.855424   53310 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 01:40:48.015603   53310 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 01:40:48.015860   53310 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 01:40:48.016098   53310 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 01:40:48.215382   53310 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 01:40:48.372158   53310 out.go:204]   - Generating certificates and keys ...
	I0717 01:40:48.372274   53310 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 01:40:48.372360   53310 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 01:40:48.372500   53310 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 01:40:48.372618   53310 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 01:40:48.372733   53310 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 01:40:48.372814   53310 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 01:40:48.372896   53310 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 01:40:48.373011   53310 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 01:40:48.373137   53310 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 01:40:48.373272   53310 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 01:40:48.373329   53310 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 01:40:48.373410   53310 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 01:40:48.440302   53310 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 01:40:48.556315   53310 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 01:40:49.205975   53310 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 01:40:49.391862   53310 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 01:40:49.408330   53310 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 01:40:49.409919   53310 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 01:40:49.409989   53310 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 01:40:49.581242   53310 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 01:40:49.672831   53310 out.go:204]   - Booting up control plane ...
	I0717 01:40:49.673016   53310 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 01:40:49.673145   53310 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 01:40:49.673241   53310 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 01:40:49.673359   53310 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 01:40:49.673606   53310 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 01:41:29.597453   53310 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 01:41:29.597610   53310 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:41:29.597845   53310 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:41:34.598508   53310 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:41:34.598762   53310 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:41:44.599332   53310 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:41:44.599513   53310 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:42:04.598449   53310 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:42:04.598706   53310 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:42:44.598661   53310 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:42:44.598936   53310 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:42:44.598983   53310 kubeadm.go:310] 
	I0717 01:42:44.599054   53310 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 01:42:44.599115   53310 kubeadm.go:310] 		timed out waiting for the condition
	I0717 01:42:44.599124   53310 kubeadm.go:310] 
	I0717 01:42:44.599184   53310 kubeadm.go:310] 	This error is likely caused by:
	I0717 01:42:44.599234   53310 kubeadm.go:310] 		- The kubelet is not running
	I0717 01:42:44.599406   53310 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 01:42:44.599427   53310 kubeadm.go:310] 
	I0717 01:42:44.599552   53310 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 01:42:44.599602   53310 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 01:42:44.599649   53310 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 01:42:44.599658   53310 kubeadm.go:310] 
	I0717 01:42:44.599786   53310 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 01:42:44.599894   53310 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 01:42:44.599911   53310 kubeadm.go:310] 
	I0717 01:42:44.600061   53310 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 01:42:44.600179   53310 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 01:42:44.600279   53310 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 01:42:44.600374   53310 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 01:42:44.600387   53310 kubeadm.go:310] 
	I0717 01:42:44.601030   53310 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 01:42:44.601181   53310 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 01:42:44.601279   53310 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 01:42:44.601378   53310 kubeadm.go:394] duration metric: took 3m56.172276294s to StartCluster
	I0717 01:42:44.601459   53310 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:42:44.601530   53310 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:42:44.663292   53310 cri.go:89] found id: ""
	I0717 01:42:44.663324   53310 logs.go:276] 0 containers: []
	W0717 01:42:44.663336   53310 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:42:44.663345   53310 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:42:44.663420   53310 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:42:44.713416   53310 cri.go:89] found id: ""
	I0717 01:42:44.713444   53310 logs.go:276] 0 containers: []
	W0717 01:42:44.713455   53310 logs.go:278] No container was found matching "etcd"
	I0717 01:42:44.713462   53310 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:42:44.713522   53310 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:42:44.763598   53310 cri.go:89] found id: ""
	I0717 01:42:44.763623   53310 logs.go:276] 0 containers: []
	W0717 01:42:44.763633   53310 logs.go:278] No container was found matching "coredns"
	I0717 01:42:44.763640   53310 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:42:44.763701   53310 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:42:44.817084   53310 cri.go:89] found id: ""
	I0717 01:42:44.817113   53310 logs.go:276] 0 containers: []
	W0717 01:42:44.817123   53310 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:42:44.817132   53310 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:42:44.817199   53310 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:42:44.862846   53310 cri.go:89] found id: ""
	I0717 01:42:44.862882   53310 logs.go:276] 0 containers: []
	W0717 01:42:44.862894   53310 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:42:44.862903   53310 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:42:44.862968   53310 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:42:44.904077   53310 cri.go:89] found id: ""
	I0717 01:42:44.904107   53310 logs.go:276] 0 containers: []
	W0717 01:42:44.904118   53310 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:42:44.904126   53310 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:42:44.904185   53310 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:42:44.941694   53310 cri.go:89] found id: ""
	I0717 01:42:44.941727   53310 logs.go:276] 0 containers: []
	W0717 01:42:44.941739   53310 logs.go:278] No container was found matching "kindnet"
	I0717 01:42:44.941750   53310 logs.go:123] Gathering logs for kubelet ...
	I0717 01:42:44.941765   53310 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:42:45.005053   53310 logs.go:123] Gathering logs for dmesg ...
	I0717 01:42:45.005087   53310 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:42:45.019711   53310 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:42:45.019743   53310 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:42:45.202776   53310 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:42:45.202804   53310 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:42:45.202821   53310 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:42:45.317188   53310 logs.go:123] Gathering logs for container status ...
	I0717 01:42:45.317228   53310 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0717 01:42:45.368225   53310 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 01:42:45.368310   53310 out.go:239] * 
	W0717 01:42:45.368404   53310 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 01:42:45.368455   53310 out.go:239] * 
	W0717 01:42:45.369246   53310 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 01:42:45.372854   53310 out.go:177] 
	W0717 01:42:45.374206   53310 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 01:42:45.374285   53310 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 01:42:45.374311   53310 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 01:42:45.376055   53310 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-572332 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
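The kubeadm output above shows the kubelet health endpoint on port 10248 refusing connections throughout the wait-control-plane phase, which matches the K8S_KUBELET_NOT_RUNNING exit reason. A minimal manual follow-up on the node, assuming the profile were still running and reachable via 'minikube ssh' (a hypothetical check, not part of this test run), might look like:

	minikube ssh -p kubernetes-upgrade-572332
	# inside the guest: is the kubelet service active, and why did it last exit?
	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 50
	# probe the same health endpoint kubeadm polls during wait-control-plane
	curl -sS http://localhost:10248/healthz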
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-572332
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-572332: (1.444814281s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-572332 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-572332 status --format={{.Host}}: exit status 7 (63.996715ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-572332 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0717 01:42:58.379784   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-572332 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.452232959s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-572332 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-572332 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-572332 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (88.20791ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-572332] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19264
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-572332
	    minikube start -p kubernetes-upgrade-572332 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5723322 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-572332 --kubernetes-version=v1.31.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-572332 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-572332 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (30.606062471s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-07-17 01:43:57.167409706 +0000 UTC m=+4936.215290827
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-572332 -n kubernetes-upgrade-572332
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-572332 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-572332 logs -n 25: (1.650760425s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-894370 sudo                               | kindnet-894370            | jenkins | v1.33.1 | 17 Jul 24 01:43 UTC | 17 Jul 24 01:43 UTC |
	|         | systemctl status kubelet --all                       |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-894370 sudo                               | kindnet-894370            | jenkins | v1.33.1 | 17 Jul 24 01:43 UTC | 17 Jul 24 01:43 UTC |
	|         | systemctl cat kubelet                                |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-894370 sudo                               | kindnet-894370            | jenkins | v1.33.1 | 17 Jul 24 01:43 UTC | 17 Jul 24 01:43 UTC |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-894370 sudo cat                           | kindnet-894370            | jenkins | v1.33.1 | 17 Jul 24 01:43 UTC | 17 Jul 24 01:43 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-894370 sudo cat                           | kindnet-894370            | jenkins | v1.33.1 | 17 Jul 24 01:43 UTC | 17 Jul 24 01:43 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-894370 sudo                               | kindnet-894370            | jenkins | v1.33.1 | 17 Jul 24 01:43 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-894370 sudo                               | kindnet-894370            | jenkins | v1.33.1 | 17 Jul 24 01:43 UTC | 17 Jul 24 01:43 UTC |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-894370 sudo cat                           | kindnet-894370            | jenkins | v1.33.1 | 17 Jul 24 01:43 UTC | 17 Jul 24 01:43 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-894370 sudo docker                        | kindnet-894370            | jenkins | v1.33.1 | 17 Jul 24 01:43 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-894370 sudo                               | kindnet-894370            | jenkins | v1.33.1 | 17 Jul 24 01:43 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-894370 sudo                               | kindnet-894370            | jenkins | v1.33.1 | 17 Jul 24 01:43 UTC | 17 Jul 24 01:43 UTC |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-894370 sudo cat                           | kindnet-894370            | jenkins | v1.33.1 | 17 Jul 24 01:43 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p kindnet-894370 sudo cat                           | kindnet-894370            | jenkins | v1.33.1 | 17 Jul 24 01:43 UTC | 17 Jul 24 01:43 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-894370 sudo                               | kindnet-894370            | jenkins | v1.33.1 | 17 Jul 24 01:43 UTC | 17 Jul 24 01:43 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-894370 sudo                               | kindnet-894370            | jenkins | v1.33.1 | 17 Jul 24 01:43 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-894370 sudo                               | kindnet-894370            | jenkins | v1.33.1 | 17 Jul 24 01:43 UTC | 17 Jul 24 01:43 UTC |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-894370 sudo cat                           | kindnet-894370            | jenkins | v1.33.1 | 17 Jul 24 01:43 UTC | 17 Jul 24 01:43 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-894370 sudo cat                           | kindnet-894370            | jenkins | v1.33.1 | 17 Jul 24 01:43 UTC | 17 Jul 24 01:43 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-894370 sudo                               | kindnet-894370            | jenkins | v1.33.1 | 17 Jul 24 01:43 UTC | 17 Jul 24 01:43 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-894370 sudo                               | kindnet-894370            | jenkins | v1.33.1 | 17 Jul 24 01:43 UTC | 17 Jul 24 01:43 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-894370 sudo                               | kindnet-894370            | jenkins | v1.33.1 | 17 Jul 24 01:43 UTC | 17 Jul 24 01:43 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p kindnet-894370 sudo find                          | kindnet-894370            | jenkins | v1.33.1 | 17 Jul 24 01:43 UTC | 17 Jul 24 01:43 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p kindnet-894370 sudo crio                          | kindnet-894370            | jenkins | v1.33.1 | 17 Jul 24 01:43 UTC | 17 Jul 24 01:43 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p kindnet-894370                                    | kindnet-894370            | jenkins | v1.33.1 | 17 Jul 24 01:43 UTC | 17 Jul 24 01:43 UTC |
	| start   | -p enable-default-cni-894370                         | enable-default-cni-894370 | jenkins | v1.33.1 | 17 Jul 24 01:43 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 01:43:34
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 01:43:34.106226   60250 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:43:34.106340   60250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:43:34.106348   60250 out.go:304] Setting ErrFile to fd 2...
	I0717 01:43:34.106351   60250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:43:34.106519   60250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 01:43:34.107100   60250 out.go:298] Setting JSON to false
	I0717 01:43:34.108116   60250 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5156,"bootTime":1721175458,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:43:34.108169   60250 start.go:139] virtualization: kvm guest
	I0717 01:43:34.110344   60250 out.go:177] * [enable-default-cni-894370] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:43:34.111740   60250 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 01:43:34.111795   60250 notify.go:220] Checking for updates...
	I0717 01:43:34.114366   60250 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:43:34.115744   60250 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:43:34.116999   60250 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 01:43:34.118340   60250 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:43:34.119514   60250 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:43:34.121161   60250 config.go:182] Loaded profile config "calico-894370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:43:34.121269   60250 config.go:182] Loaded profile config "custom-flannel-894370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:43:34.121366   60250 config.go:182] Loaded profile config "kubernetes-upgrade-572332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:43:34.121454   60250 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:43:34.156706   60250 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 01:43:34.157879   60250 start.go:297] selected driver: kvm2
	I0717 01:43:34.157897   60250 start.go:901] validating driver "kvm2" against <nil>
	I0717 01:43:34.157913   60250 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:43:34.158610   60250 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:43:34.158690   60250 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19264-3908/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:43:34.173118   60250 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:43:34.173167   60250 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E0717 01:43:34.173350   60250 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0717 01:43:34.173380   60250 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:43:34.173411   60250 cni.go:84] Creating CNI manager for "bridge"
	I0717 01:43:34.173419   60250 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 01:43:34.173485   60250 start.go:340] cluster config:
	{Name:enable-default-cni-894370 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:enable-default-cni-894370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:43:34.173601   60250 iso.go:125] acquiring lock: {Name:mk1e382ea32906eee794c9b90164432d2562b456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:43:34.175196   60250 out.go:177] * Starting "enable-default-cni-894370" primary control-plane node in "enable-default-cni-894370" cluster
	I0717 01:43:35.771603   59210 start.go:364] duration metric: took 9.021668936s to acquireMachinesLock for "kubernetes-upgrade-572332"
	I0717 01:43:35.771660   59210 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:43:35.771670   59210 fix.go:54] fixHost starting: 
	I0717 01:43:35.772038   59210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:43:35.772080   59210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:43:35.788274   59210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41659
	I0717 01:43:35.788714   59210 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:43:35.789200   59210 main.go:141] libmachine: Using API Version  1
	I0717 01:43:35.789223   59210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:43:35.789562   59210 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:43:35.789728   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .DriverName
	I0717 01:43:35.789884   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetState
	I0717 01:43:35.791265   59210 fix.go:112] recreateIfNeeded on kubernetes-upgrade-572332: state=Running err=<nil>
	W0717 01:43:35.791284   59210 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:43:35.793223   59210 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-572332" VM ...
	I0717 01:43:34.302936   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:34.303854   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has current primary IP address 192.168.39.194 and MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:34.303874   57364 main.go:141] libmachine: (calico-894370) Found IP for machine: 192.168.39.194
	I0717 01:43:34.303886   57364 main.go:141] libmachine: (calico-894370) Reserving static IP address...
	I0717 01:43:34.304337   57364 main.go:141] libmachine: (calico-894370) DBG | unable to find host DHCP lease matching {name: "calico-894370", mac: "52:54:00:6d:f0:3d", ip: "192.168.39.194"} in network mk-calico-894370
	I0717 01:43:34.377672   57364 main.go:141] libmachine: (calico-894370) DBG | Getting to WaitForSSH function...
	I0717 01:43:34.377701   57364 main.go:141] libmachine: (calico-894370) Reserved static IP address: 192.168.39.194
	I0717 01:43:34.377755   57364 main.go:141] libmachine: (calico-894370) Waiting for SSH to be available...
	I0717 01:43:34.380192   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:34.380531   57364 main.go:141] libmachine: (calico-894370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:f0:3d", ip: ""} in network mk-calico-894370: {Iface:virbr3 ExpiryTime:2024-07-17 02:43:26 +0000 UTC Type:0 Mac:52:54:00:6d:f0:3d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6d:f0:3d}
	I0717 01:43:34.380559   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined IP address 192.168.39.194 and MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:34.380643   57364 main.go:141] libmachine: (calico-894370) DBG | Using SSH client type: external
	I0717 01:43:34.380670   57364 main.go:141] libmachine: (calico-894370) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/calico-894370/id_rsa (-rw-------)
	I0717 01:43:34.380714   57364 main.go:141] libmachine: (calico-894370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.194 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/calico-894370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:43:34.380730   57364 main.go:141] libmachine: (calico-894370) DBG | About to run SSH command:
	I0717 01:43:34.380748   57364 main.go:141] libmachine: (calico-894370) DBG | exit 0
	I0717 01:43:34.510679   57364 main.go:141] libmachine: (calico-894370) DBG | SSH cmd err, output: <nil>: 
	I0717 01:43:34.511009   57364 main.go:141] libmachine: (calico-894370) KVM machine creation complete!
	I0717 01:43:34.511315   57364 main.go:141] libmachine: (calico-894370) Calling .GetConfigRaw
	I0717 01:43:34.511845   57364 main.go:141] libmachine: (calico-894370) Calling .DriverName
	I0717 01:43:34.512066   57364 main.go:141] libmachine: (calico-894370) Calling .DriverName
	I0717 01:43:34.512229   57364 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 01:43:34.512242   57364 main.go:141] libmachine: (calico-894370) Calling .GetState
	I0717 01:43:34.513563   57364 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 01:43:34.513582   57364 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 01:43:34.513590   57364 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 01:43:34.513599   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHHostname
	I0717 01:43:34.515797   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:34.516161   57364 main.go:141] libmachine: (calico-894370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:f0:3d", ip: ""} in network mk-calico-894370: {Iface:virbr3 ExpiryTime:2024-07-17 02:43:26 +0000 UTC Type:0 Mac:52:54:00:6d:f0:3d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:calico-894370 Clientid:01:52:54:00:6d:f0:3d}
	I0717 01:43:34.516186   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined IP address 192.168.39.194 and MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:34.516327   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHPort
	I0717 01:43:34.516513   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHKeyPath
	I0717 01:43:34.516676   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHKeyPath
	I0717 01:43:34.516823   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHUsername
	I0717 01:43:34.517001   57364 main.go:141] libmachine: Using SSH client type: native
	I0717 01:43:34.517185   57364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0717 01:43:34.517196   57364 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 01:43:34.625905   57364 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:43:34.625933   57364 main.go:141] libmachine: Detecting the provisioner...
	I0717 01:43:34.625944   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHHostname
	I0717 01:43:34.628856   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:34.629241   57364 main.go:141] libmachine: (calico-894370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:f0:3d", ip: ""} in network mk-calico-894370: {Iface:virbr3 ExpiryTime:2024-07-17 02:43:26 +0000 UTC Type:0 Mac:52:54:00:6d:f0:3d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:calico-894370 Clientid:01:52:54:00:6d:f0:3d}
	I0717 01:43:34.629270   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined IP address 192.168.39.194 and MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:34.629417   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHPort
	I0717 01:43:34.629576   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHKeyPath
	I0717 01:43:34.629767   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHKeyPath
	I0717 01:43:34.629909   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHUsername
	I0717 01:43:34.630053   57364 main.go:141] libmachine: Using SSH client type: native
	I0717 01:43:34.630206   57364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0717 01:43:34.630217   57364 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 01:43:34.735149   57364 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 01:43:34.735208   57364 main.go:141] libmachine: found compatible host: buildroot
	I0717 01:43:34.735214   57364 main.go:141] libmachine: Provisioning with buildroot...
	I0717 01:43:34.735221   57364 main.go:141] libmachine: (calico-894370) Calling .GetMachineName
	I0717 01:43:34.735466   57364 buildroot.go:166] provisioning hostname "calico-894370"
	I0717 01:43:34.735489   57364 main.go:141] libmachine: (calico-894370) Calling .GetMachineName
	I0717 01:43:34.735669   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHHostname
	I0717 01:43:34.738240   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:34.738648   57364 main.go:141] libmachine: (calico-894370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:f0:3d", ip: ""} in network mk-calico-894370: {Iface:virbr3 ExpiryTime:2024-07-17 02:43:26 +0000 UTC Type:0 Mac:52:54:00:6d:f0:3d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:calico-894370 Clientid:01:52:54:00:6d:f0:3d}
	I0717 01:43:34.738674   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined IP address 192.168.39.194 and MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:34.738860   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHPort
	I0717 01:43:34.739033   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHKeyPath
	I0717 01:43:34.739186   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHKeyPath
	I0717 01:43:34.739310   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHUsername
	I0717 01:43:34.739431   57364 main.go:141] libmachine: Using SSH client type: native
	I0717 01:43:34.739627   57364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0717 01:43:34.739641   57364 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-894370 && echo "calico-894370" | sudo tee /etc/hostname
	I0717 01:43:34.865127   57364 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-894370
	
	I0717 01:43:34.865168   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHHostname
	I0717 01:43:34.867738   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:34.868055   57364 main.go:141] libmachine: (calico-894370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:f0:3d", ip: ""} in network mk-calico-894370: {Iface:virbr3 ExpiryTime:2024-07-17 02:43:26 +0000 UTC Type:0 Mac:52:54:00:6d:f0:3d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:calico-894370 Clientid:01:52:54:00:6d:f0:3d}
	I0717 01:43:34.868082   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined IP address 192.168.39.194 and MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:34.868232   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHPort
	I0717 01:43:34.868426   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHKeyPath
	I0717 01:43:34.868597   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHKeyPath
	I0717 01:43:34.868696   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHUsername
	I0717 01:43:34.868859   57364 main.go:141] libmachine: Using SSH client type: native
	I0717 01:43:34.869037   57364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0717 01:43:34.869061   57364 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-894370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-894370/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-894370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:43:34.983965   57364 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:43:34.984005   57364 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:43:34.984038   57364 buildroot.go:174] setting up certificates
	I0717 01:43:34.984049   57364 provision.go:84] configureAuth start
	I0717 01:43:34.984065   57364 main.go:141] libmachine: (calico-894370) Calling .GetMachineName
	I0717 01:43:34.984342   57364 main.go:141] libmachine: (calico-894370) Calling .GetIP
	I0717 01:43:34.986902   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:34.987358   57364 main.go:141] libmachine: (calico-894370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:f0:3d", ip: ""} in network mk-calico-894370: {Iface:virbr3 ExpiryTime:2024-07-17 02:43:26 +0000 UTC Type:0 Mac:52:54:00:6d:f0:3d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:calico-894370 Clientid:01:52:54:00:6d:f0:3d}
	I0717 01:43:34.987386   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined IP address 192.168.39.194 and MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:34.987518   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHHostname
	I0717 01:43:34.989554   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:34.989836   57364 main.go:141] libmachine: (calico-894370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:f0:3d", ip: ""} in network mk-calico-894370: {Iface:virbr3 ExpiryTime:2024-07-17 02:43:26 +0000 UTC Type:0 Mac:52:54:00:6d:f0:3d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:calico-894370 Clientid:01:52:54:00:6d:f0:3d}
	I0717 01:43:34.989865   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined IP address 192.168.39.194 and MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:34.989998   57364 provision.go:143] copyHostCerts
	I0717 01:43:34.990068   57364 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:43:34.990077   57364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:43:34.990129   57364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:43:34.990221   57364 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:43:34.990229   57364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:43:34.990247   57364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:43:34.990316   57364 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:43:34.990323   57364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:43:34.990339   57364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:43:34.990393   57364 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.calico-894370 san=[127.0.0.1 192.168.39.194 calico-894370 localhost minikube]
	I0717 01:43:35.085708   57364 provision.go:177] copyRemoteCerts
	I0717 01:43:35.085760   57364 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:43:35.085782   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHHostname
	I0717 01:43:35.088271   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:35.088573   57364 main.go:141] libmachine: (calico-894370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:f0:3d", ip: ""} in network mk-calico-894370: {Iface:virbr3 ExpiryTime:2024-07-17 02:43:26 +0000 UTC Type:0 Mac:52:54:00:6d:f0:3d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:calico-894370 Clientid:01:52:54:00:6d:f0:3d}
	I0717 01:43:35.088608   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined IP address 192.168.39.194 and MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:35.088785   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHPort
	I0717 01:43:35.088932   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHKeyPath
	I0717 01:43:35.089084   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHUsername
	I0717 01:43:35.089177   57364 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/calico-894370/id_rsa Username:docker}
	I0717 01:43:35.173098   57364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:43:35.197831   57364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 01:43:35.221740   57364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 01:43:35.246525   57364 provision.go:87] duration metric: took 262.460853ms to configureAuth
	I0717 01:43:35.246570   57364 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:43:35.246782   57364 config.go:182] Loaded profile config "calico-894370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:43:35.246867   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHHostname
	I0717 01:43:35.249342   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:35.249698   57364 main.go:141] libmachine: (calico-894370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:f0:3d", ip: ""} in network mk-calico-894370: {Iface:virbr3 ExpiryTime:2024-07-17 02:43:26 +0000 UTC Type:0 Mac:52:54:00:6d:f0:3d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:calico-894370 Clientid:01:52:54:00:6d:f0:3d}
	I0717 01:43:35.249727   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined IP address 192.168.39.194 and MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:35.249865   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHPort
	I0717 01:43:35.250034   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHKeyPath
	I0717 01:43:35.250195   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHKeyPath
	I0717 01:43:35.250310   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHUsername
	I0717 01:43:35.250460   57364 main.go:141] libmachine: Using SSH client type: native
	I0717 01:43:35.250673   57364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0717 01:43:35.250695   57364 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:43:35.524054   57364 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:43:35.524076   57364 main.go:141] libmachine: Checking connection to Docker...
	I0717 01:43:35.524083   57364 main.go:141] libmachine: (calico-894370) Calling .GetURL
	I0717 01:43:35.525338   57364 main.go:141] libmachine: (calico-894370) DBG | Using libvirt version 6000000
	I0717 01:43:35.527800   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:35.528158   57364 main.go:141] libmachine: (calico-894370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:f0:3d", ip: ""} in network mk-calico-894370: {Iface:virbr3 ExpiryTime:2024-07-17 02:43:26 +0000 UTC Type:0 Mac:52:54:00:6d:f0:3d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:calico-894370 Clientid:01:52:54:00:6d:f0:3d}
	I0717 01:43:35.528190   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined IP address 192.168.39.194 and MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:35.528351   57364 main.go:141] libmachine: Docker is up and running!
	I0717 01:43:35.528364   57364 main.go:141] libmachine: Reticulating splines...
	I0717 01:43:35.528370   57364 client.go:171] duration metric: took 24.300690704s to LocalClient.Create
	I0717 01:43:35.528390   57364 start.go:167] duration metric: took 24.300754181s to libmachine.API.Create "calico-894370"
	I0717 01:43:35.528396   57364 start.go:293] postStartSetup for "calico-894370" (driver="kvm2")
	I0717 01:43:35.528405   57364 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:43:35.528420   57364 main.go:141] libmachine: (calico-894370) Calling .DriverName
	I0717 01:43:35.528647   57364 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:43:35.528668   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHHostname
	I0717 01:43:35.530775   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:35.531087   57364 main.go:141] libmachine: (calico-894370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:f0:3d", ip: ""} in network mk-calico-894370: {Iface:virbr3 ExpiryTime:2024-07-17 02:43:26 +0000 UTC Type:0 Mac:52:54:00:6d:f0:3d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:calico-894370 Clientid:01:52:54:00:6d:f0:3d}
	I0717 01:43:35.531112   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined IP address 192.168.39.194 and MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:35.531189   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHPort
	I0717 01:43:35.531360   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHKeyPath
	I0717 01:43:35.531506   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHUsername
	I0717 01:43:35.531659   57364 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/calico-894370/id_rsa Username:docker}
	I0717 01:43:35.613407   57364 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:43:35.618543   57364 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:43:35.618585   57364 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:43:35.618644   57364 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:43:35.618715   57364 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:43:35.618799   57364 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:43:35.628674   57364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:43:35.655036   57364 start.go:296] duration metric: took 126.628657ms for postStartSetup
	I0717 01:43:35.655082   57364 main.go:141] libmachine: (calico-894370) Calling .GetConfigRaw
	I0717 01:43:35.655630   57364 main.go:141] libmachine: (calico-894370) Calling .GetIP
	I0717 01:43:35.658085   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:35.658391   57364 main.go:141] libmachine: (calico-894370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:f0:3d", ip: ""} in network mk-calico-894370: {Iface:virbr3 ExpiryTime:2024-07-17 02:43:26 +0000 UTC Type:0 Mac:52:54:00:6d:f0:3d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:calico-894370 Clientid:01:52:54:00:6d:f0:3d}
	I0717 01:43:35.658409   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined IP address 192.168.39.194 and MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:35.658674   57364 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/config.json ...
	I0717 01:43:35.658869   57364 start.go:128] duration metric: took 24.454185673s to createHost
	I0717 01:43:35.658896   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHHostname
	I0717 01:43:35.661054   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:35.661458   57364 main.go:141] libmachine: (calico-894370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:f0:3d", ip: ""} in network mk-calico-894370: {Iface:virbr3 ExpiryTime:2024-07-17 02:43:26 +0000 UTC Type:0 Mac:52:54:00:6d:f0:3d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:calico-894370 Clientid:01:52:54:00:6d:f0:3d}
	I0717 01:43:35.661497   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined IP address 192.168.39.194 and MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:35.661626   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHPort
	I0717 01:43:35.661799   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHKeyPath
	I0717 01:43:35.661963   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHKeyPath
	I0717 01:43:35.662092   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHUsername
	I0717 01:43:35.662286   57364 main.go:141] libmachine: Using SSH client type: native
	I0717 01:43:35.662475   57364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0717 01:43:35.662500   57364 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:43:35.771461   57364 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721180615.745455814
	
	I0717 01:43:35.771484   57364 fix.go:216] guest clock: 1721180615.745455814
	I0717 01:43:35.771493   57364 fix.go:229] Guest: 2024-07-17 01:43:35.745455814 +0000 UTC Remote: 2024-07-17 01:43:35.6588826 +0000 UTC m=+24.593227017 (delta=86.573214ms)
	I0717 01:43:35.771512   57364 fix.go:200] guest clock delta is within tolerance: 86.573214ms
	I0717 01:43:35.771531   57364 start.go:83] releasing machines lock for "calico-894370", held for 24.566924796s
	I0717 01:43:35.771556   57364 main.go:141] libmachine: (calico-894370) Calling .DriverName
	I0717 01:43:35.771830   57364 main.go:141] libmachine: (calico-894370) Calling .GetIP
	I0717 01:43:35.774319   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:35.774637   57364 main.go:141] libmachine: (calico-894370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:f0:3d", ip: ""} in network mk-calico-894370: {Iface:virbr3 ExpiryTime:2024-07-17 02:43:26 +0000 UTC Type:0 Mac:52:54:00:6d:f0:3d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:calico-894370 Clientid:01:52:54:00:6d:f0:3d}
	I0717 01:43:35.774660   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined IP address 192.168.39.194 and MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:35.774883   57364 main.go:141] libmachine: (calico-894370) Calling .DriverName
	I0717 01:43:35.775336   57364 main.go:141] libmachine: (calico-894370) Calling .DriverName
	I0717 01:43:35.775509   57364 main.go:141] libmachine: (calico-894370) Calling .DriverName
	I0717 01:43:35.775602   57364 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:43:35.775646   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHHostname
	I0717 01:43:35.775740   57364 ssh_runner.go:195] Run: cat /version.json
	I0717 01:43:35.775765   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHHostname
	I0717 01:43:35.778227   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:35.778372   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:35.778626   57364 main.go:141] libmachine: (calico-894370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:f0:3d", ip: ""} in network mk-calico-894370: {Iface:virbr3 ExpiryTime:2024-07-17 02:43:26 +0000 UTC Type:0 Mac:52:54:00:6d:f0:3d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:calico-894370 Clientid:01:52:54:00:6d:f0:3d}
	I0717 01:43:35.778654   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined IP address 192.168.39.194 and MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:35.778682   57364 main.go:141] libmachine: (calico-894370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:f0:3d", ip: ""} in network mk-calico-894370: {Iface:virbr3 ExpiryTime:2024-07-17 02:43:26 +0000 UTC Type:0 Mac:52:54:00:6d:f0:3d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:calico-894370 Clientid:01:52:54:00:6d:f0:3d}
	I0717 01:43:35.778697   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined IP address 192.168.39.194 and MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:35.778802   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHPort
	I0717 01:43:35.778932   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHPort
	I0717 01:43:35.779034   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHKeyPath
	I0717 01:43:35.779108   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHKeyPath
	I0717 01:43:35.779176   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHUsername
	I0717 01:43:35.779248   57364 main.go:141] libmachine: (calico-894370) Calling .GetSSHUsername
	I0717 01:43:35.779316   57364 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/calico-894370/id_rsa Username:docker}
	I0717 01:43:35.779357   57364 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/calico-894370/id_rsa Username:docker}
	I0717 01:43:35.893288   57364 ssh_runner.go:195] Run: systemctl --version
	I0717 01:43:35.899059   57364 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:43:36.063274   57364 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:43:36.070902   57364 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:43:36.070979   57364 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:43:36.088002   57364 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:43:36.088023   57364 start.go:495] detecting cgroup driver to use...
	I0717 01:43:36.088089   57364 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:43:36.106682   57364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:43:35.794635   59210 machine.go:94] provisionDockerMachine start ...
	I0717 01:43:35.794654   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .DriverName
	I0717 01:43:35.794838   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHHostname
	I0717 01:43:35.797060   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:35.797462   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:42:58 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:43:35.797493   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:35.797606   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHPort
	I0717 01:43:35.797764   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:43:35.797888   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:43:35.798053   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHUsername
	I0717 01:43:35.798216   59210 main.go:141] libmachine: Using SSH client type: native
	I0717 01:43:35.798397   59210 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.73 22 <nil> <nil>}
	I0717 01:43:35.798410   59210 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:43:35.918774   59210 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-572332
	
	I0717 01:43:35.918803   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetMachineName
	I0717 01:43:35.919049   59210 buildroot.go:166] provisioning hostname "kubernetes-upgrade-572332"
	I0717 01:43:35.919068   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetMachineName
	I0717 01:43:35.919257   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHHostname
	I0717 01:43:35.921779   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:35.922123   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:42:58 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:43:35.922153   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:35.922234   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHPort
	I0717 01:43:35.922402   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:43:35.922611   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:43:35.922789   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHUsername
	I0717 01:43:35.922963   59210 main.go:141] libmachine: Using SSH client type: native
	I0717 01:43:35.923172   59210 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.73 22 <nil> <nil>}
	I0717 01:43:35.923187   59210 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-572332 && echo "kubernetes-upgrade-572332" | sudo tee /etc/hostname
	I0717 01:43:36.056635   59210 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-572332
	
	I0717 01:43:36.056661   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHHostname
	I0717 01:43:36.059112   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:36.059450   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:42:58 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:43:36.059490   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:36.059630   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHPort
	I0717 01:43:36.059812   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:43:36.059990   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:43:36.060138   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHUsername
	I0717 01:43:36.060303   59210 main.go:141] libmachine: Using SSH client type: native
	I0717 01:43:36.060481   59210 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.73 22 <nil> <nil>}
	I0717 01:43:36.060498   59210 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-572332' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-572332/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-572332' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:43:36.175494   59210 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:43:36.175522   59210 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:43:36.175542   59210 buildroot.go:174] setting up certificates
	I0717 01:43:36.175553   59210 provision.go:84] configureAuth start
	I0717 01:43:36.175564   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetMachineName
	I0717 01:43:36.175819   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetIP
	I0717 01:43:36.178284   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:36.178666   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:42:58 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:43:36.178692   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:36.178843   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHHostname
	I0717 01:43:36.181061   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:36.181430   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:42:58 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:43:36.181463   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:36.181542   59210 provision.go:143] copyHostCerts
	I0717 01:43:36.181621   59210 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:43:36.181633   59210 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:43:36.181688   59210 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:43:36.181813   59210 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:43:36.181822   59210 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:43:36.181845   59210 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:43:36.181918   59210 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:43:36.181926   59210 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:43:36.181948   59210 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:43:36.182004   59210 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-572332 san=[127.0.0.1 192.168.72.73 kubernetes-upgrade-572332 localhost minikube]
	I0717 01:43:36.227671   59210 provision.go:177] copyRemoteCerts
	I0717 01:43:36.227719   59210 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:43:36.227743   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHHostname
	I0717 01:43:36.230767   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:36.231199   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:42:58 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:43:36.231248   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:36.231372   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHPort
	I0717 01:43:36.231566   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:43:36.231741   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHUsername
	I0717 01:43:36.231867   59210 sshutil.go:53] new ssh client: &{IP:192.168.72.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/kubernetes-upgrade-572332/id_rsa Username:docker}
	I0717 01:43:36.330439   59210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:43:36.357664   59210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 01:43:36.383704   59210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 01:43:36.411316   59210 provision.go:87] duration metric: took 235.750281ms to configureAuth
	I0717 01:43:36.411349   59210 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:43:36.411515   59210 config.go:182] Loaded profile config "kubernetes-upgrade-572332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:43:36.411599   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHHostname
	I0717 01:43:36.414121   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:36.414495   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:42:58 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:43:36.414525   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:36.414702   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHPort
	I0717 01:43:36.414899   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:43:36.415053   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:43:36.415205   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHUsername
	I0717 01:43:36.415366   59210 main.go:141] libmachine: Using SSH client type: native
	I0717 01:43:36.415531   59210 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.73 22 <nil> <nil>}
	I0717 01:43:36.415546   59210 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:43:36.120536   57364 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:43:36.120594   57364 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:43:36.135458   57364 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:43:36.149356   57364 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:43:36.271013   57364 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:43:36.429046   57364 docker.go:233] disabling docker service ...
	I0717 01:43:36.429117   57364 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:43:36.451419   57364 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:43:36.464954   57364 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:43:36.621312   57364 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:43:36.745934   57364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:43:36.759335   57364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:43:36.777269   57364 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 01:43:36.777350   57364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:43:36.787546   57364 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:43:36.787647   57364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:43:36.797644   57364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:43:36.807741   57364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:43:36.819488   57364 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:43:36.830235   57364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:43:36.840133   57364 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:43:36.857575   57364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:43:36.867803   57364 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:43:36.876738   57364 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:43:36.876785   57364 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:43:36.888816   57364 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:43:36.897840   57364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:43:37.017445   57364 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:43:37.153391   57364 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:43:37.153451   57364 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:43:37.158477   57364 start.go:563] Will wait 60s for crictl version
	I0717 01:43:37.158530   57364 ssh_runner.go:195] Run: which crictl
	I0717 01:43:37.162235   57364 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:43:37.201424   57364 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:43:37.201482   57364 ssh_runner.go:195] Run: crio --version
	I0717 01:43:37.229492   57364 ssh_runner.go:195] Run: crio --version
	I0717 01:43:37.258919   57364 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 01:43:34.176405   60250 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:43:34.176438   60250 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 01:43:34.176447   60250 cache.go:56] Caching tarball of preloaded images
	I0717 01:43:34.176532   60250 preload.go:172] Found /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 01:43:34.176543   60250 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 01:43:34.176634   60250 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/enable-default-cni-894370/config.json ...
	I0717 01:43:34.176656   60250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/enable-default-cni-894370/config.json: {Name:mk01ffa97fbb4dc59b6c8134b9b8b69cfeac8e13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:43:34.176815   60250 start.go:360] acquireMachinesLock for enable-default-cni-894370: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:43:37.260191   57364 main.go:141] libmachine: (calico-894370) Calling .GetIP
	I0717 01:43:37.262447   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:37.262863   57364 main.go:141] libmachine: (calico-894370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:f0:3d", ip: ""} in network mk-calico-894370: {Iface:virbr3 ExpiryTime:2024-07-17 02:43:26 +0000 UTC Type:0 Mac:52:54:00:6d:f0:3d Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:calico-894370 Clientid:01:52:54:00:6d:f0:3d}
	I0717 01:43:37.262893   57364 main.go:141] libmachine: (calico-894370) DBG | domain calico-894370 has defined IP address 192.168.39.194 and MAC address 52:54:00:6d:f0:3d in network mk-calico-894370
	I0717 01:43:37.263098   57364 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 01:43:37.267183   57364 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:43:37.279324   57364 kubeadm.go:883] updating cluster {Name:calico-894370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:calico-894370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:43:37.279425   57364 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:43:37.279470   57364 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:43:37.314244   57364 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 01:43:37.314322   57364 ssh_runner.go:195] Run: which lz4
	I0717 01:43:37.318939   57364 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 01:43:37.322974   57364 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:43:37.323004   57364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 01:43:38.662350   57364 crio.go:462] duration metric: took 1.343442978s to copy over tarball
	I0717 01:43:38.662436   57364 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:43:40.828650   57364 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.166164555s)
	I0717 01:43:40.828698   57364 crio.go:469] duration metric: took 2.166312206s to extract the tarball
	I0717 01:43:40.828708   57364 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 01:43:40.875612   57364 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:43:40.917272   57364 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:43:40.917293   57364 cache_images.go:84] Images are preloaded, skipping loading
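
The preload flow above is: stat the tarball on the guest, copy it over when the stat fails, extract it under /var with tar and lz4, remove the tarball, then re-list images to confirm everything is preloaded. A rough, hypothetical Go sketch of the extract-and-clean-up tail of that flow (local paths only, no SSH transfer; the tar arguments are the ones shown in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

const preload = "/preloaded.tar.lz4"

func main() {
	// Mirror the existence check: in the real run, a failed stat triggers the
	// copy of the cached tarball before extraction.
	if _, err := os.Stat(preload); err != nil {
		fmt.Println("preload tarball missing; it would be copied over at this point")
		return
	}
	// Extract with the same arguments the log shows, then remove the tarball.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", preload)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Fprintf(os.Stderr, "extract: %v: %s\n", err, out)
		os.Exit(1)
	}
	_ = os.Remove(preload) // best-effort, like "rm: /preloaded.tar.lz4"
}
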
	I0717 01:43:40.917303   57364 kubeadm.go:934] updating node { 192.168.39.194 8443 v1.30.2 crio true true} ...
	I0717 01:43:40.917405   57364 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-894370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:calico-894370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0717 01:43:40.917466   57364 ssh_runner.go:195] Run: crio config
	I0717 01:43:40.965144   57364 cni.go:84] Creating CNI manager for "calico"
	I0717 01:43:40.965165   57364 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:43:40.965184   57364 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.194 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-894370 NodeName:calico-894370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:43:40.965317   57364 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-894370"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.194
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.194"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
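
The kubeadm manifest above is rendered from the option struct logged at kubeadm.go:181 (advertise address, pod and service CIDRs, node name, Kubernetes version, and so on). A compressed, hypothetical illustration of that kind of templating with Go's text/template follows; the params struct, its field names, and the truncated template are invented for the sketch and cover only a fraction of the real config:

package main

import (
	"os"
	"text/template"
)

// params is a made-up subset of the values logged at kubeadm.go:181.
type params struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
	K8sVersion       string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values taken from the calico-894370 config above.
	_ = t.Execute(os.Stdout, params{
		AdvertiseAddress: "192.168.39.194",
		BindPort:         8443,
		NodeName:         "calico-894370",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
		K8sVersion:       "v1.30.2",
	})
}
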
	I0717 01:43:40.965367   57364 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 01:43:40.975136   57364 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:43:40.975196   57364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:43:40.984160   57364 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0717 01:43:41.000723   57364 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:43:41.017120   57364 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0717 01:43:41.032882   57364 ssh_runner.go:195] Run: grep 192.168.39.194	control-plane.minikube.internal$ /etc/hosts
	I0717 01:43:41.036687   57364 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.194	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:43:41.047755   57364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:43:41.169612   57364 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:43:41.186851   57364 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370 for IP: 192.168.39.194
	I0717 01:43:41.186876   57364 certs.go:194] generating shared ca certs ...
	I0717 01:43:41.186894   57364 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:43:41.187048   57364 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:43:41.187101   57364 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:43:41.187112   57364 certs.go:256] generating profile certs ...
	I0717 01:43:41.187182   57364 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/client.key
	I0717 01:43:41.187196   57364 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/client.crt with IP's: []
	I0717 01:43:41.369109   57364 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/client.crt ...
	I0717 01:43:41.369138   57364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/client.crt: {Name:mkb635b92ff7922b3130f55defa14909378a7ebc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:43:41.369303   57364 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/client.key ...
	I0717 01:43:41.369316   57364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/client.key: {Name:mk549f45f9bac4776fcfdae6e4214e3aeb50fd35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:43:41.369398   57364 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/apiserver.key.46d39f46
	I0717 01:43:41.369415   57364 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/apiserver.crt.46d39f46 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.194]
	I0717 01:43:41.455346   57364 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/apiserver.crt.46d39f46 ...
	I0717 01:43:41.455375   57364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/apiserver.crt.46d39f46: {Name:mkee777fecd11ea906b89eaa4bf57bcb948b672c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:43:41.455520   57364 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/apiserver.key.46d39f46 ...
	I0717 01:43:41.455536   57364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/apiserver.key.46d39f46: {Name:mkbf055ad865ccadb07e347036a66a90c4631b54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:43:41.455608   57364 certs.go:381] copying /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/apiserver.crt.46d39f46 -> /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/apiserver.crt
	I0717 01:43:41.455790   57364 certs.go:385] copying /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/apiserver.key.46d39f46 -> /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/apiserver.key
	I0717 01:43:41.455913   57364 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/proxy-client.key
	I0717 01:43:41.455935   57364 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/proxy-client.crt with IP's: []
	I0717 01:43:41.616260   57364 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/proxy-client.crt ...
	I0717 01:43:41.616286   57364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/proxy-client.crt: {Name:mk9cecc26b3df2c033eed5e79ce1016d7f8b010e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:43:41.616446   57364 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/proxy-client.key ...
	I0717 01:43:41.616463   57364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/proxy-client.key: {Name:mk189f7b7f6eba246ad00dfda557a61e34fd3488 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
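
Each "generating signed profile cert" step above amounts to: load the shared CA key pair, create a fresh leaf key, and sign an x509 certificate for it with the appropriate subject and usages. A heavily simplified, hypothetical Go sketch of that flow follows; it is self-contained, so it creates a throwaway CA instead of reading the cached minikubeCA files, it is not minikube's crypto.go, and error handling is elided:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

func main() {
	// Throwaway CA standing in for the cached minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Client certificate for "minikube-user", signed by the CA above.
	clientKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	clientTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube-user"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	clientDER, _ := x509.CreateCertificate(rand.Reader, clientTmpl, caCert, &clientKey.PublicKey, caKey)
	fmt.Printf("issued client cert for minikube-user, %d DER bytes\n", len(clientDER))
}
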
	I0717 01:43:41.616662   57364 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:43:41.616717   57364 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:43:41.616730   57364 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:43:41.616764   57364 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:43:41.616791   57364 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:43:41.616824   57364 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:43:41.616876   57364 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:43:41.617453   57364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:43:41.642543   57364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:43:41.669746   57364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:43:41.695887   57364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:43:41.723021   57364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0717 01:43:41.749071   57364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 01:43:41.773747   57364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:43:41.801629   57364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 01:43:41.825573   57364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:43:41.855518   57364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:43:41.890495   57364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:43:41.925740   57364 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:43:41.951150   57364 ssh_runner.go:195] Run: openssl version
	I0717 01:43:41.959180   57364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:43:41.972515   57364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:43:41.977642   57364 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:43:41.977705   57364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:43:41.983539   57364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:43:41.994114   57364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:43:42.005551   57364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:43:42.010330   57364 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:43:42.010387   57364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:43:42.016281   57364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:43:42.026742   57364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:43:42.037134   57364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:43:42.042842   57364 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:43:42.042903   57364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:43:42.050336   57364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
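
The test -L / ln -fs pairs above install each CA bundle under /etc/ssl/certs twice: once by name and once under its OpenSSL subject-hash filename (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL-based clients locate trust anchors. A small, hypothetical Go sketch of deriving one such hash link; it shells out to the same openssl x509 -hash -noout command the log uses:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// hashLink creates /etc/ssl/certs/<subject-hash>.0 pointing at pemPath,
// mirroring the "openssl x509 -hash -noout -in ..." + "ln -fs" pair above.
func hashLink(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("openssl x509 -hash: %w", err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // emulate ln -f
	return os.Symlink(pemPath, link)
}

func main() {
	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
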
	I0717 01:43:42.061955   57364 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:43:42.066233   57364 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 01:43:42.066282   57364 kubeadm.go:392] StartCluster: {Name:calico-894370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 C
lusterName:calico-894370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:43:42.066360   57364 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:43:42.066396   57364 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:43:42.103119   57364 cri.go:89] found id: ""
	I0717 01:43:42.103184   57364 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:43:42.112841   57364 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:43:42.124543   57364 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:43:42.136322   57364 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:43:42.136343   57364 kubeadm.go:157] found existing configuration files:
	
	I0717 01:43:42.136389   57364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:43:42.148769   57364 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:43:42.148831   57364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:43:42.158266   57364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:43:42.167148   57364 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:43:42.167196   57364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:43:42.176107   57364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:43:42.184684   57364 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:43:42.184734   57364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:43:42.194227   57364 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:43:42.203115   57364 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:43:42.203165   57364 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:43:42.213326   57364 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 01:43:42.273326   57364 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 01:43:42.273419   57364 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 01:43:42.442178   57364 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 01:43:42.442342   57364 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 01:43:42.442505   57364 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 01:43:42.688851   57364 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 01:43:43.075403   59269 start.go:364] duration metric: took 16.175363781s to acquireMachinesLock for "custom-flannel-894370"
	I0717 01:43:43.075482   59269 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-894370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.2 ClusterName:custom-flannel-894370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:43:43.075609   59269 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 01:43:42.779695   57364 out.go:204]   - Generating certificates and keys ...
	I0717 01:43:42.779831   57364 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 01:43:42.779937   57364 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 01:43:42.851415   57364 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 01:43:43.002147   57364 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 01:43:43.115360   57364 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 01:43:43.200950   57364 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 01:43:43.385807   57364 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 01:43:43.386199   57364 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-894370 localhost] and IPs [192.168.39.194 127.0.0.1 ::1]
	I0717 01:43:43.756042   57364 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 01:43:43.756376   57364 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-894370 localhost] and IPs [192.168.39.194 127.0.0.1 ::1]
	I0717 01:43:43.934800   57364 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 01:43:43.995776   57364 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 01:43:44.109329   57364 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 01:43:44.109736   57364 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 01:43:44.292596   57364 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 01:43:44.420551   57364 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 01:43:44.580527   57364 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 01:43:44.651292   57364 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 01:43:44.882118   57364 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 01:43:44.882930   57364 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 01:43:44.885398   57364 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 01:43:42.814038   59210 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:43:42.814066   59210 machine.go:97] duration metric: took 7.019415672s to provisionDockerMachine
	I0717 01:43:42.814079   59210 start.go:293] postStartSetup for "kubernetes-upgrade-572332" (driver="kvm2")
	I0717 01:43:42.814095   59210 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:43:42.814118   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .DriverName
	I0717 01:43:42.814456   59210 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:43:42.814503   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHHostname
	I0717 01:43:42.817281   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:42.817683   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:42:58 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:43:42.817713   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:42.817843   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHPort
	I0717 01:43:42.818017   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:43:42.818166   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHUsername
	I0717 01:43:42.818339   59210 sshutil.go:53] new ssh client: &{IP:192.168.72.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/kubernetes-upgrade-572332/id_rsa Username:docker}
	I0717 01:43:42.909741   59210 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:43:42.914055   59210 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:43:42.914080   59210 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:43:42.914149   59210 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:43:42.914254   59210 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:43:42.914382   59210 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:43:42.927133   59210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:43:42.954755   59210 start.go:296] duration metric: took 140.663298ms for postStartSetup
	I0717 01:43:42.954810   59210 fix.go:56] duration metric: took 7.183125096s for fixHost
	I0717 01:43:42.954831   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHHostname
	I0717 01:43:42.957229   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:42.957552   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:42:58 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:43:42.957576   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:42.957712   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHPort
	I0717 01:43:42.957884   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:43:42.958030   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:43:42.958182   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHUsername
	I0717 01:43:42.958352   59210 main.go:141] libmachine: Using SSH client type: native
	I0717 01:43:42.958502   59210 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.73 22 <nil> <nil>}
	I0717 01:43:42.958512   59210 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 01:43:43.075246   59210 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721180623.054883301
	
	I0717 01:43:43.075273   59210 fix.go:216] guest clock: 1721180623.054883301
	I0717 01:43:43.075283   59210 fix.go:229] Guest: 2024-07-17 01:43:43.054883301 +0000 UTC Remote: 2024-07-17 01:43:42.954815636 +0000 UTC m=+16.389609599 (delta=100.067665ms)
	I0717 01:43:43.075307   59210 fix.go:200] guest clock delta is within tolerance: 100.067665ms
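
The fix.go lines above read the guest clock with date +%s.%N, compare it to the host clock, and accept the result when the delta stays inside a tolerance (here a delta of roughly 100ms was fine). A tiny, hypothetical Go sketch of that comparison; the 2s tolerance below is an assumed value, not minikube's actual threshold:

package main

import (
	"fmt"
	"strconv"
	"time"
)

// withinTolerance parses the guest's "date +%s.%N" output and reports how far
// it is from the given host time and whether that fits inside tol.
// (float64 parsing loses sub-microsecond precision, which is fine for a sketch.)
func withinTolerance(guestOut string, host time.Time, tol time.Duration) (time.Duration, bool) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, false
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	// Guest and host readings taken from the log lines above.
	host := time.Unix(0, int64(1721180622.954815636*float64(time.Second)))
	delta, ok := withinTolerance("1721180623.054883301", host, 2*time.Second) // 2s tolerance is assumed
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}
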
	I0717 01:43:43.075315   59210 start.go:83] releasing machines lock for "kubernetes-upgrade-572332", held for 7.303686554s
	I0717 01:43:43.075344   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .DriverName
	I0717 01:43:43.075638   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetIP
	I0717 01:43:43.078682   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:43.079063   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:42:58 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:43:43.079089   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:43.079258   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .DriverName
	I0717 01:43:43.079780   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .DriverName
	I0717 01:43:43.079977   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .DriverName
	I0717 01:43:43.080068   59210 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:43:43.080118   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHHostname
	I0717 01:43:43.080229   59210 ssh_runner.go:195] Run: cat /version.json
	I0717 01:43:43.080263   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHHostname
	I0717 01:43:43.083106   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:43.083412   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:43.083485   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:42:58 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:43:43.083510   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:43.083674   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHPort
	I0717 01:43:43.083905   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:43:43.083938   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:42:58 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:43:43.083962   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:43.084079   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHUsername
	I0717 01:43:43.084166   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHPort
	I0717 01:43:43.084241   59210 sshutil.go:53] new ssh client: &{IP:192.168.72.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/kubernetes-upgrade-572332/id_rsa Username:docker}
	I0717 01:43:43.084313   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:43:43.084458   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHUsername
	I0717 01:43:43.084585   59210 sshutil.go:53] new ssh client: &{IP:192.168.72.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/kubernetes-upgrade-572332/id_rsa Username:docker}
	I0717 01:43:43.172008   59210 ssh_runner.go:195] Run: systemctl --version
	I0717 01:43:43.203013   59210 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:43:43.358925   59210 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:43:43.369660   59210 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:43:43.369721   59210 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:43:43.379672   59210 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 01:43:43.379696   59210 start.go:495] detecting cgroup driver to use...
	I0717 01:43:43.379763   59210 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:43:43.397185   59210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:43:43.412127   59210 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:43:43.412186   59210 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:43:43.427005   59210 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:43:43.441893   59210 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:43:43.590004   59210 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:43:43.773634   59210 docker.go:233] disabling docker service ...
	I0717 01:43:43.773690   59210 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:43:43.798976   59210 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:43:43.814448   59210 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:43:43.995963   59210 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:43:44.180723   59210 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:43:44.217449   59210 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:43:44.251538   59210 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 01:43:44.251610   59210 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:43:44.283116   59210 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:43:44.283164   59210 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:43:44.296970   59210 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:43:44.307843   59210 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:43:44.318851   59210 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:43:44.330008   59210 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:43:44.369900   59210 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:43:44.395587   59210 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:43:44.407170   59210 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:43:44.435887   59210 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:43:44.571204   59210 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:43:44.956677   59210 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:43:45.537942   59210 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:43:45.538011   59210 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:43:45.543904   59210 start.go:563] Will wait 60s for crictl version
	I0717 01:43:45.543987   59210 ssh_runner.go:195] Run: which crictl
	I0717 01:43:45.549012   59210 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:43:45.591602   59210 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:43:45.591716   59210 ssh_runner.go:195] Run: crio --version
	I0717 01:43:45.622757   59210 ssh_runner.go:195] Run: crio --version
	I0717 01:43:45.656909   59210 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0717 01:43:44.962459   57364 out.go:204]   - Booting up control plane ...
	I0717 01:43:44.962642   57364 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 01:43:44.962762   57364 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 01:43:44.962856   57364 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 01:43:44.963009   57364 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 01:43:44.963124   57364 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 01:43:44.963185   57364 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 01:43:45.079585   57364 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 01:43:45.079708   57364 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 01:43:46.080490   57364 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001737898s
	I0717 01:43:46.080593   57364 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 01:43:45.658095   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetIP
	I0717 01:43:45.661009   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:45.661390   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:42:58 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:43:45.661426   59210 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:45.661652   59210 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 01:43:45.666269   59210 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-572332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-572332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.73 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:43:45.666398   59210 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 01:43:45.666452   59210 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:43:45.715558   59210 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:43:45.715584   59210 crio.go:433] Images already preloaded, skipping extraction
	I0717 01:43:45.715638   59210 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:43:45.753876   59210 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:43:45.753898   59210 cache_images.go:84] Images are preloaded, skipping loading
	I0717 01:43:45.753911   59210 kubeadm.go:934] updating node { 192.168.72.73 8443 v1.31.0-beta.0 crio true true} ...
	I0717 01:43:45.754015   59210 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-572332 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.73
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-572332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:43:45.754081   59210 ssh_runner.go:195] Run: crio config
	I0717 01:43:45.805577   59210 cni.go:84] Creating CNI manager for ""
	I0717 01:43:45.805599   59210 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:43:45.805614   59210 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:43:45.805639   59210 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.73 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-572332 NodeName:kubernetes-upgrade-572332 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.73"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.73 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs
/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:43:45.805799   59210 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.73
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-572332"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.73
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.73"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:43:45.805865   59210 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 01:43:45.822469   59210 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:43:45.822540   59210 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:43:45.832037   59210 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (331 bytes)
	I0717 01:43:45.850113   59210 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 01:43:45.868237   59210 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2173 bytes)
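The kubeadm.yaml.new pushed to the node above is a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch, assuming the gopkg.in/yaml.v3 package and a hypothetical local copy of the file, that decodes each document and reports its kind:

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the generated file
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            // Each document declares its own kind: InitConfiguration,
            // ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration.
            fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
        }
    }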
	I0717 01:43:45.889361   59210 ssh_runner.go:195] Run: grep 192.168.72.73	control-plane.minikube.internal$ /etc/hosts
	I0717 01:43:45.894711   59210 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:43:46.079547   59210 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:43:46.146025   59210 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332 for IP: 192.168.72.73
	I0717 01:43:46.146048   59210 certs.go:194] generating shared ca certs ...
	I0717 01:43:46.146068   59210 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:43:46.146243   59210 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:43:46.146299   59210 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:43:46.146313   59210 certs.go:256] generating profile certs ...
	I0717 01:43:46.146487   59210 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/client.key
	I0717 01:43:46.146575   59210 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/apiserver.key.e1dd5c49
	I0717 01:43:46.146632   59210 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/proxy-client.key
	I0717 01:43:46.146780   59210 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:43:46.146826   59210 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:43:46.146839   59210 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:43:46.146872   59210 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:43:46.146908   59210 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:43:46.146946   59210 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:43:46.147016   59210 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:43:46.147863   59210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:43:46.274398   59210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:43:46.422160   59210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:43:46.538987   59210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:43:43.118084   59269 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0717 01:43:43.118328   59269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:43:43.118371   59269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:43:43.133109   59269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41781
	I0717 01:43:43.133513   59269 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:43:43.134023   59269 main.go:141] libmachine: Using API Version  1
	I0717 01:43:43.134057   59269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:43:43.134348   59269 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:43:43.134495   59269 main.go:141] libmachine: (custom-flannel-894370) Calling .GetMachineName
	I0717 01:43:43.134665   59269 main.go:141] libmachine: (custom-flannel-894370) Calling .DriverName
	I0717 01:43:43.134785   59269 start.go:159] libmachine.API.Create for "custom-flannel-894370" (driver="kvm2")
	I0717 01:43:43.134815   59269 client.go:168] LocalClient.Create starting
	I0717 01:43:43.134848   59269 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem
	I0717 01:43:43.134881   59269 main.go:141] libmachine: Decoding PEM data...
	I0717 01:43:43.134908   59269 main.go:141] libmachine: Parsing certificate...
	I0717 01:43:43.134967   59269 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem
	I0717 01:43:43.135001   59269 main.go:141] libmachine: Decoding PEM data...
	I0717 01:43:43.135017   59269 main.go:141] libmachine: Parsing certificate...
	I0717 01:43:43.135051   59269 main.go:141] libmachine: Running pre-create checks...
	I0717 01:43:43.135063   59269 main.go:141] libmachine: (custom-flannel-894370) Calling .PreCreateCheck
	I0717 01:43:43.135473   59269 main.go:141] libmachine: (custom-flannel-894370) Calling .GetConfigRaw
	I0717 01:43:43.135822   59269 main.go:141] libmachine: Creating machine...
	I0717 01:43:43.135835   59269 main.go:141] libmachine: (custom-flannel-894370) Calling .Create
	I0717 01:43:43.135955   59269 main.go:141] libmachine: (custom-flannel-894370) Creating KVM machine...
	I0717 01:43:43.136985   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | found existing default KVM network
	I0717 01:43:43.137968   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | I0717 01:43:43.137832   60339 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:54:6a:ab} reservation:<nil>}
	I0717 01:43:43.138798   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | I0717 01:43:43.138722   60339 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015f50}
	I0717 01:43:43.138815   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | created network xml: 
	I0717 01:43:43.138825   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | <network>
	I0717 01:43:43.138831   59269 main.go:141] libmachine: (custom-flannel-894370) DBG |   <name>mk-custom-flannel-894370</name>
	I0717 01:43:43.138836   59269 main.go:141] libmachine: (custom-flannel-894370) DBG |   <dns enable='no'/>
	I0717 01:43:43.138843   59269 main.go:141] libmachine: (custom-flannel-894370) DBG |   
	I0717 01:43:43.138853   59269 main.go:141] libmachine: (custom-flannel-894370) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0717 01:43:43.138865   59269 main.go:141] libmachine: (custom-flannel-894370) DBG |     <dhcp>
	I0717 01:43:43.138883   59269 main.go:141] libmachine: (custom-flannel-894370) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0717 01:43:43.138898   59269 main.go:141] libmachine: (custom-flannel-894370) DBG |     </dhcp>
	I0717 01:43:43.138909   59269 main.go:141] libmachine: (custom-flannel-894370) DBG |   </ip>
	I0717 01:43:43.138919   59269 main.go:141] libmachine: (custom-flannel-894370) DBG |   
	I0717 01:43:43.138925   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | </network>
	I0717 01:43:43.138932   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | 
	I0717 01:43:43.252157   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | trying to create private KVM network mk-custom-flannel-894370 192.168.50.0/24...
	I0717 01:43:43.325045   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | private KVM network mk-custom-flannel-894370 192.168.50.0/24 created
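The subnet selection above skips 192.168.39.0/24 because it is already bound to a host interface (virbr3) and settles on the free 192.168.50.0/24. A rough sketch of one way to make that "is this subnet taken" check with the standard net package (this is illustrative, not minikube's network.go):

    package main

    import (
        "fmt"
        "net"
    )

    // subnetTaken reports whether any local interface already has an address
    // inside the given CIDR, which is roughly why a candidate private subnet
    // gets skipped in the log above.
    func subnetTaken(cidr string) (bool, error) {
        _, ipnet, err := net.ParseCIDR(cidr)
        if err != nil {
            return false, err
        }
        addrs, err := net.InterfaceAddrs()
        if err != nil {
            return false, err
        }
        for _, a := range addrs {
            if n, ok := a.(*net.IPNet); ok && ipnet.Contains(n.IP) {
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        for _, cidr := range []string{"192.168.39.0/24", "192.168.50.0/24"} {
            taken, err := subnetTaken(cidr)
            if err != nil {
                panic(err)
            }
            fmt.Printf("%s taken=%v\n", cidr, taken)
        }
    }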
	I0717 01:43:43.325110   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | I0717 01:43:43.325027   60339 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 01:43:43.325133   59269 main.go:141] libmachine: (custom-flannel-894370) Setting up store path in /home/jenkins/minikube-integration/19264-3908/.minikube/machines/custom-flannel-894370 ...
	I0717 01:43:43.325154   59269 main.go:141] libmachine: (custom-flannel-894370) Building disk image from file:///home/jenkins/minikube-integration/19264-3908/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 01:43:43.325178   59269 main.go:141] libmachine: (custom-flannel-894370) Downloading /home/jenkins/minikube-integration/19264-3908/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19264-3908/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 01:43:43.553630   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | I0717 01:43:43.553533   60339 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/custom-flannel-894370/id_rsa...
	I0717 01:43:43.650956   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | I0717 01:43:43.650834   60339 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/custom-flannel-894370/custom-flannel-894370.rawdisk...
	I0717 01:43:43.651004   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | Writing magic tar header
	I0717 01:43:43.651021   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | Writing SSH key tar header
	I0717 01:43:43.651036   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | I0717 01:43:43.650962   60339 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19264-3908/.minikube/machines/custom-flannel-894370 ...
	I0717 01:43:43.651092   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/custom-flannel-894370
	I0717 01:43:43.651131   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube/machines
	I0717 01:43:43.651146   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 01:43:43.651160   59269 main.go:141] libmachine: (custom-flannel-894370) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube/machines/custom-flannel-894370 (perms=drwx------)
	I0717 01:43:43.651177   59269 main.go:141] libmachine: (custom-flannel-894370) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube/machines (perms=drwxr-xr-x)
	I0717 01:43:43.651190   59269 main.go:141] libmachine: (custom-flannel-894370) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube (perms=drwxr-xr-x)
	I0717 01:43:43.651205   59269 main.go:141] libmachine: (custom-flannel-894370) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908 (perms=drwxrwxr-x)
	I0717 01:43:43.651226   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908
	I0717 01:43:43.651238   59269 main.go:141] libmachine: (custom-flannel-894370) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 01:43:43.651263   59269 main.go:141] libmachine: (custom-flannel-894370) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 01:43:43.651274   59269 main.go:141] libmachine: (custom-flannel-894370) Creating domain...
	I0717 01:43:43.651341   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 01:43:43.651365   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | Checking permissions on dir: /home/jenkins
	I0717 01:43:43.651376   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | Checking permissions on dir: /home
	I0717 01:43:43.651384   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | Skipping /home - not owner
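The "Setting executable bit" lines above walk up from the machine directory and make each parent traversable so that libvirt's qemu process can reach the disk image. A minimal sketch of one such permission fix (function name and target path are hypothetical):

    package main

    import (
        "fmt"
        "os"
    )

    // ensureTraversable adds the owner execute bit to a directory so that
    // another user can descend through it, mirroring the permission fixes
    // logged above.
    func ensureTraversable(dir string) error {
        info, err := os.Stat(dir)
        if err != nil {
            return err
        }
        mode := info.Mode()
        if mode&0100 != 0 {
            return nil // owner execute bit already set
        }
        return os.Chmod(dir, mode|0100)
    }

    func main() {
        if err := ensureTraversable("/tmp/example-machine-dir"); err != nil {
            fmt.Println("chmod failed:", err)
        }
    }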
	I0717 01:43:43.652413   59269 main.go:141] libmachine: (custom-flannel-894370) define libvirt domain using xml: 
	I0717 01:43:43.652437   59269 main.go:141] libmachine: (custom-flannel-894370) <domain type='kvm'>
	I0717 01:43:43.652458   59269 main.go:141] libmachine: (custom-flannel-894370)   <name>custom-flannel-894370</name>
	I0717 01:43:43.652471   59269 main.go:141] libmachine: (custom-flannel-894370)   <memory unit='MiB'>3072</memory>
	I0717 01:43:43.652481   59269 main.go:141] libmachine: (custom-flannel-894370)   <vcpu>2</vcpu>
	I0717 01:43:43.652489   59269 main.go:141] libmachine: (custom-flannel-894370)   <features>
	I0717 01:43:43.652498   59269 main.go:141] libmachine: (custom-flannel-894370)     <acpi/>
	I0717 01:43:43.652512   59269 main.go:141] libmachine: (custom-flannel-894370)     <apic/>
	I0717 01:43:43.652527   59269 main.go:141] libmachine: (custom-flannel-894370)     <pae/>
	I0717 01:43:43.652536   59269 main.go:141] libmachine: (custom-flannel-894370)     
	I0717 01:43:43.652545   59269 main.go:141] libmachine: (custom-flannel-894370)   </features>
	I0717 01:43:43.652557   59269 main.go:141] libmachine: (custom-flannel-894370)   <cpu mode='host-passthrough'>
	I0717 01:43:43.652566   59269 main.go:141] libmachine: (custom-flannel-894370)   
	I0717 01:43:43.652580   59269 main.go:141] libmachine: (custom-flannel-894370)   </cpu>
	I0717 01:43:43.652591   59269 main.go:141] libmachine: (custom-flannel-894370)   <os>
	I0717 01:43:43.652599   59269 main.go:141] libmachine: (custom-flannel-894370)     <type>hvm</type>
	I0717 01:43:43.652608   59269 main.go:141] libmachine: (custom-flannel-894370)     <boot dev='cdrom'/>
	I0717 01:43:43.652618   59269 main.go:141] libmachine: (custom-flannel-894370)     <boot dev='hd'/>
	I0717 01:43:43.652628   59269 main.go:141] libmachine: (custom-flannel-894370)     <bootmenu enable='no'/>
	I0717 01:43:43.652645   59269 main.go:141] libmachine: (custom-flannel-894370)   </os>
	I0717 01:43:43.652675   59269 main.go:141] libmachine: (custom-flannel-894370)   <devices>
	I0717 01:43:43.652693   59269 main.go:141] libmachine: (custom-flannel-894370)     <disk type='file' device='cdrom'>
	I0717 01:43:43.652710   59269 main.go:141] libmachine: (custom-flannel-894370)       <source file='/home/jenkins/minikube-integration/19264-3908/.minikube/machines/custom-flannel-894370/boot2docker.iso'/>
	I0717 01:43:43.652726   59269 main.go:141] libmachine: (custom-flannel-894370)       <target dev='hdc' bus='scsi'/>
	I0717 01:43:43.652739   59269 main.go:141] libmachine: (custom-flannel-894370)       <readonly/>
	I0717 01:43:43.652749   59269 main.go:141] libmachine: (custom-flannel-894370)     </disk>
	I0717 01:43:43.652761   59269 main.go:141] libmachine: (custom-flannel-894370)     <disk type='file' device='disk'>
	I0717 01:43:43.652774   59269 main.go:141] libmachine: (custom-flannel-894370)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 01:43:43.652791   59269 main.go:141] libmachine: (custom-flannel-894370)       <source file='/home/jenkins/minikube-integration/19264-3908/.minikube/machines/custom-flannel-894370/custom-flannel-894370.rawdisk'/>
	I0717 01:43:43.652805   59269 main.go:141] libmachine: (custom-flannel-894370)       <target dev='hda' bus='virtio'/>
	I0717 01:43:43.652819   59269 main.go:141] libmachine: (custom-flannel-894370)     </disk>
	I0717 01:43:43.652830   59269 main.go:141] libmachine: (custom-flannel-894370)     <interface type='network'>
	I0717 01:43:43.652840   59269 main.go:141] libmachine: (custom-flannel-894370)       <source network='mk-custom-flannel-894370'/>
	I0717 01:43:43.652850   59269 main.go:141] libmachine: (custom-flannel-894370)       <model type='virtio'/>
	I0717 01:43:43.652858   59269 main.go:141] libmachine: (custom-flannel-894370)     </interface>
	I0717 01:43:43.652868   59269 main.go:141] libmachine: (custom-flannel-894370)     <interface type='network'>
	I0717 01:43:43.652877   59269 main.go:141] libmachine: (custom-flannel-894370)       <source network='default'/>
	I0717 01:43:43.652884   59269 main.go:141] libmachine: (custom-flannel-894370)       <model type='virtio'/>
	I0717 01:43:43.652895   59269 main.go:141] libmachine: (custom-flannel-894370)     </interface>
	I0717 01:43:43.652906   59269 main.go:141] libmachine: (custom-flannel-894370)     <serial type='pty'>
	I0717 01:43:43.652917   59269 main.go:141] libmachine: (custom-flannel-894370)       <target port='0'/>
	I0717 01:43:43.652927   59269 main.go:141] libmachine: (custom-flannel-894370)     </serial>
	I0717 01:43:43.652935   59269 main.go:141] libmachine: (custom-flannel-894370)     <console type='pty'>
	I0717 01:43:43.652946   59269 main.go:141] libmachine: (custom-flannel-894370)       <target type='serial' port='0'/>
	I0717 01:43:43.652956   59269 main.go:141] libmachine: (custom-flannel-894370)     </console>
	I0717 01:43:43.652963   59269 main.go:141] libmachine: (custom-flannel-894370)     <rng model='virtio'>
	I0717 01:43:43.652972   59269 main.go:141] libmachine: (custom-flannel-894370)       <backend model='random'>/dev/random</backend>
	I0717 01:43:43.652989   59269 main.go:141] libmachine: (custom-flannel-894370)     </rng>
	I0717 01:43:43.653001   59269 main.go:141] libmachine: (custom-flannel-894370)     
	I0717 01:43:43.653010   59269 main.go:141] libmachine: (custom-flannel-894370)     
	I0717 01:43:43.653018   59269 main.go:141] libmachine: (custom-flannel-894370)   </devices>
	I0717 01:43:43.653028   59269 main.go:141] libmachine: (custom-flannel-894370) </domain>
	I0717 01:43:43.653038   59269 main.go:141] libmachine: (custom-flannel-894370) 
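The domain XML printed above is rendered before the domain is defined through libvirt. A trimmed-down sketch of rendering such XML with text/template (this is a simplified illustration, not the kvm2 driver's actual template, which also includes the ISO cdrom, raw disk, serial console and RNG devices shown in the log):

    package main

    import (
        "os"
        "text/template"
    )

    // Trimmed-down domain template for illustration only.
    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.Memory}}</memory>
      <vcpu>{{.CPUs}}</vcpu>
      <devices>
        <interface type='network'>
          <source network='{{.Network}}'/>
          <model type='virtio'/>
        </interface>
      </devices>
    </domain>
    `

    type domainParams struct {
        Name    string
        Memory  int
        CPUs    int
        Network string
    }

    func main() {
        t := template.Must(template.New("domain").Parse(domainTmpl))
        p := domainParams{Name: "custom-flannel-894370", Memory: 3072, CPUs: 2, Network: "mk-custom-flannel-894370"}
        // The rendered XML is what ultimately gets handed to libvirt to define the domain.
        if err := t.Execute(os.Stdout, p); err != nil {
            panic(err)
        }
    }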
	I0717 01:43:43.832644   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | domain custom-flannel-894370 has defined MAC address 52:54:00:bd:7a:8d in network default
	I0717 01:43:43.833291   59269 main.go:141] libmachine: (custom-flannel-894370) Ensuring networks are active...
	I0717 01:43:43.833323   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | domain custom-flannel-894370 has defined MAC address 52:54:00:54:96:36 in network mk-custom-flannel-894370
	I0717 01:43:43.834019   59269 main.go:141] libmachine: (custom-flannel-894370) Ensuring network default is active
	I0717 01:43:43.834448   59269 main.go:141] libmachine: (custom-flannel-894370) Ensuring network mk-custom-flannel-894370 is active
	I0717 01:43:43.835063   59269 main.go:141] libmachine: (custom-flannel-894370) Getting domain xml...
	I0717 01:43:43.835884   59269 main.go:141] libmachine: (custom-flannel-894370) Creating domain...
	I0717 01:43:45.592115   59269 main.go:141] libmachine: (custom-flannel-894370) Waiting to get IP...
	I0717 01:43:45.593101   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | domain custom-flannel-894370 has defined MAC address 52:54:00:54:96:36 in network mk-custom-flannel-894370
	I0717 01:43:45.593565   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | unable to find current IP address of domain custom-flannel-894370 in network mk-custom-flannel-894370
	I0717 01:43:45.593587   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | I0717 01:43:45.593545   60339 retry.go:31] will retry after 299.272863ms: waiting for machine to come up
	I0717 01:43:45.894044   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | domain custom-flannel-894370 has defined MAC address 52:54:00:54:96:36 in network mk-custom-flannel-894370
	I0717 01:43:45.894599   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | unable to find current IP address of domain custom-flannel-894370 in network mk-custom-flannel-894370
	I0717 01:43:45.894628   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | I0717 01:43:45.894506   60339 retry.go:31] will retry after 368.669241ms: waiting for machine to come up
	I0717 01:43:46.264671   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | domain custom-flannel-894370 has defined MAC address 52:54:00:54:96:36 in network mk-custom-flannel-894370
	I0717 01:43:46.265176   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | unable to find current IP address of domain custom-flannel-894370 in network mk-custom-flannel-894370
	I0717 01:43:46.265206   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | I0717 01:43:46.265138   60339 retry.go:31] will retry after 300.871578ms: waiting for machine to come up
	I0717 01:43:46.567685   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | domain custom-flannel-894370 has defined MAC address 52:54:00:54:96:36 in network mk-custom-flannel-894370
	I0717 01:43:46.568331   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | unable to find current IP address of domain custom-flannel-894370 in network mk-custom-flannel-894370
	I0717 01:43:46.568354   59269 main.go:141] libmachine: (custom-flannel-894370) DBG | I0717 01:43:46.568275   60339 retry.go:31] will retry after 396.289194ms: waiting for machine to come up
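The "will retry after ...ms: waiting for machine to come up" messages above come from a backoff-and-retry helper that polls libvirt until the new VM picks up a DHCP lease. A generic sketch of that retry shape (not minikube's actual retry package):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry calls fn until it succeeds or attempts run out, sleeping a
    // randomized, growing interval between tries, similar in spirit to the
    // "will retry after Xms" messages in the log above.
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            d := base + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
            base *= 2
        }
        return err
    }

    func main() {
        err := retry(4, 300*time.Millisecond, func() error {
            return errors.New("waiting for machine to come up") // placeholder check
        })
        fmt.Println("gave up:", err)
    }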
	I0717 01:43:46.690128   59210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0717 01:43:46.866109   59210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 01:43:46.935844   59210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:43:46.967328   59210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 01:43:46.997032   59210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:43:47.027304   59210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:43:47.059138   59210 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:43:47.106798   59210 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:43:47.209482   59210 ssh_runner.go:195] Run: openssl version
	I0717 01:43:47.217488   59210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:43:47.259265   59210 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:43:47.269207   59210 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:43:47.269275   59210 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:43:47.290148   59210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:43:47.311372   59210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:43:47.340893   59210 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:43:47.346526   59210 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:43:47.346603   59210 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:43:47.371156   59210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:43:47.388799   59210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:43:47.403623   59210 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:43:47.408326   59210 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:43:47.408382   59210 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:43:47.420137   59210 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:43:47.430257   59210 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:43:47.435233   59210 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:43:47.441467   59210 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:43:47.447322   59210 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:43:47.453259   59210 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:43:47.458929   59210 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:43:47.465774   59210 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
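The `openssl x509 -noout -checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours before the cluster is started. An equivalent check sketched with Go's crypto/x509 (the file path is hypothetical; on the node the certs live under /var/lib/minikube/certs):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("apiserver.crt") // hypothetical local path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Same condition as `openssl x509 -checkend 86400`:
        // fail if the certificate expires within the next 24 hours.
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("certificate expires within 24h:", cert.NotAfter)
            os.Exit(1)
        }
        fmt.Println("certificate valid until", cert.NotAfter)
    }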
	I0717 01:43:47.471483   59210 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-572332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-572332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.73 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:43:47.471584   59210 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:43:47.471646   59210 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:43:47.529632   59210 cri.go:89] found id: "5966c8000b93b1c0962b84814e2110437e486e84ecdf442d38541874a44873be"
	I0717 01:43:47.529705   59210 cri.go:89] found id: "c3e5ec84c8d794a27992b18e1ce9bb2898c8c69114e1d088098933b9bf5b7b1e"
	I0717 01:43:47.529715   59210 cri.go:89] found id: "48340398920c2600b89850ae66b7f9a93f7cb12cd0ea47452bd8ca89566c3529"
	I0717 01:43:47.529720   59210 cri.go:89] found id: "35b6fe9631f123ce54faf52fbdb9334efff1af9648656e141548e2d9aebe0b66"
	I0717 01:43:47.529725   59210 cri.go:89] found id: "d17a581414bad11fca362a486a2564b74e7b38cec46d74380831f213bd4d0640"
	I0717 01:43:47.529729   59210 cri.go:89] found id: "ed46e4b3937b1defb6500b7e739e2e6405eaba172b5f7257a39ad610e27046ec"
	I0717 01:43:47.529734   59210 cri.go:89] found id: "cd07d30e735362109b4e7c6e0d60bbcc9734e834a59c22890a925a8d92c7a165"
	I0717 01:43:47.529738   59210 cri.go:89] found id: "829fff2479d2758ba399becaeff3d243eca612aea1c3e31f135abba261b67f2b"
	I0717 01:43:47.529742   59210 cri.go:89] found id: "f7901b4a241114b8fc035db88fb801356b5294c17c100c7caebb891ee9c19454"
	I0717 01:43:47.529750   59210 cri.go:89] found id: "25053f56189ccd2d9eac01f472bfdba81e90db01530c9aa5ffeadad93ce5b77c"
	I0717 01:43:47.529758   59210 cri.go:89] found id: "bb2c9734ddc9d8333ca05a68f831c58e689204836c5df03af894c2ea55d84293"
	I0717 01:43:47.529762   59210 cri.go:89] found id: "982aa7c4a6779d2ac9581dd83c58d5eb534bdc69419070f743d683e114b68ad3"
	I0717 01:43:47.529767   59210 cri.go:89] found id: ""
	I0717 01:43:47.529813   59210 ssh_runner.go:195] Run: sudo runc list -f json
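StartCluster begins by enumerating the existing kube-system containers through the CRI, which produces the "found id" list above. A minimal sketch that shells out to crictl the same way (the command and label are taken from the log; the parsing is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Mirrors the command in the log: list all container IDs in kube-system.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            panic(err)
        }
        for _, id := range strings.Fields(string(out)) {
            fmt.Println("found id:", id)
        }
    }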
	
	
	==> CRI-O <==
	Jul 17 01:43:57 kubernetes-upgrade-572332 crio[2782]: time="2024-07-17 01:43:57.905976732Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=050e896c-fcfe-49f3-841c-e8393695afd6 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:43:57 kubernetes-upgrade-572332 crio[2782]: time="2024-07-17 01:43:57.907522688Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8cd79233-dc59-4749-8df2-4e020ff70540 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:57 kubernetes-upgrade-572332 crio[2782]: time="2024-07-17 01:43:57.908957591Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180637908931773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8cd79233-dc59-4749-8df2-4e020ff70540 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:57 kubernetes-upgrade-572332 crio[2782]: time="2024-07-17 01:43:57.909783477Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8054377a-e15c-4f50-833b-b6e8188b81c8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:57 kubernetes-upgrade-572332 crio[2782]: time="2024-07-17 01:43:57.909896326Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8054377a-e15c-4f50-833b-b6e8188b81c8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:57 kubernetes-upgrade-572332 crio[2782]: time="2024-07-17 01:43:57.910344198Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:223f3073aeaec4654e9ab6df8708898e22b14ef6a58eb22ee9488e9cc70d41db,PodSandboxId:0290de3f7f6aedac9dbaf2a2fcf79367094b0026ccdb2d39875d9ae102cb9a92,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180634842831599,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-f92d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c3484f-0d59-43e1-8869-616c298ee124,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04060564b86a5a897513fbe6c790be8f8d441de142d1222f76d0ce4782577978,PodSandboxId:7b7322bba73194467a8cd9b016794597be7ee55f11c6c364f179ae3a64e9858b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180634763665120,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-s9lh4,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: b364f161-0c01-46ad-9afc-c414a6c5e78e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22dbd66eadb0f9650ec3edc90f7644500e02d768b15c28da609203d385beb5b5,PodSandboxId:91422a7eb17947ccc87457baa355f3fa28dda581fe860870340f7a6dbf32ac4b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAIN
ER_RUNNING,CreatedAt:1721180634127003140,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pn9q6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53f60fc-0f8e-4bb9-8c37-fb26f73cfcd2,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b34eb26d449b6c082d26c56e412eeecd2df3d62c23e7d28ff6eef881d307a1eb,PodSandboxId:d355d833213a7c9281d0079f64ff33ead862f91ba60e9ae32902e15994f1af01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172
1180634077265651,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 812bcb2f-554a-4dcf-a663-9e56a8fb0d91,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:737e35cedcfbb4949b607cfd9e9577ce2f53cdd962fd54cc074fbc90282ee956,PodSandboxId:042ad510d67506c3512c30ac8e237ab12c4343072e2e299a8732f21bffee4fcf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721180630318373062,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c61b5fa1356caa02af5661d28d2740c,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74ee2f9e65a6f30cc96ad4b1155f36491d949aac8ee6c20dd804130dbe6414d2,PodSandboxId:dadd23ae90aab8875905b6c38336bc4a10faad6576c746595219c7fb0c39612f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721180630299508457,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b44cf8ecf986c0cee7675f3968d7e091,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59be0c07de111877b9ba86fb5553a578f611dd5bcc911f3a4a9fe7c471d8707a,PodSandboxId:4e49888c39df1b1d28af044a79e67516e80fb030351dd89d05b15221ce9d6c17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721180630282919614,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aebec6d072e764be935c41f6bd09e2d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ef3f3879d730c3633847746ed042965d2b610156fe27d66410e84fb21bdeb31,PodSandboxId:f0a418b11ae8ced5596432d6bfc4dfcbd1b74a5747a92a1476a8c902aa04e172,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721180630302685123,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f84b6ebb8a2628292f6b6dcb4d2e22a,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5966c8000b93b1c0962b84814e2110437e486e84ecdf442d38541874a44873be,PodSandboxId:91422a7eb17947ccc87457baa355f3fa28dda581fe860870340f7a6dbf32ac4b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721180626660675075,Labels:map[string]string{io.k
ubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pn9q6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53f60fc-0f8e-4bb9-8c37-fb26f73cfcd2,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3e5ec84c8d794a27992b18e1ce9bb2898c8c69114e1d088098933b9bf5b7b1e,PodSandboxId:d355d833213a7c9281d0079f64ff33ead862f91ba60e9ae32902e15994f1af01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721180626628199594,Labels:map[string]string{io.kubernetes.container.name:
storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 812bcb2f-554a-4dcf-a663-9e56a8fb0d91,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48340398920c2600b89850ae66b7f9a93f7cb12cd0ea47452bd8ca89566c3529,PodSandboxId:042ad510d67506c3512c30ac8e237ab12c4343072e2e299a8732f21bffee4fcf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721180626543371974,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name
: etcd-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c61b5fa1356caa02af5661d28d2740c,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b6fe9631f123ce54faf52fbdb9334efff1af9648656e141548e2d9aebe0b66,PodSandboxId:f0a418b11ae8ced5596432d6bfc4dfcbd1b74a5747a92a1476a8c902aa04e172,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721180626419747665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f84b6ebb8a2628292f6b6dcb4d2e22a,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d17a581414bad11fca362a486a2564b74e7b38cec46d74380831f213bd4d0640,PodSandboxId:209ef031e4b10dc89b84fad34b87e438ca32aea7fd33ad68763caeaa212887b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721180624975131076,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.
name: kube-apiserver-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aebec6d072e764be935c41f6bd09e2d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed46e4b3937b1defb6500b7e739e2e6405eaba172b5f7257a39ad610e27046ec,PodSandboxId:1073d8eaccc3f230bf84ebd8a9ae97ccd93b0b6e550c6608d01da3058a3eec92,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721180624496460157,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name:
kube-scheduler-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b44cf8ecf986c0cee7675f3968d7e091,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd07d30e735362109b4e7c6e0d60bbcc9734e834a59c22890a925a8d92c7a165,PodSandboxId:4cbfb449f1fdd933d86f03407299c7abbdcfefa4d9a47f2aabf0aa7626d2a7db,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721180610225929861,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-s
9lh4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b364f161-0c01-46ad-9afc-c414a6c5e78e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:829fff2479d2758ba399becaeff3d243eca612aea1c3e31f135abba261b67f2b,PodSandboxId:c177c24fc0433bb890e925b6617b70608f9d6a91e24fe5db3549b8b021622500,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a6
74fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721180610216840259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-f92d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c3484f-0d59-43e1-8869-616c298ee124,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8054377a-e15c-4f50-833b-b6e8188b81c8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:57 kubernetes-upgrade-572332 crio[2782]: time="2024-07-17 01:43:57.951532220Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e6b0205f-eecf-4d26-aa5e-b05f1fe38db4 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:43:57 kubernetes-upgrade-572332 crio[2782]: time="2024-07-17 01:43:57.951605789Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e6b0205f-eecf-4d26-aa5e-b05f1fe38db4 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:43:57 kubernetes-upgrade-572332 crio[2782]: time="2024-07-17 01:43:57.952776173Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=48d8a03f-4e7c-4f42-b4c9-537bb79642a4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:57 kubernetes-upgrade-572332 crio[2782]: time="2024-07-17 01:43:57.953119767Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180637953099743,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=48d8a03f-4e7c-4f42-b4c9-537bb79642a4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:57 kubernetes-upgrade-572332 crio[2782]: time="2024-07-17 01:43:57.953643976Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=426c37f2-8194-436b-adb1-5e52fa02f3c0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:57 kubernetes-upgrade-572332 crio[2782]: time="2024-07-17 01:43:57.953699198Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=426c37f2-8194-436b-adb1-5e52fa02f3c0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:57 kubernetes-upgrade-572332 crio[2782]: time="2024-07-17 01:43:57.954045132Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:223f3073aeaec4654e9ab6df8708898e22b14ef6a58eb22ee9488e9cc70d41db,PodSandboxId:0290de3f7f6aedac9dbaf2a2fcf79367094b0026ccdb2d39875d9ae102cb9a92,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180634842831599,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-f92d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c3484f-0d59-43e1-8869-616c298ee124,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04060564b86a5a897513fbe6c790be8f8d441de142d1222f76d0ce4782577978,PodSandboxId:7b7322bba73194467a8cd9b016794597be7ee55f11c6c364f179ae3a64e9858b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180634763665120,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-s9lh4,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: b364f161-0c01-46ad-9afc-c414a6c5e78e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22dbd66eadb0f9650ec3edc90f7644500e02d768b15c28da609203d385beb5b5,PodSandboxId:91422a7eb17947ccc87457baa355f3fa28dda581fe860870340f7a6dbf32ac4b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAIN
ER_RUNNING,CreatedAt:1721180634127003140,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pn9q6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53f60fc-0f8e-4bb9-8c37-fb26f73cfcd2,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b34eb26d449b6c082d26c56e412eeecd2df3d62c23e7d28ff6eef881d307a1eb,PodSandboxId:d355d833213a7c9281d0079f64ff33ead862f91ba60e9ae32902e15994f1af01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172
1180634077265651,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 812bcb2f-554a-4dcf-a663-9e56a8fb0d91,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:737e35cedcfbb4949b607cfd9e9577ce2f53cdd962fd54cc074fbc90282ee956,PodSandboxId:042ad510d67506c3512c30ac8e237ab12c4343072e2e299a8732f21bffee4fcf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721180630318373062,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c61b5fa1356caa02af5661d28d2740c,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74ee2f9e65a6f30cc96ad4b1155f36491d949aac8ee6c20dd804130dbe6414d2,PodSandboxId:dadd23ae90aab8875905b6c38336bc4a10faad6576c746595219c7fb0c39612f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721180630299508457,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b44cf8ecf986c0cee7675f3968d7e091,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59be0c07de111877b9ba86fb5553a578f611dd5bcc911f3a4a9fe7c471d8707a,PodSandboxId:4e49888c39df1b1d28af044a79e67516e80fb030351dd89d05b15221ce9d6c17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721180630282919614,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aebec6d072e764be935c41f6bd09e2d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ef3f3879d730c3633847746ed042965d2b610156fe27d66410e84fb21bdeb31,PodSandboxId:f0a418b11ae8ced5596432d6bfc4dfcbd1b74a5747a92a1476a8c902aa04e172,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721180630302685123,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f84b6ebb8a2628292f6b6dcb4d2e22a,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5966c8000b93b1c0962b84814e2110437e486e84ecdf442d38541874a44873be,PodSandboxId:91422a7eb17947ccc87457baa355f3fa28dda581fe860870340f7a6dbf32ac4b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721180626660675075,Labels:map[string]string{io.k
ubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pn9q6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53f60fc-0f8e-4bb9-8c37-fb26f73cfcd2,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3e5ec84c8d794a27992b18e1ce9bb2898c8c69114e1d088098933b9bf5b7b1e,PodSandboxId:d355d833213a7c9281d0079f64ff33ead862f91ba60e9ae32902e15994f1af01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721180626628199594,Labels:map[string]string{io.kubernetes.container.name:
storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 812bcb2f-554a-4dcf-a663-9e56a8fb0d91,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48340398920c2600b89850ae66b7f9a93f7cb12cd0ea47452bd8ca89566c3529,PodSandboxId:042ad510d67506c3512c30ac8e237ab12c4343072e2e299a8732f21bffee4fcf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721180626543371974,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name
: etcd-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c61b5fa1356caa02af5661d28d2740c,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b6fe9631f123ce54faf52fbdb9334efff1af9648656e141548e2d9aebe0b66,PodSandboxId:f0a418b11ae8ced5596432d6bfc4dfcbd1b74a5747a92a1476a8c902aa04e172,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721180626419747665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f84b6ebb8a2628292f6b6dcb4d2e22a,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d17a581414bad11fca362a486a2564b74e7b38cec46d74380831f213bd4d0640,PodSandboxId:209ef031e4b10dc89b84fad34b87e438ca32aea7fd33ad68763caeaa212887b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721180624975131076,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.
name: kube-apiserver-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aebec6d072e764be935c41f6bd09e2d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed46e4b3937b1defb6500b7e739e2e6405eaba172b5f7257a39ad610e27046ec,PodSandboxId:1073d8eaccc3f230bf84ebd8a9ae97ccd93b0b6e550c6608d01da3058a3eec92,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721180624496460157,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name:
kube-scheduler-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b44cf8ecf986c0cee7675f3968d7e091,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd07d30e735362109b4e7c6e0d60bbcc9734e834a59c22890a925a8d92c7a165,PodSandboxId:4cbfb449f1fdd933d86f03407299c7abbdcfefa4d9a47f2aabf0aa7626d2a7db,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721180610225929861,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-s
9lh4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b364f161-0c01-46ad-9afc-c414a6c5e78e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:829fff2479d2758ba399becaeff3d243eca612aea1c3e31f135abba261b67f2b,PodSandboxId:c177c24fc0433bb890e925b6617b70608f9d6a91e24fe5db3549b8b021622500,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a6
74fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721180610216840259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-f92d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c3484f-0d59-43e1-8869-616c298ee124,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=426c37f2-8194-436b-adb1-5e52fa02f3c0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:57 kubernetes-upgrade-572332 crio[2782]: time="2024-07-17 01:43:57.988072057Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=912adbed-7798-4fd6-8f46-413d401c2eaa name=/runtime.v1.RuntimeService/Version
	Jul 17 01:43:57 kubernetes-upgrade-572332 crio[2782]: time="2024-07-17 01:43:57.988149365Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=912adbed-7798-4fd6-8f46-413d401c2eaa name=/runtime.v1.RuntimeService/Version
	Jul 17 01:43:57 kubernetes-upgrade-572332 crio[2782]: time="2024-07-17 01:43:57.989086028Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0b69ab8c-fe22-4aeb-94e3-18130a9fa236 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:57 kubernetes-upgrade-572332 crio[2782]: time="2024-07-17 01:43:57.989597946Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180637989573197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0b69ab8c-fe22-4aeb-94e3-18130a9fa236 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:57 kubernetes-upgrade-572332 crio[2782]: time="2024-07-17 01:43:57.990164158Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=15845522-ca79-4c67-ae16-c589850c97ac name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:57 kubernetes-upgrade-572332 crio[2782]: time="2024-07-17 01:43:57.990216948Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=15845522-ca79-4c67-ae16-c589850c97ac name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:57 kubernetes-upgrade-572332 crio[2782]: time="2024-07-17 01:43:57.990921972Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:223f3073aeaec4654e9ab6df8708898e22b14ef6a58eb22ee9488e9cc70d41db,PodSandboxId:0290de3f7f6aedac9dbaf2a2fcf79367094b0026ccdb2d39875d9ae102cb9a92,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180634842831599,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-f92d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c3484f-0d59-43e1-8869-616c298ee124,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04060564b86a5a897513fbe6c790be8f8d441de142d1222f76d0ce4782577978,PodSandboxId:7b7322bba73194467a8cd9b016794597be7ee55f11c6c364f179ae3a64e9858b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180634763665120,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-s9lh4,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: b364f161-0c01-46ad-9afc-c414a6c5e78e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22dbd66eadb0f9650ec3edc90f7644500e02d768b15c28da609203d385beb5b5,PodSandboxId:91422a7eb17947ccc87457baa355f3fa28dda581fe860870340f7a6dbf32ac4b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAIN
ER_RUNNING,CreatedAt:1721180634127003140,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pn9q6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53f60fc-0f8e-4bb9-8c37-fb26f73cfcd2,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b34eb26d449b6c082d26c56e412eeecd2df3d62c23e7d28ff6eef881d307a1eb,PodSandboxId:d355d833213a7c9281d0079f64ff33ead862f91ba60e9ae32902e15994f1af01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172
1180634077265651,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 812bcb2f-554a-4dcf-a663-9e56a8fb0d91,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:737e35cedcfbb4949b607cfd9e9577ce2f53cdd962fd54cc074fbc90282ee956,PodSandboxId:042ad510d67506c3512c30ac8e237ab12c4343072e2e299a8732f21bffee4fcf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721180630318373062,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c61b5fa1356caa02af5661d28d2740c,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74ee2f9e65a6f30cc96ad4b1155f36491d949aac8ee6c20dd804130dbe6414d2,PodSandboxId:dadd23ae90aab8875905b6c38336bc4a10faad6576c746595219c7fb0c39612f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721180630299508457,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b44cf8ecf986c0cee7675f3968d7e091,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59be0c07de111877b9ba86fb5553a578f611dd5bcc911f3a4a9fe7c471d8707a,PodSandboxId:4e49888c39df1b1d28af044a79e67516e80fb030351dd89d05b15221ce9d6c17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721180630282919614,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aebec6d072e764be935c41f6bd09e2d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ef3f3879d730c3633847746ed042965d2b610156fe27d66410e84fb21bdeb31,PodSandboxId:f0a418b11ae8ced5596432d6bfc4dfcbd1b74a5747a92a1476a8c902aa04e172,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721180630302685123,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f84b6ebb8a2628292f6b6dcb4d2e22a,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5966c8000b93b1c0962b84814e2110437e486e84ecdf442d38541874a44873be,PodSandboxId:91422a7eb17947ccc87457baa355f3fa28dda581fe860870340f7a6dbf32ac4b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721180626660675075,Labels:map[string]string{io.k
ubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pn9q6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53f60fc-0f8e-4bb9-8c37-fb26f73cfcd2,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3e5ec84c8d794a27992b18e1ce9bb2898c8c69114e1d088098933b9bf5b7b1e,PodSandboxId:d355d833213a7c9281d0079f64ff33ead862f91ba60e9ae32902e15994f1af01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721180626628199594,Labels:map[string]string{io.kubernetes.container.name:
storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 812bcb2f-554a-4dcf-a663-9e56a8fb0d91,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48340398920c2600b89850ae66b7f9a93f7cb12cd0ea47452bd8ca89566c3529,PodSandboxId:042ad510d67506c3512c30ac8e237ab12c4343072e2e299a8732f21bffee4fcf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721180626543371974,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name
: etcd-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c61b5fa1356caa02af5661d28d2740c,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b6fe9631f123ce54faf52fbdb9334efff1af9648656e141548e2d9aebe0b66,PodSandboxId:f0a418b11ae8ced5596432d6bfc4dfcbd1b74a5747a92a1476a8c902aa04e172,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721180626419747665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f84b6ebb8a2628292f6b6dcb4d2e22a,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d17a581414bad11fca362a486a2564b74e7b38cec46d74380831f213bd4d0640,PodSandboxId:209ef031e4b10dc89b84fad34b87e438ca32aea7fd33ad68763caeaa212887b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721180624975131076,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.
name: kube-apiserver-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aebec6d072e764be935c41f6bd09e2d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed46e4b3937b1defb6500b7e739e2e6405eaba172b5f7257a39ad610e27046ec,PodSandboxId:1073d8eaccc3f230bf84ebd8a9ae97ccd93b0b6e550c6608d01da3058a3eec92,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721180624496460157,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name:
kube-scheduler-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b44cf8ecf986c0cee7675f3968d7e091,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd07d30e735362109b4e7c6e0d60bbcc9734e834a59c22890a925a8d92c7a165,PodSandboxId:4cbfb449f1fdd933d86f03407299c7abbdcfefa4d9a47f2aabf0aa7626d2a7db,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721180610225929861,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-s
9lh4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b364f161-0c01-46ad-9afc-c414a6c5e78e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:829fff2479d2758ba399becaeff3d243eca612aea1c3e31f135abba261b67f2b,PodSandboxId:c177c24fc0433bb890e925b6617b70608f9d6a91e24fe5db3549b8b021622500,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a6
74fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721180610216840259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-f92d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c3484f-0d59-43e1-8869-616c298ee124,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=15845522-ca79-4c67-ae16-c589850c97ac name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:58 kubernetes-upgrade-572332 crio[2782]: time="2024-07-17 01:43:58.013977301Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=760fcbb3-e346-4761-9c5b-9f0810855865 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 01:43:58 kubernetes-upgrade-572332 crio[2782]: time="2024-07-17 01:43:58.014690955Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7b7322bba73194467a8cd9b016794597be7ee55f11c6c364f179ae3a64e9858b,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-s9lh4,Uid:b364f161-0c01-46ad-9afc-c414a6c5e78e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721180634100823669,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-s9lh4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b364f161-0c01-46ad-9afc-c414a6c5e78e,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T01:43:53.750759454Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0290de3f7f6aedac9dbaf2a2fcf79367094b0026ccdb2d39875d9ae102cb9a92,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-f92d8,Uid:91c3484f-0d59-43e1-8869-616c298ee124,Namespac
e:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721180634098438482,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-f92d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c3484f-0d59-43e1-8869-616c298ee124,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T01:43:53.750675439Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:042ad510d67506c3512c30ac8e237ab12c4343072e2e299a8732f21bffee4fcf,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-572332,Uid:1c61b5fa1356caa02af5661d28d2740c,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1721180626177167658,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c61b5fa1356caa02af5661d28d2740c,tier: control-plane,},Annotations:map[string]string{kub
eadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.73:2379,kubernetes.io/config.hash: 1c61b5fa1356caa02af5661d28d2740c,kubernetes.io/config.seen: 2024-07-17T01:43:15.686116353Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f0a418b11ae8ced5596432d6bfc4dfcbd1b74a5747a92a1476a8c902aa04e172,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-572332,Uid:4f84b6ebb8a2628292f6b6dcb4d2e22a,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1721180626165365851,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f84b6ebb8a2628292f6b6dcb4d2e22a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4f84b6ebb8a2628292f6b6dcb4d2e22a,kubernetes.io/config.seen: 2024-07-17T01:43:15.626855298Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&Pod
Sandbox{Id:dadd23ae90aab8875905b6c38336bc4a10faad6576c746595219c7fb0c39612f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-572332,Uid:b44cf8ecf986c0cee7675f3968d7e091,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1721180626163730487,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b44cf8ecf986c0cee7675f3968d7e091,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b44cf8ecf986c0cee7675f3968d7e091,kubernetes.io/config.seen: 2024-07-17T01:43:15.626856616Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:91422a7eb17947ccc87457baa355f3fa28dda581fe860870340f7a6dbf32ac4b,Metadata:&PodSandboxMetadata{Name:kube-proxy-pn9q6,Uid:a53f60fc-0f8e-4bb9-8c37-fb26f73cfcd2,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1721180626139145865,Labels:map[string]string{
controller-revision-hash: 6558c48888,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-pn9q6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53f60fc-0f8e-4bb9-8c37-fb26f73cfcd2,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T01:43:28.054421802Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d355d833213a7c9281d0079f64ff33ead862f91ba60e9ae32902e15994f1af01,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:812bcb2f-554a-4dcf-a663-9e56a8fb0d91,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1721180626062177615,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 812bcb2f-554a-4dcf-a663-9e56a8fb0d91,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configurat
ion: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-17T01:43:28.094104885Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4e49888c39df1b1d28af044a79e67516e80fb030351dd89d05b15221ce9d6c17,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-572332,Uid:3aebec6d072e764be935c41f6bd09e2d,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,
CreatedAt:1721180626014259022,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aebec6d072e764be935c41f6bd09e2d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.73:8443,kubernetes.io/config.hash: 3aebec6d072e764be935c41f6bd09e2d,kubernetes.io/config.seen: 2024-07-17T01:43:15.626850223Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:209ef031e4b10dc89b84fad34b87e438ca32aea7fd33ad68763caeaa212887b4,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-572332,Uid:3aebec6d072e764be935c41f6bd09e2d,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1721180624427537188,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-572332,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aebec6d072e764be935c41f6bd09e2d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.73:8443,kubernetes.io/config.hash: 3aebec6d072e764be935c41f6bd09e2d,kubernetes.io/config.seen: 2024-07-17T01:43:15.626850223Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1073d8eaccc3f230bf84ebd8a9ae97ccd93b0b6e550c6608d01da3058a3eec92,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-572332,Uid:b44cf8ecf986c0cee7675f3968d7e091,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1721180624168256703,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b44cf8ecf986c0cee7675f3968d7e091,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b44cf8ecf986c0cee7675f39
68d7e091,kubernetes.io/config.seen: 2024-07-17T01:43:15.626856616Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4cbfb449f1fdd933d86f03407299c7abbdcfefa4d9a47f2aabf0aa7626d2a7db,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-s9lh4,Uid:b364f161-0c01-46ad-9afc-c414a6c5e78e,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721180609938046231,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-s9lh4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b364f161-0c01-46ad-9afc-c414a6c5e78e,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T01:43:29.620967336Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c177c24fc0433bb890e925b6617b70608f9d6a91e24fe5db3549b8b021622500,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-f92d8,Uid:91c3484f-0d59-43e1-8869-616c298ee124,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY
,CreatedAt:1721180609920495520,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-f92d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c3484f-0d59-43e1-8869-616c298ee124,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T01:43:29.610891700Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=760fcbb3-e346-4761-9c5b-9f0810855865 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 01:43:58 kubernetes-upgrade-572332 crio[2782]: time="2024-07-17 01:43:58.015426628Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=510e08a9-17f3-4d66-97fa-84eb91aa0240 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:58 kubernetes-upgrade-572332 crio[2782]: time="2024-07-17 01:43:58.015499155Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=510e08a9-17f3-4d66-97fa-84eb91aa0240 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:58 kubernetes-upgrade-572332 crio[2782]: time="2024-07-17 01:43:58.016075669Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:223f3073aeaec4654e9ab6df8708898e22b14ef6a58eb22ee9488e9cc70d41db,PodSandboxId:0290de3f7f6aedac9dbaf2a2fcf79367094b0026ccdb2d39875d9ae102cb9a92,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180634842831599,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-f92d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c3484f-0d59-43e1-8869-616c298ee124,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04060564b86a5a897513fbe6c790be8f8d441de142d1222f76d0ce4782577978,PodSandboxId:7b7322bba73194467a8cd9b016794597be7ee55f11c6c364f179ae3a64e9858b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180634763665120,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-s9lh4,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: b364f161-0c01-46ad-9afc-c414a6c5e78e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22dbd66eadb0f9650ec3edc90f7644500e02d768b15c28da609203d385beb5b5,PodSandboxId:91422a7eb17947ccc87457baa355f3fa28dda581fe860870340f7a6dbf32ac4b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAIN
ER_RUNNING,CreatedAt:1721180634127003140,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pn9q6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53f60fc-0f8e-4bb9-8c37-fb26f73cfcd2,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b34eb26d449b6c082d26c56e412eeecd2df3d62c23e7d28ff6eef881d307a1eb,PodSandboxId:d355d833213a7c9281d0079f64ff33ead862f91ba60e9ae32902e15994f1af01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172
1180634077265651,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 812bcb2f-554a-4dcf-a663-9e56a8fb0d91,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:737e35cedcfbb4949b607cfd9e9577ce2f53cdd962fd54cc074fbc90282ee956,PodSandboxId:042ad510d67506c3512c30ac8e237ab12c4343072e2e299a8732f21bffee4fcf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721180630318373062,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c61b5fa1356caa02af5661d28d2740c,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74ee2f9e65a6f30cc96ad4b1155f36491d949aac8ee6c20dd804130dbe6414d2,PodSandboxId:dadd23ae90aab8875905b6c38336bc4a10faad6576c746595219c7fb0c39612f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721180630299508457,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b44cf8ecf986c0cee7675f3968d7e091,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59be0c07de111877b9ba86fb5553a578f611dd5bcc911f3a4a9fe7c471d8707a,PodSandboxId:4e49888c39df1b1d28af044a79e67516e80fb030351dd89d05b15221ce9d6c17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721180630282919614,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aebec6d072e764be935c41f6bd09e2d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ef3f3879d730c3633847746ed042965d2b610156fe27d66410e84fb21bdeb31,PodSandboxId:f0a418b11ae8ced5596432d6bfc4dfcbd1b74a5747a92a1476a8c902aa04e172,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721180630302685123,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f84b6ebb8a2628292f6b6dcb4d2e22a,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5966c8000b93b1c0962b84814e2110437e486e84ecdf442d38541874a44873be,PodSandboxId:91422a7eb17947ccc87457baa355f3fa28dda581fe860870340f7a6dbf32ac4b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721180626660675075,Labels:map[string]string{io.k
ubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pn9q6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53f60fc-0f8e-4bb9-8c37-fb26f73cfcd2,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3e5ec84c8d794a27992b18e1ce9bb2898c8c69114e1d088098933b9bf5b7b1e,PodSandboxId:d355d833213a7c9281d0079f64ff33ead862f91ba60e9ae32902e15994f1af01,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721180626628199594,Labels:map[string]string{io.kubernetes.container.name:
storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 812bcb2f-554a-4dcf-a663-9e56a8fb0d91,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48340398920c2600b89850ae66b7f9a93f7cb12cd0ea47452bd8ca89566c3529,PodSandboxId:042ad510d67506c3512c30ac8e237ab12c4343072e2e299a8732f21bffee4fcf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721180626543371974,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name
: etcd-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c61b5fa1356caa02af5661d28d2740c,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b6fe9631f123ce54faf52fbdb9334efff1af9648656e141548e2d9aebe0b66,PodSandboxId:f0a418b11ae8ced5596432d6bfc4dfcbd1b74a5747a92a1476a8c902aa04e172,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721180626419747665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f84b6ebb8a2628292f6b6dcb4d2e22a,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d17a581414bad11fca362a486a2564b74e7b38cec46d74380831f213bd4d0640,PodSandboxId:209ef031e4b10dc89b84fad34b87e438ca32aea7fd33ad68763caeaa212887b4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721180624975131076,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.
name: kube-apiserver-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aebec6d072e764be935c41f6bd09e2d,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed46e4b3937b1defb6500b7e739e2e6405eaba172b5f7257a39ad610e27046ec,PodSandboxId:1073d8eaccc3f230bf84ebd8a9ae97ccd93b0b6e550c6608d01da3058a3eec92,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721180624496460157,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name:
kube-scheduler-kubernetes-upgrade-572332,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b44cf8ecf986c0cee7675f3968d7e091,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd07d30e735362109b4e7c6e0d60bbcc9734e834a59c22890a925a8d92c7a165,PodSandboxId:4cbfb449f1fdd933d86f03407299c7abbdcfefa4d9a47f2aabf0aa7626d2a7db,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721180610225929861,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-s
9lh4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b364f161-0c01-46ad-9afc-c414a6c5e78e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:829fff2479d2758ba399becaeff3d243eca612aea1c3e31f135abba261b67f2b,PodSandboxId:c177c24fc0433bb890e925b6617b70608f9d6a91e24fe5db3549b8b021622500,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a6
74fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721180610216840259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-f92d8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91c3484f-0d59-43e1-8869-616c298ee124,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=510e08a9-17f3-4d66-97fa-84eb91aa0240 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	223f3073aeaec       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   1                   0290de3f7f6ae       coredns-5cfdc65f69-f92d8
	04060564b86a5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   1                   7b7322bba7319       coredns-5cfdc65f69-s9lh4
	22dbd66eadb0f       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   3 seconds ago       Running             kube-proxy                2                   91422a7eb1794       kube-proxy-pn9q6
	b34eb26d449b6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   4 seconds ago       Running             storage-provisioner       2                   d355d833213a7       storage-provisioner
	737e35cedcfbb       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   7 seconds ago       Running             etcd                      2                   042ad510d6750       etcd-kubernetes-upgrade-572332
	4ef3f3879d730       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   7 seconds ago       Running             kube-controller-manager   2                   f0a418b11ae8c       kube-controller-manager-kubernetes-upgrade-572332
	74ee2f9e65a6f       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   7 seconds ago       Running             kube-scheduler            2                   dadd23ae90aab       kube-scheduler-kubernetes-upgrade-572332
	59be0c07de111       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   7 seconds ago       Running             kube-apiserver            2                   4e49888c39df1       kube-apiserver-kubernetes-upgrade-572332
	5966c8000b93b       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   11 seconds ago      Exited              kube-proxy                1                   91422a7eb1794       kube-proxy-pn9q6
	c3e5ec84c8d79       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   11 seconds ago      Exited              storage-provisioner       1                   d355d833213a7       storage-provisioner
	48340398920c2       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   11 seconds ago      Exited              etcd                      1                   042ad510d6750       etcd-kubernetes-upgrade-572332
	35b6fe9631f12       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   11 seconds ago      Exited              kube-controller-manager   1                   f0a418b11ae8c       kube-controller-manager-kubernetes-upgrade-572332
	d17a581414bad       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   13 seconds ago      Exited              kube-apiserver            1                   209ef031e4b10       kube-apiserver-kubernetes-upgrade-572332
	ed46e4b3937b1       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   13 seconds ago      Exited              kube-scheduler            1                   1073d8eaccc3f       kube-scheduler-kubernetes-upgrade-572332
	cd07d30e73536       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   27 seconds ago      Exited              coredns                   0                   4cbfb449f1fdd       coredns-5cfdc65f69-s9lh4
	829fff2479d27       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   27 seconds ago      Exited              coredns                   0                   c177c24fc0433       coredns-5cfdc65f69-f92d8
	
	
	==> coredns [04060564b86a5a897513fbe6c790be8f8d441de142d1222f76d0ce4782577978] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [223f3073aeaec4654e9ab6df8708898e22b14ef6a58eb22ee9488e9cc70d41db] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [829fff2479d2758ba399becaeff3d243eca612aea1c3e31f135abba261b67f2b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cd07d30e735362109b4e7c6e0d60bbcc9734e834a59c22890a925a8d92c7a165] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-572332
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-572332
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:43:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-572332
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:43:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:43:53 +0000   Wed, 17 Jul 2024 01:43:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:43:53 +0000   Wed, 17 Jul 2024 01:43:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:43:53 +0000   Wed, 17 Jul 2024 01:43:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:43:53 +0000   Wed, 17 Jul 2024 01:43:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.73
	  Hostname:    kubernetes-upgrade-572332
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a12f2798327b4c3f8bcd525abc5f3b70
	  System UUID:                a12f2798-327b-4c3f-8bcd-525abc5f3b70
	  Boot ID:                    26a0e2d6-48db-47c3-afe2-5f720d5cf527
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-f92d8                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     31s
	  kube-system                 coredns-5cfdc65f69-s9lh4                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     31s
	  kube-system                 etcd-kubernetes-upgrade-572332                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         34s
	  kube-system                 kube-apiserver-kubernetes-upgrade-572332             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-572332    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-pn9q6                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-kubernetes-upgrade-572332             100m (5%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 43s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  42s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    42s (x8 over 43s)  kubelet          Node kubernetes-upgrade-572332 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s (x7 over 43s)  kubelet          Node kubernetes-upgrade-572332 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  42s (x8 over 43s)  kubelet          Node kubernetes-upgrade-572332 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           30s                node-controller  Node kubernetes-upgrade-572332 event: Registered Node kubernetes-upgrade-572332 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-572332 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-572332 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)    kubelet          Node kubernetes-upgrade-572332 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           0s                 node-controller  Node kubernetes-upgrade-572332 event: Registered Node kubernetes-upgrade-572332 in Controller
	
	
	==> dmesg <==
	[  +1.635980] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul17 01:43] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.072650] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075486] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.202014] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.152160] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.343225] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +4.679222] systemd-fstab-generator[737]: Ignoring "noauto" option for root device
	[  +0.061828] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.835215] systemd-fstab-generator[860]: Ignoring "noauto" option for root device
	[ +10.053296] systemd-fstab-generator[1253]: Ignoring "noauto" option for root device
	[  +0.115089] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.260031] kauditd_printk_skb: 103 callbacks suppressed
	[ +12.628206] systemd-fstab-generator[2206]: Ignoring "noauto" option for root device
	[  +0.155385] systemd-fstab-generator[2218]: Ignoring "noauto" option for root device
	[  +0.229266] systemd-fstab-generator[2233]: Ignoring "noauto" option for root device
	[  +0.200838] systemd-fstab-generator[2302]: Ignoring "noauto" option for root device
	[  +0.721881] systemd-fstab-generator[2545]: Ignoring "noauto" option for root device
	[  +1.157581] systemd-fstab-generator[2912]: Ignoring "noauto" option for root device
	[  +3.569447] systemd-fstab-generator[3472]: Ignoring "noauto" option for root device
	[  +0.080061] kauditd_printk_skb: 252 callbacks suppressed
	[  +5.035997] kauditd_printk_skb: 67 callbacks suppressed
	[  +1.477144] systemd-fstab-generator[4232]: Ignoring "noauto" option for root device
	
	
	==> etcd [48340398920c2600b89850ae66b7f9a93f7cb12cd0ea47452bd8ca89566c3529] <==
	{"level":"info","ts":"2024-07-17T01:43:46.91271Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-17T01:43:46.922784Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"650b7a69be526515","local-member-id":"7cceac74c4078a69","commit-index":399}
	{"level":"info","ts":"2024-07-17T01:43:46.929211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7cceac74c4078a69 switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-17T01:43:46.929432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7cceac74c4078a69 became follower at term 2"}
	{"level":"info","ts":"2024-07-17T01:43:46.929472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 7cceac74c4078a69 [peers: [], term: 2, commit: 399, applied: 0, lastindex: 399, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-17T01:43:46.941503Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-17T01:43:46.972405Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":391}
	{"level":"info","ts":"2024-07-17T01:43:47.000508Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-07-17T01:43:47.010037Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"7cceac74c4078a69","timeout":"7s"}
	{"level":"info","ts":"2024-07-17T01:43:47.01459Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"7cceac74c4078a69"}
	{"level":"info","ts":"2024-07-17T01:43:47.014686Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"7cceac74c4078a69","local-server-version":"3.5.14","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-17T01:43:47.014981Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-17T01:43:47.015163Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T01:43:47.01523Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T01:43:47.015246Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T01:43:47.015568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7cceac74c4078a69 switched to configuration voters=(8993315123410471529)"}
	{"level":"info","ts":"2024-07-17T01:43:47.015682Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"650b7a69be526515","local-member-id":"7cceac74c4078a69","added-peer-id":"7cceac74c4078a69","added-peer-peer-urls":["https://192.168.72.73:2380"]}
	{"level":"info","ts":"2024-07-17T01:43:47.015829Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"650b7a69be526515","local-member-id":"7cceac74c4078a69","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:43:47.015885Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:43:47.026436Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-17T01:43:47.038595Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T01:43:47.038882Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.72.73:2380"}
	{"level":"info","ts":"2024-07-17T01:43:47.039045Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.72.73:2380"}
	{"level":"info","ts":"2024-07-17T01:43:47.045516Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T01:43:47.045453Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"7cceac74c4078a69","initial-advertise-peer-urls":["https://192.168.72.73:2380"],"listen-peer-urls":["https://192.168.72.73:2380"],"advertise-client-urls":["https://192.168.72.73:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.73:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	
	
	==> etcd [737e35cedcfbb4949b607cfd9e9577ce2f53cdd962fd54cc074fbc90282ee956] <==
	{"level":"info","ts":"2024-07-17T01:43:50.744129Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-17T01:43:50.746554Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"650b7a69be526515","local-member-id":"7cceac74c4078a69","added-peer-id":"7cceac74c4078a69","added-peer-peer-urls":["https://192.168.72.73:2380"]}
	{"level":"info","ts":"2024-07-17T01:43:50.746684Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"650b7a69be526515","local-member-id":"7cceac74c4078a69","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:43:50.746728Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:43:50.750616Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T01:43:50.750976Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"7cceac74c4078a69","initial-advertise-peer-urls":["https://192.168.72.73:2380"],"listen-peer-urls":["https://192.168.72.73:2380"],"advertise-client-urls":["https://192.168.72.73:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.73:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T01:43:50.751035Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T01:43:50.751149Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.72.73:2380"}
	{"level":"info","ts":"2024-07-17T01:43:50.751186Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.72.73:2380"}
	{"level":"info","ts":"2024-07-17T01:43:51.811871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7cceac74c4078a69 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T01:43:51.811926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7cceac74c4078a69 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T01:43:51.811965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7cceac74c4078a69 received MsgPreVoteResp from 7cceac74c4078a69 at term 2"}
	{"level":"info","ts":"2024-07-17T01:43:51.811987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7cceac74c4078a69 became candidate at term 3"}
	{"level":"info","ts":"2024-07-17T01:43:51.811998Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7cceac74c4078a69 received MsgVoteResp from 7cceac74c4078a69 at term 3"}
	{"level":"info","ts":"2024-07-17T01:43:51.812007Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7cceac74c4078a69 became leader at term 3"}
	{"level":"info","ts":"2024-07-17T01:43:51.812032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7cceac74c4078a69 elected leader 7cceac74c4078a69 at term 3"}
	{"level":"info","ts":"2024-07-17T01:43:51.816967Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:43:51.817888Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-17T01:43:51.818764Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.73:2379"}
	{"level":"info","ts":"2024-07-17T01:43:51.819027Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:43:51.819713Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-17T01:43:51.820464Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T01:43:51.816923Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7cceac74c4078a69","local-member-attributes":"{Name:kubernetes-upgrade-572332 ClientURLs:[https://192.168.72.73:2379]}","request-path":"/0/members/7cceac74c4078a69/attributes","cluster-id":"650b7a69be526515","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T01:43:51.828811Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T01:43:51.828872Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 01:43:58 up 1 min,  0 users,  load average: 2.20, 0.60, 0.20
	Linux kubernetes-upgrade-572332 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [59be0c07de111877b9ba86fb5553a578f611dd5bcc911f3a4a9fe7c471d8707a] <==
	I0717 01:43:53.566572       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 01:43:53.566614       1 policy_source.go:224] refreshing policies
	I0717 01:43:53.590976       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 01:43:53.593494       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0717 01:43:53.594649       1 aggregator.go:171] initial CRD sync complete...
	I0717 01:43:53.594914       1 autoregister_controller.go:144] Starting autoregister controller
	I0717 01:43:53.594996       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 01:43:53.595092       1 cache.go:39] Caches are synced for autoregister controller
	I0717 01:43:53.594916       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 01:43:53.616630       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 01:43:53.642662       1 shared_informer.go:320] Caches are synced for configmaps
	I0717 01:43:53.642912       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0717 01:43:53.642946       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0717 01:43:53.643990       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 01:43:53.654184       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0717 01:43:53.654415       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0717 01:43:53.710148       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0717 01:43:54.374248       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 01:43:54.470045       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 01:43:55.733252       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 01:43:55.764354       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 01:43:55.860012       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 01:43:55.955460       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 01:43:55.963628       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 01:43:57.999991       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [d17a581414bad11fca362a486a2564b74e7b38cec46d74380831f213bd4d0640] <==
	
	
	==> kube-controller-manager [35b6fe9631f123ce54faf52fbdb9334efff1af9648656e141548e2d9aebe0b66] <==
	I0717 01:43:47.802035       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-controller-manager [4ef3f3879d730c3633847746ed042965d2b610156fe27d66410e84fb21bdeb31] <==
	I0717 01:43:58.033250       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0717 01:43:58.033489       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-572332"
	I0717 01:43:58.033547       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0717 01:43:58.039453       1 shared_informer.go:320] Caches are synced for PVC protection
	I0717 01:43:58.039519       1 shared_informer.go:320] Caches are synced for ephemeral
	I0717 01:43:58.040603       1 shared_informer.go:320] Caches are synced for daemon sets
	I0717 01:43:58.050989       1 shared_informer.go:320] Caches are synced for endpoint
	I0717 01:43:58.073712       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0717 01:43:58.081575       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0717 01:43:58.081911       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0717 01:43:58.082563       1 shared_informer.go:320] Caches are synced for cronjob
	I0717 01:43:58.099609       1 shared_informer.go:320] Caches are synced for persistent volume
	I0717 01:43:58.135866       1 shared_informer.go:320] Caches are synced for job
	I0717 01:43:58.137148       1 shared_informer.go:320] Caches are synced for attach detach
	I0717 01:43:58.145415       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 01:43:58.145448       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0717 01:43:58.149578       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 01:43:58.164062       1 shared_informer.go:320] Caches are synced for PV protection
	I0717 01:43:58.169401       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 01:43:58.179076       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 01:43:58.183493       1 shared_informer.go:320] Caches are synced for deployment
	I0717 01:43:58.188414       1 shared_informer.go:320] Caches are synced for disruption
	I0717 01:43:58.196277       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0717 01:43:58.229745       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="33.301832ms"
	I0717 01:43:58.230420       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="285.387µs"
	
	
	==> kube-proxy [22dbd66eadb0f9650ec3edc90f7644500e02d768b15c28da609203d385beb5b5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0717 01:43:54.664452       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0717 01:43:54.689794       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.72.73"]
	E0717 01:43:54.689881       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0717 01:43:54.829960       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0717 01:43:54.830074       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 01:43:54.830118       1 server_linux.go:170] "Using iptables Proxier"
	I0717 01:43:54.849574       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0717 01:43:54.849974       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0717 01:43:54.850031       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:43:54.866857       1 config.go:197] "Starting service config controller"
	I0717 01:43:54.866874       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 01:43:54.866896       1 config.go:104] "Starting endpoint slice config controller"
	I0717 01:43:54.866900       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 01:43:54.867814       1 config.go:326] "Starting node config controller"
	I0717 01:43:54.867823       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 01:43:54.968088       1 shared_informer.go:320] Caches are synced for service config
	I0717 01:43:54.968205       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 01:43:54.968388       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [5966c8000b93b1c0962b84814e2110437e486e84ecdf442d38541874a44873be] <==
	I0717 01:43:47.300694       1 server_linux.go:67] "Using iptables proxy"
	E0717 01:43:47.324923       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0717 01:43:47.372621       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0717 01:43:47.374194       1 server.go:671] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-572332\": dial tcp 192.168.72.73:8443: connect: connection refused"
	
	
	==> kube-scheduler [74ee2f9e65a6f30cc96ad4b1155f36491d949aac8ee6c20dd804130dbe6414d2] <==
	I0717 01:43:51.638572       1 serving.go:386] Generated self-signed cert in-memory
	W0717 01:43:53.573090       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 01:43:53.573267       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 01:43:53.575161       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 01:43:53.575256       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 01:43:53.669644       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0717 01:43:53.669781       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:43:53.673193       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0717 01:43:53.673821       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 01:43:53.674015       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 01:43:53.674868       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0717 01:43:53.775575       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ed46e4b3937b1defb6500b7e739e2e6405eaba172b5f7257a39ad610e27046ec] <==
	
	
	==> kubelet <==
	Jul 17 01:43:50 kubernetes-upgrade-572332 kubelet[3479]: I0717 01:43:50.042156    3479 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b44cf8ecf986c0cee7675f3968d7e091-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-572332\" (UID: \"b44cf8ecf986c0cee7675f3968d7e091\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-572332"
	Jul 17 01:43:50 kubernetes-upgrade-572332 kubelet[3479]: I0717 01:43:50.072515    3479 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-572332"
	Jul 17 01:43:50 kubernetes-upgrade-572332 kubelet[3479]: E0717 01:43:50.073266    3479 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.73:8443: connect: connection refused" node="kubernetes-upgrade-572332"
	Jul 17 01:43:50 kubernetes-upgrade-572332 kubelet[3479]: I0717 01:43:50.257958    3479 scope.go:117] "RemoveContainer" containerID="d17a581414bad11fca362a486a2564b74e7b38cec46d74380831f213bd4d0640"
	Jul 17 01:43:50 kubernetes-upgrade-572332 kubelet[3479]: I0717 01:43:50.259994    3479 scope.go:117] "RemoveContainer" containerID="35b6fe9631f123ce54faf52fbdb9334efff1af9648656e141548e2d9aebe0b66"
	Jul 17 01:43:50 kubernetes-upgrade-572332 kubelet[3479]: I0717 01:43:50.263282    3479 scope.go:117] "RemoveContainer" containerID="ed46e4b3937b1defb6500b7e739e2e6405eaba172b5f7257a39ad610e27046ec"
	Jul 17 01:43:50 kubernetes-upgrade-572332 kubelet[3479]: I0717 01:43:50.263873    3479 scope.go:117] "RemoveContainer" containerID="48340398920c2600b89850ae66b7f9a93f7cb12cd0ea47452bd8ca89566c3529"
	Jul 17 01:43:50 kubernetes-upgrade-572332 kubelet[3479]: E0717 01:43:50.375962    3479 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-572332?timeout=10s\": dial tcp 192.168.72.73:8443: connect: connection refused" interval="800ms"
	Jul 17 01:43:50 kubernetes-upgrade-572332 kubelet[3479]: I0717 01:43:50.475415    3479 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-572332"
	Jul 17 01:43:50 kubernetes-upgrade-572332 kubelet[3479]: E0717 01:43:50.476620    3479 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.73:8443: connect: connection refused" node="kubernetes-upgrade-572332"
	Jul 17 01:43:50 kubernetes-upgrade-572332 kubelet[3479]: W0717 01:43:50.811669    3479 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.72.73:8443: connect: connection refused
	Jul 17 01:43:50 kubernetes-upgrade-572332 kubelet[3479]: E0717 01:43:50.811737    3479 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.72.73:8443: connect: connection refused" logger="UnhandledError"
	Jul 17 01:43:51 kubernetes-upgrade-572332 kubelet[3479]: I0717 01:43:51.281833    3479 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-572332"
	Jul 17 01:43:53 kubernetes-upgrade-572332 kubelet[3479]: I0717 01:43:53.664165    3479 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-572332"
	Jul 17 01:43:53 kubernetes-upgrade-572332 kubelet[3479]: I0717 01:43:53.664882    3479 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-572332"
	Jul 17 01:43:53 kubernetes-upgrade-572332 kubelet[3479]: I0717 01:43:53.665070    3479 kuberuntime_manager.go:1524] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 17 01:43:53 kubernetes-upgrade-572332 kubelet[3479]: I0717 01:43:53.666381    3479 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 17 01:43:53 kubernetes-upgrade-572332 kubelet[3479]: I0717 01:43:53.742494    3479 apiserver.go:52] "Watching apiserver"
	Jul 17 01:43:53 kubernetes-upgrade-572332 kubelet[3479]: I0717 01:43:53.779434    3479 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a53f60fc-0f8e-4bb9-8c37-fb26f73cfcd2-lib-modules\") pod \"kube-proxy-pn9q6\" (UID: \"a53f60fc-0f8e-4bb9-8c37-fb26f73cfcd2\") " pod="kube-system/kube-proxy-pn9q6"
	Jul 17 01:43:53 kubernetes-upgrade-572332 kubelet[3479]: I0717 01:43:53.779511    3479 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/812bcb2f-554a-4dcf-a663-9e56a8fb0d91-tmp\") pod \"storage-provisioner\" (UID: \"812bcb2f-554a-4dcf-a663-9e56a8fb0d91\") " pod="kube-system/storage-provisioner"
	Jul 17 01:43:53 kubernetes-upgrade-572332 kubelet[3479]: I0717 01:43:53.780077    3479 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a53f60fc-0f8e-4bb9-8c37-fb26f73cfcd2-xtables-lock\") pod \"kube-proxy-pn9q6\" (UID: \"a53f60fc-0f8e-4bb9-8c37-fb26f73cfcd2\") " pod="kube-system/kube-proxy-pn9q6"
	Jul 17 01:43:53 kubernetes-upgrade-572332 kubelet[3479]: I0717 01:43:53.781907    3479 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jul 17 01:43:54 kubernetes-upgrade-572332 kubelet[3479]: I0717 01:43:54.053644    3479 scope.go:117] "RemoveContainer" containerID="c3e5ec84c8d794a27992b18e1ce9bb2898c8c69114e1d088098933b9bf5b7b1e"
	Jul 17 01:43:54 kubernetes-upgrade-572332 kubelet[3479]: I0717 01:43:54.054675    3479 scope.go:117] "RemoveContainer" containerID="5966c8000b93b1c0962b84814e2110437e486e84ecdf442d38541874a44873be"
	Jul 17 01:43:57 kubernetes-upgrade-572332 kubelet[3479]: I0717 01:43:57.012800    3479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [b34eb26d449b6c082d26c56e412eeecd2df3d62c23e7d28ff6eef881d307a1eb] <==
	I0717 01:43:54.300795       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 01:43:54.338529       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 01:43:54.338774       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 01:43:54.402210       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 01:43:54.403514       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"afb655f6-bd2b-4d9a-8d53-8ce6d5ab8071", APIVersion:"v1", ResourceVersion:"404", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-572332_d2415ff7-6e6c-4a63-a1fa-aa60f7f1b946 became leader
	I0717 01:43:54.404183       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-572332_d2415ff7-6e6c-4a63-a1fa-aa60f7f1b946!
	I0717 01:43:54.504502       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-572332_d2415ff7-6e6c-4a63-a1fa-aa60f7f1b946!
	
	
	==> storage-provisioner [c3e5ec84c8d794a27992b18e1ce9bb2898c8c69114e1d088098933b9bf5b7b1e] <==
	I0717 01:43:47.159969       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0717 01:43:47.199140       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:43:57.493018   60567 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19264-3908/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
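Note on the stderr above: "bufio.Scanner: token too long" is bufio.ErrTooLong from Go's standard library. A Scanner refuses any single token (here, a line of lastStart.txt) longer than bufio.MaxScanTokenSize, 64 KiB by default, which is why the log reader gives up on the oversized file. The sketch below only illustrates that standard-library behavior and the usual workaround of enlarging the buffer with Scanner.Buffer; the file path and the 10 MiB cap are made-up illustration values, not taken from the minikube source.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Hypothetical path, used only to demonstrate the error.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// By default a Scanner stops on any line longer than
	// bufio.MaxScanTokenSize (64 KiB) and sc.Err() reports
	// bufio.ErrTooLong ("bufio.Scanner: token too long").
	// Supplying a larger buffer raises that limit; 10 MiB is
	// an arbitrary example cap.
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "reading lastStart.txt:", err)
	}
}

An alternative that avoids the per-line cap entirely is to read with bufio.Reader.ReadString('\n'), which grows its buffer as needed.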
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-572332 -n kubernetes-upgrade-572332
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-572332 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-572332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-572332
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-572332: (1.105050026s)
--- FAIL: TestKubernetesUpgrade (350.10s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (37.94s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-056024 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-056024 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (33.689497583s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-056024] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19264
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-056024" primary control-plane node in "pause-056024" cluster
	* Updating the running kvm2 "pause-056024" VM ...
	* Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-056024" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 01:42:31.918769   56553 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:42:31.919015   56553 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:42:31.919024   56553 out.go:304] Setting ErrFile to fd 2...
	I0717 01:42:31.919031   56553 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:42:31.919233   56553 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 01:42:31.919745   56553 out.go:298] Setting JSON to false
	I0717 01:42:31.920696   56553 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5094,"bootTime":1721175458,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:42:31.920749   56553 start.go:139] virtualization: kvm guest
	I0717 01:42:31.923081   56553 out.go:177] * [pause-056024] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:42:31.924733   56553 notify.go:220] Checking for updates...
	I0717 01:42:31.924795   56553 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 01:42:31.926462   56553 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:42:31.928039   56553 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:42:31.929366   56553 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 01:42:31.930667   56553 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:42:31.931919   56553 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:42:31.933649   56553 config.go:182] Loaded profile config "pause-056024": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:42:31.934194   56553 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:42:31.934254   56553 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:42:31.949029   56553 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39801
	I0717 01:42:31.949396   56553 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:42:31.949872   56553 main.go:141] libmachine: Using API Version  1
	I0717 01:42:31.949893   56553 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:42:31.950190   56553 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:42:31.950362   56553 main.go:141] libmachine: (pause-056024) Calling .DriverName
	I0717 01:42:31.950670   56553 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:42:31.950966   56553 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:42:31.951009   56553 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:42:31.965033   56553 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34545
	I0717 01:42:31.965409   56553 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:42:31.965896   56553 main.go:141] libmachine: Using API Version  1
	I0717 01:42:31.965918   56553 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:42:31.966183   56553 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:42:31.966345   56553 main.go:141] libmachine: (pause-056024) Calling .DriverName
	I0717 01:42:32.003357   56553 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 01:42:32.004679   56553 start.go:297] selected driver: kvm2
	I0717 01:42:32.004702   56553 start.go:901] validating driver "kvm2" against &{Name:pause-056024 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.30.2 ClusterName:pause-056024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-devi
ce-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:42:32.004850   56553 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:42:32.005173   56553 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:42:32.005236   56553 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19264-3908/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:42:32.020520   56553 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:42:32.021231   56553 cni.go:84] Creating CNI manager for ""
	I0717 01:42:32.021247   56553 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:42:32.021320   56553 start.go:340] cluster config:
	{Name:pause-056024 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:pause-056024 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:f
alse registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:42:32.021460   56553 iso.go:125] acquiring lock: {Name:mk1e382ea32906eee794c9b90164432d2562b456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:42:32.023288   56553 out.go:177] * Starting "pause-056024" primary control-plane node in "pause-056024" cluster
	I0717 01:42:32.024505   56553 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:42:32.024549   56553 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 01:42:32.024559   56553 cache.go:56] Caching tarball of preloaded images
	I0717 01:42:32.024622   56553 preload.go:172] Found /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 01:42:32.024632   56553 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 01:42:32.024743   56553 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/pause-056024/config.json ...
	I0717 01:42:32.024914   56553 start.go:360] acquireMachinesLock for pause-056024: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:42:32.024952   56553 start.go:364] duration metric: took 21.385µs to acquireMachinesLock for "pause-056024"
	I0717 01:42:32.024965   56553 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:42:32.024972   56553 fix.go:54] fixHost starting: 
	I0717 01:42:32.025255   56553 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:42:32.025290   56553 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:42:32.039371   56553 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44885
	I0717 01:42:32.039733   56553 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:42:32.040137   56553 main.go:141] libmachine: Using API Version  1
	I0717 01:42:32.040153   56553 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:42:32.040444   56553 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:42:32.040625   56553 main.go:141] libmachine: (pause-056024) Calling .DriverName
	I0717 01:42:32.040816   56553 main.go:141] libmachine: (pause-056024) Calling .GetState
	I0717 01:42:32.042301   56553 fix.go:112] recreateIfNeeded on pause-056024: state=Running err=<nil>
	W0717 01:42:32.042322   56553 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:42:32.045125   56553 out.go:177] * Updating the running kvm2 "pause-056024" VM ...
	I0717 01:42:32.046559   56553 machine.go:94] provisionDockerMachine start ...
	I0717 01:42:32.046597   56553 main.go:141] libmachine: (pause-056024) Calling .DriverName
	I0717 01:42:32.046862   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHHostname
	I0717 01:42:32.049632   56553 main.go:141] libmachine: (pause-056024) DBG | domain pause-056024 has defined MAC address 52:54:00:77:2d:7b in network mk-pause-056024
	I0717 01:42:32.050106   56553 main.go:141] libmachine: (pause-056024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:2d:7b", ip: ""} in network mk-pause-056024: {Iface:virbr3 ExpiryTime:2024-07-17 02:41:09 +0000 UTC Type:0 Mac:52:54:00:77:2d:7b Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-056024 Clientid:01:52:54:00:77:2d:7b}
	I0717 01:42:32.050136   56553 main.go:141] libmachine: (pause-056024) DBG | domain pause-056024 has defined IP address 192.168.39.97 and MAC address 52:54:00:77:2d:7b in network mk-pause-056024
	I0717 01:42:32.050309   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHPort
	I0717 01:42:32.050483   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHKeyPath
	I0717 01:42:32.050641   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHKeyPath
	I0717 01:42:32.050766   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHUsername
	I0717 01:42:32.050896   56553 main.go:141] libmachine: Using SSH client type: native
	I0717 01:42:32.051109   56553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0717 01:42:32.051121   56553 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:42:32.155138   56553 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-056024
	
	I0717 01:42:32.155158   56553 main.go:141] libmachine: (pause-056024) Calling .GetMachineName
	I0717 01:42:32.155391   56553 buildroot.go:166] provisioning hostname "pause-056024"
	I0717 01:42:32.155413   56553 main.go:141] libmachine: (pause-056024) Calling .GetMachineName
	I0717 01:42:32.155589   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHHostname
	I0717 01:42:32.158225   56553 main.go:141] libmachine: (pause-056024) DBG | domain pause-056024 has defined MAC address 52:54:00:77:2d:7b in network mk-pause-056024
	I0717 01:42:32.158515   56553 main.go:141] libmachine: (pause-056024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:2d:7b", ip: ""} in network mk-pause-056024: {Iface:virbr3 ExpiryTime:2024-07-17 02:41:09 +0000 UTC Type:0 Mac:52:54:00:77:2d:7b Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-056024 Clientid:01:52:54:00:77:2d:7b}
	I0717 01:42:32.158528   56553 main.go:141] libmachine: (pause-056024) DBG | domain pause-056024 has defined IP address 192.168.39.97 and MAC address 52:54:00:77:2d:7b in network mk-pause-056024
	I0717 01:42:32.158700   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHPort
	I0717 01:42:32.158885   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHKeyPath
	I0717 01:42:32.159036   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHKeyPath
	I0717 01:42:32.159194   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHUsername
	I0717 01:42:32.159367   56553 main.go:141] libmachine: Using SSH client type: native
	I0717 01:42:32.159534   56553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0717 01:42:32.159546   56553 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-056024 && echo "pause-056024" | sudo tee /etc/hostname
	I0717 01:42:32.279094   56553 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-056024
	
	I0717 01:42:32.279122   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHHostname
	I0717 01:42:32.282093   56553 main.go:141] libmachine: (pause-056024) DBG | domain pause-056024 has defined MAC address 52:54:00:77:2d:7b in network mk-pause-056024
	I0717 01:42:32.282444   56553 main.go:141] libmachine: (pause-056024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:2d:7b", ip: ""} in network mk-pause-056024: {Iface:virbr3 ExpiryTime:2024-07-17 02:41:09 +0000 UTC Type:0 Mac:52:54:00:77:2d:7b Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-056024 Clientid:01:52:54:00:77:2d:7b}
	I0717 01:42:32.282480   56553 main.go:141] libmachine: (pause-056024) DBG | domain pause-056024 has defined IP address 192.168.39.97 and MAC address 52:54:00:77:2d:7b in network mk-pause-056024
	I0717 01:42:32.282690   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHPort
	I0717 01:42:32.282889   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHKeyPath
	I0717 01:42:32.283076   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHKeyPath
	I0717 01:42:32.283222   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHUsername
	I0717 01:42:32.283385   56553 main.go:141] libmachine: Using SSH client type: native
	I0717 01:42:32.283586   56553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0717 01:42:32.283603   56553 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-056024' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-056024/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-056024' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:42:32.392113   56553 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:42:32.392138   56553 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:42:32.392156   56553 buildroot.go:174] setting up certificates
	I0717 01:42:32.392165   56553 provision.go:84] configureAuth start
	I0717 01:42:32.392173   56553 main.go:141] libmachine: (pause-056024) Calling .GetMachineName
	I0717 01:42:32.392428   56553 main.go:141] libmachine: (pause-056024) Calling .GetIP
	I0717 01:42:32.395346   56553 main.go:141] libmachine: (pause-056024) DBG | domain pause-056024 has defined MAC address 52:54:00:77:2d:7b in network mk-pause-056024
	I0717 01:42:32.395709   56553 main.go:141] libmachine: (pause-056024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:2d:7b", ip: ""} in network mk-pause-056024: {Iface:virbr3 ExpiryTime:2024-07-17 02:41:09 +0000 UTC Type:0 Mac:52:54:00:77:2d:7b Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-056024 Clientid:01:52:54:00:77:2d:7b}
	I0717 01:42:32.395738   56553 main.go:141] libmachine: (pause-056024) DBG | domain pause-056024 has defined IP address 192.168.39.97 and MAC address 52:54:00:77:2d:7b in network mk-pause-056024
	I0717 01:42:32.395914   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHHostname
	I0717 01:42:32.398123   56553 main.go:141] libmachine: (pause-056024) DBG | domain pause-056024 has defined MAC address 52:54:00:77:2d:7b in network mk-pause-056024
	I0717 01:42:32.398477   56553 main.go:141] libmachine: (pause-056024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:2d:7b", ip: ""} in network mk-pause-056024: {Iface:virbr3 ExpiryTime:2024-07-17 02:41:09 +0000 UTC Type:0 Mac:52:54:00:77:2d:7b Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-056024 Clientid:01:52:54:00:77:2d:7b}
	I0717 01:42:32.398500   56553 main.go:141] libmachine: (pause-056024) DBG | domain pause-056024 has defined IP address 192.168.39.97 and MAC address 52:54:00:77:2d:7b in network mk-pause-056024
	I0717 01:42:32.398680   56553 provision.go:143] copyHostCerts
	I0717 01:42:32.398812   56553 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:42:32.398838   56553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:42:32.398900   56553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:42:32.398996   56553 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:42:32.399005   56553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:42:32.399024   56553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:42:32.399076   56553 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:42:32.399082   56553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:42:32.399098   56553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:42:32.399164   56553 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.pause-056024 san=[127.0.0.1 192.168.39.97 localhost minikube pause-056024]
	I0717 01:42:32.673190   56553 provision.go:177] copyRemoteCerts
	I0717 01:42:32.673243   56553 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:42:32.673265   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHHostname
	I0717 01:42:32.676117   56553 main.go:141] libmachine: (pause-056024) DBG | domain pause-056024 has defined MAC address 52:54:00:77:2d:7b in network mk-pause-056024
	I0717 01:42:32.676502   56553 main.go:141] libmachine: (pause-056024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:2d:7b", ip: ""} in network mk-pause-056024: {Iface:virbr3 ExpiryTime:2024-07-17 02:41:09 +0000 UTC Type:0 Mac:52:54:00:77:2d:7b Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-056024 Clientid:01:52:54:00:77:2d:7b}
	I0717 01:42:32.676535   56553 main.go:141] libmachine: (pause-056024) DBG | domain pause-056024 has defined IP address 192.168.39.97 and MAC address 52:54:00:77:2d:7b in network mk-pause-056024
	I0717 01:42:32.676634   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHPort
	I0717 01:42:32.676869   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHKeyPath
	I0717 01:42:32.677069   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHUsername
	I0717 01:42:32.677228   56553 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/pause-056024/id_rsa Username:docker}
	I0717 01:42:32.762128   56553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:42:32.793661   56553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0717 01:42:32.821301   56553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 01:42:32.847381   56553 provision.go:87] duration metric: took 455.203994ms to configureAuth
	I0717 01:42:32.847471   56553 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:42:32.847726   56553 config.go:182] Loaded profile config "pause-056024": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:42:32.847841   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHHostname
	I0717 01:42:32.850715   56553 main.go:141] libmachine: (pause-056024) DBG | domain pause-056024 has defined MAC address 52:54:00:77:2d:7b in network mk-pause-056024
	I0717 01:42:32.851087   56553 main.go:141] libmachine: (pause-056024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:2d:7b", ip: ""} in network mk-pause-056024: {Iface:virbr3 ExpiryTime:2024-07-17 02:41:09 +0000 UTC Type:0 Mac:52:54:00:77:2d:7b Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-056024 Clientid:01:52:54:00:77:2d:7b}
	I0717 01:42:32.851125   56553 main.go:141] libmachine: (pause-056024) DBG | domain pause-056024 has defined IP address 192.168.39.97 and MAC address 52:54:00:77:2d:7b in network mk-pause-056024
	I0717 01:42:32.851328   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHPort
	I0717 01:42:32.851544   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHKeyPath
	I0717 01:42:32.851721   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHKeyPath
	I0717 01:42:32.851855   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHUsername
	I0717 01:42:32.852041   56553 main.go:141] libmachine: Using SSH client type: native
	I0717 01:42:32.852231   56553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0717 01:42:32.852256   56553 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:42:38.379665   56553 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:42:38.379687   56553 machine.go:97] duration metric: took 6.333114185s to provisionDockerMachine
	I0717 01:42:38.379700   56553 start.go:293] postStartSetup for "pause-056024" (driver="kvm2")
	I0717 01:42:38.379712   56553 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:42:38.379739   56553 main.go:141] libmachine: (pause-056024) Calling .DriverName
	I0717 01:42:38.380130   56553 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:42:38.380159   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHHostname
	I0717 01:42:38.383119   56553 main.go:141] libmachine: (pause-056024) DBG | domain pause-056024 has defined MAC address 52:54:00:77:2d:7b in network mk-pause-056024
	I0717 01:42:38.383493   56553 main.go:141] libmachine: (pause-056024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:2d:7b", ip: ""} in network mk-pause-056024: {Iface:virbr3 ExpiryTime:2024-07-17 02:41:09 +0000 UTC Type:0 Mac:52:54:00:77:2d:7b Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-056024 Clientid:01:52:54:00:77:2d:7b}
	I0717 01:42:38.383519   56553 main.go:141] libmachine: (pause-056024) DBG | domain pause-056024 has defined IP address 192.168.39.97 and MAC address 52:54:00:77:2d:7b in network mk-pause-056024
	I0717 01:42:38.383639   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHPort
	I0717 01:42:38.383828   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHKeyPath
	I0717 01:42:38.384020   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHUsername
	I0717 01:42:38.384143   56553 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/pause-056024/id_rsa Username:docker}
	I0717 01:42:38.473865   56553 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:42:38.478083   56553 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:42:38.478105   56553 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:42:38.478163   56553 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:42:38.478245   56553 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:42:38.478341   56553 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:42:38.489126   56553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:42:38.516278   56553 start.go:296] duration metric: took 136.564128ms for postStartSetup
	I0717 01:42:38.516313   56553 fix.go:56] duration metric: took 6.491340778s for fixHost
	I0717 01:42:38.516359   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHHostname
	I0717 01:42:38.519189   56553 main.go:141] libmachine: (pause-056024) DBG | domain pause-056024 has defined MAC address 52:54:00:77:2d:7b in network mk-pause-056024
	I0717 01:42:38.519495   56553 main.go:141] libmachine: (pause-056024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:2d:7b", ip: ""} in network mk-pause-056024: {Iface:virbr3 ExpiryTime:2024-07-17 02:41:09 +0000 UTC Type:0 Mac:52:54:00:77:2d:7b Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-056024 Clientid:01:52:54:00:77:2d:7b}
	I0717 01:42:38.519522   56553 main.go:141] libmachine: (pause-056024) DBG | domain pause-056024 has defined IP address 192.168.39.97 and MAC address 52:54:00:77:2d:7b in network mk-pause-056024
	I0717 01:42:38.519660   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHPort
	I0717 01:42:38.519867   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHKeyPath
	I0717 01:42:38.520058   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHKeyPath
	I0717 01:42:38.520183   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHUsername
	I0717 01:42:38.520341   56553 main.go:141] libmachine: Using SSH client type: native
	I0717 01:42:38.520502   56553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0717 01:42:38.520513   56553 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 01:42:38.623210   56553 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721180558.612538779
	
	I0717 01:42:38.623230   56553 fix.go:216] guest clock: 1721180558.612538779
	I0717 01:42:38.623236   56553 fix.go:229] Guest: 2024-07-17 01:42:38.612538779 +0000 UTC Remote: 2024-07-17 01:42:38.516316513 +0000 UTC m=+6.632259745 (delta=96.222266ms)
	I0717 01:42:38.623290   56553 fix.go:200] guest clock delta is within tolerance: 96.222266ms
	I0717 01:42:38.623298   56553 start.go:83] releasing machines lock for "pause-056024", held for 6.598336253s
	I0717 01:42:38.623329   56553 main.go:141] libmachine: (pause-056024) Calling .DriverName
	I0717 01:42:38.623561   56553 main.go:141] libmachine: (pause-056024) Calling .GetIP
	I0717 01:42:38.626320   56553 main.go:141] libmachine: (pause-056024) DBG | domain pause-056024 has defined MAC address 52:54:00:77:2d:7b in network mk-pause-056024
	I0717 01:42:38.626640   56553 main.go:141] libmachine: (pause-056024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:2d:7b", ip: ""} in network mk-pause-056024: {Iface:virbr3 ExpiryTime:2024-07-17 02:41:09 +0000 UTC Type:0 Mac:52:54:00:77:2d:7b Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-056024 Clientid:01:52:54:00:77:2d:7b}
	I0717 01:42:38.626674   56553 main.go:141] libmachine: (pause-056024) DBG | domain pause-056024 has defined IP address 192.168.39.97 and MAC address 52:54:00:77:2d:7b in network mk-pause-056024
	I0717 01:42:38.626837   56553 main.go:141] libmachine: (pause-056024) Calling .DriverName
	I0717 01:42:38.627350   56553 main.go:141] libmachine: (pause-056024) Calling .DriverName
	I0717 01:42:38.627545   56553 main.go:141] libmachine: (pause-056024) Calling .DriverName
	I0717 01:42:38.627630   56553 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:42:38.627666   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHHostname
	I0717 01:42:38.627793   56553 ssh_runner.go:195] Run: cat /version.json
	I0717 01:42:38.627814   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHHostname
	I0717 01:42:38.630295   56553 main.go:141] libmachine: (pause-056024) DBG | domain pause-056024 has defined MAC address 52:54:00:77:2d:7b in network mk-pause-056024
	I0717 01:42:38.630716   56553 main.go:141] libmachine: (pause-056024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:2d:7b", ip: ""} in network mk-pause-056024: {Iface:virbr3 ExpiryTime:2024-07-17 02:41:09 +0000 UTC Type:0 Mac:52:54:00:77:2d:7b Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-056024 Clientid:01:52:54:00:77:2d:7b}
	I0717 01:42:38.630761   56553 main.go:141] libmachine: (pause-056024) DBG | domain pause-056024 has defined IP address 192.168.39.97 and MAC address 52:54:00:77:2d:7b in network mk-pause-056024
	I0717 01:42:38.630784   56553 main.go:141] libmachine: (pause-056024) DBG | domain pause-056024 has defined MAC address 52:54:00:77:2d:7b in network mk-pause-056024
	I0717 01:42:38.630996   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHPort
	I0717 01:42:38.631167   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHKeyPath
	I0717 01:42:38.631194   56553 main.go:141] libmachine: (pause-056024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:2d:7b", ip: ""} in network mk-pause-056024: {Iface:virbr3 ExpiryTime:2024-07-17 02:41:09 +0000 UTC Type:0 Mac:52:54:00:77:2d:7b Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-056024 Clientid:01:52:54:00:77:2d:7b}
	I0717 01:42:38.631223   56553 main.go:141] libmachine: (pause-056024) DBG | domain pause-056024 has defined IP address 192.168.39.97 and MAC address 52:54:00:77:2d:7b in network mk-pause-056024
	I0717 01:42:38.631299   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHUsername
	I0717 01:42:38.631365   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHPort
	I0717 01:42:38.631434   56553 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/pause-056024/id_rsa Username:docker}
	I0717 01:42:38.631528   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHKeyPath
	I0717 01:42:38.631647   56553 main.go:141] libmachine: (pause-056024) Calling .GetSSHUsername
	I0717 01:42:38.631787   56553 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/pause-056024/id_rsa Username:docker}
	I0717 01:42:38.739483   56553 ssh_runner.go:195] Run: systemctl --version
	I0717 01:42:38.791327   56553 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:42:39.079307   56553 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:42:39.094024   56553 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:42:39.094100   56553 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:42:39.164573   56553 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 01:42:39.164594   56553 start.go:495] detecting cgroup driver to use...
	I0717 01:42:39.164659   56553 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:42:39.212650   56553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:42:39.286956   56553 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:42:39.287018   56553 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:42:39.421168   56553 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:42:39.477986   56553 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:42:39.759594   56553 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:42:40.012087   56553 docker.go:233] disabling docker service ...
	I0717 01:42:40.012168   56553 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:42:40.036413   56553 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:42:40.057885   56553 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:42:40.287653   56553 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:42:40.530858   56553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:42:40.553053   56553 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:42:40.590073   56553 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 01:42:40.590205   56553 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:42:40.608037   56553 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:42:40.608111   56553 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:42:40.625564   56553 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:42:40.639700   56553 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:42:40.662400   56553 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:42:40.679114   56553 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:42:40.694431   56553 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:42:40.709051   56553 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:42:40.724152   56553 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:42:40.736797   56553 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:42:40.748760   56553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:42:40.959343   56553 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:42:41.483175   56553 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:42:41.483251   56553 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:42:41.489241   56553 start.go:563] Will wait 60s for crictl version
	I0717 01:42:41.489301   56553 ssh_runner.go:195] Run: which crictl
	I0717 01:42:41.493230   56553 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:42:41.536883   56553 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:42:41.536964   56553 ssh_runner.go:195] Run: crio --version
	I0717 01:42:41.570086   56553 ssh_runner.go:195] Run: crio --version
	I0717 01:42:41.605211   56553 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 01:42:41.606789   56553 main.go:141] libmachine: (pause-056024) Calling .GetIP
	I0717 01:42:41.609484   56553 main.go:141] libmachine: (pause-056024) DBG | domain pause-056024 has defined MAC address 52:54:00:77:2d:7b in network mk-pause-056024
	I0717 01:42:41.609873   56553 main.go:141] libmachine: (pause-056024) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:2d:7b", ip: ""} in network mk-pause-056024: {Iface:virbr3 ExpiryTime:2024-07-17 02:41:09 +0000 UTC Type:0 Mac:52:54:00:77:2d:7b Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:pause-056024 Clientid:01:52:54:00:77:2d:7b}
	I0717 01:42:41.609893   56553 main.go:141] libmachine: (pause-056024) DBG | domain pause-056024 has defined IP address 192.168.39.97 and MAC address 52:54:00:77:2d:7b in network mk-pause-056024
	I0717 01:42:41.610148   56553 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 01:42:41.614715   56553 kubeadm.go:883] updating cluster {Name:pause-056024 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2
ClusterName:pause-056024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:42:41.614895   56553 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:42:41.614954   56553 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:42:41.662147   56553 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:42:41.662168   56553 crio.go:433] Images already preloaded, skipping extraction
	I0717 01:42:41.662211   56553 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:42:41.696936   56553 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:42:41.696959   56553 cache_images.go:84] Images are preloaded, skipping loading
	I0717 01:42:41.696967   56553 kubeadm.go:934] updating node { 192.168.39.97 8443 v1.30.2 crio true true} ...
	I0717 01:42:41.697099   56553 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-056024 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:pause-056024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:42:41.697175   56553 ssh_runner.go:195] Run: crio config
	I0717 01:42:41.747197   56553 cni.go:84] Creating CNI manager for ""
	I0717 01:42:41.747217   56553 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:42:41.747229   56553 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:42:41.747258   56553 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.97 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-056024 NodeName:pause-056024 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:42:41.747456   56553 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-056024"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:42:41.747526   56553 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 01:42:41.757469   56553 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:42:41.757525   56553 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:42:41.767327   56553 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0717 01:42:41.783883   56553 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:42:41.800644   56553 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0717 01:42:41.817058   56553 ssh_runner.go:195] Run: grep 192.168.39.97	control-plane.minikube.internal$ /etc/hosts
	I0717 01:42:41.821008   56553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:42:42.039274   56553 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:42:42.177306   56553 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/pause-056024 for IP: 192.168.39.97
	I0717 01:42:42.177331   56553 certs.go:194] generating shared ca certs ...
	I0717 01:42:42.177349   56553 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:42:42.177540   56553 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:42:42.177610   56553 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:42:42.177625   56553 certs.go:256] generating profile certs ...
	I0717 01:42:42.177744   56553 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/pause-056024/client.key
	I0717 01:42:42.177821   56553 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/pause-056024/apiserver.key.7f1c6d1f
	I0717 01:42:42.177871   56553 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/pause-056024/proxy-client.key
	I0717 01:42:42.178004   56553 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:42:42.178041   56553 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:42:42.178053   56553 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:42:42.178087   56553 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:42:42.178120   56553 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:42:42.178154   56553 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:42:42.178216   56553 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:42:42.179238   56553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:42:42.280183   56553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:42:42.427613   56553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:42:42.462840   56553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:42:42.505285   56553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/pause-056024/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0717 01:42:42.537657   56553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/pause-056024/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 01:42:42.573805   56553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/pause-056024/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:42:42.605671   56553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/pause-056024/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 01:42:42.639168   56553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:42:42.664714   56553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:42:42.690804   56553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:42:42.716515   56553 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:42:42.733344   56553 ssh_runner.go:195] Run: openssl version
	I0717 01:42:42.738921   56553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:42:42.749126   56553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:42:42.753304   56553 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:42:42.753345   56553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:42:42.758799   56553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:42:42.767508   56553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:42:42.778001   56553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:42:42.782573   56553 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:42:42.782626   56553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:42:42.788985   56553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:42:42.799455   56553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:42:42.810927   56553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:42:42.815498   56553 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:42:42.815556   56553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:42:42.821281   56553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:42:42.830652   56553 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:42:42.835093   56553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:42:42.840699   56553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:42:42.846776   56553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:42:42.852386   56553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:42:42.858131   56553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:42:42.863831   56553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 01:42:42.869339   56553 kubeadm.go:392] StartCluster: {Name:pause-056024 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:pause-056024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:42:42.869466   56553 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:42:42.869512   56553 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:42:42.906945   56553 cri.go:89] found id: "6a13e07f93ced4226e8b9a913fee7b6c0005c42e9d5eea92e97408b95f377984"
	I0717 01:42:42.906974   56553 cri.go:89] found id: "27a9da40fbea12eaae90daba03f07ebda991b7ddd0520039a163a0ebf7d79ec9"
	I0717 01:42:42.906979   56553 cri.go:89] found id: "a70a0091dd4e264f12c365b9e2ba025c18219a4d8c82bdec90fad3fc5028ac42"
	I0717 01:42:42.906984   56553 cri.go:89] found id: "cbeb675206edf1dfe0c9e96d84984ed02c367ae7e20d2271fcb09a7529235140"
	I0717 01:42:42.906988   56553 cri.go:89] found id: "2c21cec71b2cca131a814dd86ac76987274ebfefffe239dfe467aacd88359c17"
	I0717 01:42:42.906992   56553 cri.go:89] found id: "b87f6da726d7902490f3a7b07b32ae2c543449beae034936faf9c3f344593c50"
	I0717 01:42:42.906996   56553 cri.go:89] found id: ""
	I0717 01:42:42.907041   56553 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
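The tail of the stderr excerpt shows minikube enumerating the existing kube-system containers with crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system before deciding how to restart the cluster. A small Go sketch that shells out to crictl with the same flags and collects the returned IDs; it execs locally instead of over SSH, so treat it as an illustration rather than minikube's cri.go:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listKubeSystemContainers runs crictl and returns the IDs of all containers
	// (running or not) whose pod lives in the kube-system namespace.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(string(out), "\n") {
			if id := strings.TrimSpace(line); id != "" {
				ids = append(ids, id)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}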
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-056024 -n pause-056024
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-056024 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-056024 logs -n 25: (1.45087676s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p running-upgrade-777345             | minikube                  | jenkins | v1.26.0 | 17 Jul 24 01:37 UTC | 17 Jul 24 01:39 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| ssh     | cert-options-366095 ssh               | cert-options-366095       | jenkins | v1.33.1 | 17 Jul 24 01:38 UTC | 17 Jul 24 01:38 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-366095 -- sudo        | cert-options-366095       | jenkins | v1.33.1 | 17 Jul 24 01:38 UTC | 17 Jul 24 01:38 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-366095                | cert-options-366095       | jenkins | v1.33.1 | 17 Jul 24 01:38 UTC | 17 Jul 24 01:38 UTC |
	| start   | -p kubernetes-upgrade-572332          | kubernetes-upgrade-572332 | jenkins | v1.33.1 | 17 Jul 24 01:38 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-130517 sudo           | NoKubernetes-130517       | jenkins | v1.33.1 | 17 Jul 24 01:38 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-130517                | NoKubernetes-130517       | jenkins | v1.33.1 | 17 Jul 24 01:38 UTC | 17 Jul 24 01:38 UTC |
	| start   | -p NoKubernetes-130517                | NoKubernetes-130517       | jenkins | v1.33.1 | 17 Jul 24 01:38 UTC | 17 Jul 24 01:39 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-130517 sudo           | NoKubernetes-130517       | jenkins | v1.33.1 | 17 Jul 24 01:39 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-130517                | NoKubernetes-130517       | jenkins | v1.33.1 | 17 Jul 24 01:39 UTC | 17 Jul 24 01:39 UTC |
	| start   | -p stopped-upgrade-156268             | minikube                  | jenkins | v1.26.0 | 17 Jul 24 01:39 UTC | 17 Jul 24 01:40 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| start   | -p running-upgrade-777345             | running-upgrade-777345    | jenkins | v1.33.1 | 17 Jul 24 01:39 UTC | 17 Jul 24 01:41 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-733994             | cert-expiration-733994    | jenkins | v1.33.1 | 17 Jul 24 01:40 UTC | 17 Jul 24 01:40 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-156268 stop           | minikube                  | jenkins | v1.26.0 | 17 Jul 24 01:40 UTC | 17 Jul 24 01:40 UTC |
	| start   | -p stopped-upgrade-156268             | stopped-upgrade-156268    | jenkins | v1.33.1 | 17 Jul 24 01:40 UTC | 17 Jul 24 01:41 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-733994             | cert-expiration-733994    | jenkins | v1.33.1 | 17 Jul 24 01:40 UTC | 17 Jul 24 01:40 UTC |
	| start   | -p pause-056024 --memory=2048         | pause-056024              | jenkins | v1.33.1 | 17 Jul 24 01:40 UTC | 17 Jul 24 01:42 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-156268             | stopped-upgrade-156268    | jenkins | v1.33.1 | 17 Jul 24 01:41 UTC | 17 Jul 24 01:41 UTC |
	| start   | -p auto-894370 --memory=3072          | auto-894370               | jenkins | v1.33.1 | 17 Jul 24 01:41 UTC | 17 Jul 24 01:42 UTC |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-777345             | running-upgrade-777345    | jenkins | v1.33.1 | 17 Jul 24 01:41 UTC | 17 Jul 24 01:41 UTC |
	| start   | -p kindnet-894370                     | kindnet-894370            | jenkins | v1.33.1 | 17 Jul 24 01:41 UTC | 17 Jul 24 01:43 UTC |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-056024                       | pause-056024              | jenkins | v1.33.1 | 17 Jul 24 01:42 UTC | 17 Jul 24 01:43 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-572332          | kubernetes-upgrade-572332 | jenkins | v1.33.1 | 17 Jul 24 01:42 UTC | 17 Jul 24 01:42 UTC |
	| start   | -p kubernetes-upgrade-572332          | kubernetes-upgrade-572332 | jenkins | v1.33.1 | 17 Jul 24 01:42 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p auto-894370 pgrep -a               | auto-894370               | jenkins | v1.33.1 | 17 Jul 24 01:42 UTC | 17 Jul 24 01:42 UTC |
	|         | kubelet                               |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 01:42:46
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 01:42:46.990926   56726 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:42:46.991521   56726 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:42:46.991573   56726 out.go:304] Setting ErrFile to fd 2...
	I0717 01:42:46.991590   56726 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:42:46.992073   56726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 01:42:46.992977   56726 out.go:298] Setting JSON to false
	I0717 01:42:46.993942   56726 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5109,"bootTime":1721175458,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:42:46.994002   56726 start.go:139] virtualization: kvm guest
	I0717 01:42:46.996077   56726 out.go:177] * [kubernetes-upgrade-572332] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:42:46.997711   56726 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 01:42:46.997719   56726 notify.go:220] Checking for updates...
	I0717 01:42:46.999041   56726 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:42:47.000479   56726 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:42:47.002040   56726 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 01:42:47.003399   56726 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:42:47.004841   56726 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:42:47.006825   56726 config.go:182] Loaded profile config "kubernetes-upgrade-572332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 01:42:47.007435   56726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:42:47.007514   56726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:42:47.022670   56726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37221
	I0717 01:42:47.023056   56726 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:42:47.023660   56726 main.go:141] libmachine: Using API Version  1
	I0717 01:42:47.023684   56726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:42:47.024024   56726 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:42:47.024184   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .DriverName
	I0717 01:42:47.024388   56726 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:42:47.024663   56726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:42:47.024694   56726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:42:47.039790   56726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44203
	I0717 01:42:47.040230   56726 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:42:47.040732   56726 main.go:141] libmachine: Using API Version  1
	I0717 01:42:47.040758   56726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:42:47.041058   56726 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:42:47.041220   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .DriverName
	I0717 01:42:47.075668   56726 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 01:42:47.076943   56726 start.go:297] selected driver: kvm2
	I0717 01:42:47.076966   56726 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-572332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-572332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.73 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:42:47.077086   56726 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:42:47.077815   56726 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:42:47.077884   56726 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19264-3908/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:42:47.092365   56726 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:42:47.092755   56726 cni.go:84] Creating CNI manager for ""
	I0717 01:42:47.092771   56726 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:42:47.092811   56726 start.go:340] cluster config:
	{Name:kubernetes-upgrade-572332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-572332 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.73 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:42:47.092914   56726 iso.go:125] acquiring lock: {Name:mk1e382ea32906eee794c9b90164432d2562b456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:42:47.094788   56726 out.go:177] * Starting "kubernetes-upgrade-572332" primary control-plane node in "kubernetes-upgrade-572332" cluster
	I0717 01:42:47.096040   56726 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 01:42:47.096072   56726 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0717 01:42:47.096087   56726 cache.go:56] Caching tarball of preloaded images
	I0717 01:42:47.096160   56726 preload.go:172] Found /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 01:42:47.096171   56726 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0717 01:42:47.096254   56726 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/config.json ...
	I0717 01:42:47.096419   56726 start.go:360] acquireMachinesLock for kubernetes-upgrade-572332: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:42:47.096463   56726 start.go:364] duration metric: took 27.008µs to acquireMachinesLock for "kubernetes-upgrade-572332"
	I0717 01:42:47.096477   56726 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:42:47.096484   56726 fix.go:54] fixHost starting: 
	I0717 01:42:47.096750   56726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:42:47.096778   56726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:42:47.111538   56726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46215
	I0717 01:42:47.111975   56726 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:42:47.112431   56726 main.go:141] libmachine: Using API Version  1
	I0717 01:42:47.112459   56726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:42:47.112818   56726 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:42:47.113003   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .DriverName
	I0717 01:42:47.113161   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetState
	I0717 01:42:47.114766   56726 fix.go:112] recreateIfNeeded on kubernetes-upgrade-572332: state=Stopped err=<nil>
	I0717 01:42:47.114791   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .DriverName
	W0717 01:42:47.114954   56726 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:42:47.116607   56726 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-572332" ...
	I0717 01:42:43.022694   55923 node_ready.go:53] node "kindnet-894370" has status "Ready":"False"
	I0717 01:42:45.523124   55923 node_ready.go:53] node "kindnet-894370" has status "Ready":"False"
	I0717 01:42:48.812297   56553 api_server.go:279] https://192.168.39.97:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:42:48.812328   56553 api_server.go:103] status: https://192.168.39.97:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:42:48.812345   56553 api_server.go:253] Checking apiserver healthz at https://192.168.39.97:8443/healthz ...
	I0717 01:42:48.842316   56553 api_server.go:279] https://192.168.39.97:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:42:48.842354   56553 api_server.go:103] status: https://192.168.39.97:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:42:49.217413   56553 api_server.go:253] Checking apiserver healthz at https://192.168.39.97:8443/healthz ...
	I0717 01:42:49.224330   56553 api_server.go:279] https://192.168.39.97:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:42:49.224355   56553 api_server.go:103] status: https://192.168.39.97:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:42:49.717977   56553 api_server.go:253] Checking apiserver healthz at https://192.168.39.97:8443/healthz ...
	I0717 01:42:49.723280   56553 api_server.go:279] https://192.168.39.97:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:42:49.723306   56553 api_server.go:103] status: https://192.168.39.97:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:42:50.217848   56553 api_server.go:253] Checking apiserver healthz at https://192.168.39.97:8443/healthz ...
	I0717 01:42:50.222773   56553 api_server.go:279] https://192.168.39.97:8443/healthz returned 200:
	ok
	I0717 01:42:50.233624   56553 api_server.go:141] control plane version: v1.30.2
	I0717 01:42:50.233657   56553 api_server.go:131] duration metric: took 4.516383523s to wait for apiserver health ...
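The healthz progression above is the usual post-restart pattern: 403 while the anonymous user is still denied /healthz (the RBAC bootstrap roles that permit it have not been recreated yet), 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes poststarthooks still report failed, then 200. A minimal Go polling loop with that shape; the URL, interval and timeout are copied from the log for illustration, TLS verification is skipped only because the sketch has no cluster CA, and minikube's api_server.go does this with its own client and logging:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls an apiserver /healthz endpoint until it returns 200
	// or the timeout elapses.
	func waitForHealthz(url string, interval, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.97:8443/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}

Treating every non-200 status as retryable is deliberate: both the 403 and the 500 phases resolve on their own once the poststarthooks finish.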
	I0717 01:42:50.233669   56553 cni.go:84] Creating CNI manager for ""
	I0717 01:42:50.233677   56553 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:42:50.235367   56553 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:42:47.152171   55685 pod_ready.go:102] pod "coredns-7db6d8ff4d-4q8q7" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:49.652369   55685 pod_ready.go:102] pod "coredns-7db6d8ff4d-4q8q7" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:50.236838   56553 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:42:50.257695   56553 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
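The 496-byte file pushed to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration selected a few lines earlier. For orientation only, a generic bridge-plus-portmap conflist looks roughly like the constant below; the field values are examples, not the exact bytes minikube deploys:

	package main

	import (
		"fmt"
		"os"
	)

	// An illustrative bridge CNI conflist in the spirit of the 1-k8s.conflist the
	// log copies into /etc/cni/net.d (values are examples, not minikube's bytes).
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
	`

	func main() {
		// Writing to a scratch path here; the real target is /etc/cni/net.d/1-k8s.conflist.
		if err := os.WriteFile("1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}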
	I0717 01:42:50.282541   56553 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:42:50.295280   56553 system_pods.go:59] 6 kube-system pods found
	I0717 01:42:50.295332   56553 system_pods.go:61] "coredns-7db6d8ff4d-gkx7k" [f2471beb-346c-4784-a3c5-a5ecc6f8e8a6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:42:50.295350   56553 system_pods.go:61] "etcd-pause-056024" [f2c958e6-5ec8-4981-bf04-604483fcde3f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 01:42:50.295361   56553 system_pods.go:61] "kube-apiserver-pause-056024" [fa9af1c1-de22-4ee5-9829-94cc31bd33f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 01:42:50.295374   56553 system_pods.go:61] "kube-controller-manager-pause-056024" [f8aabc0f-39bf-4e32-a58d-4ba9c97108a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 01:42:50.295381   56553 system_pods.go:61] "kube-proxy-w9cq7" [f908608a-f77f-4653-86a3-1b535c9c6973] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 01:42:50.295398   56553 system_pods.go:61] "kube-scheduler-pause-056024" [386119a0-65b2-435a-bca3-0d3b198466d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 01:42:50.295409   56553 system_pods.go:74] duration metric: took 12.831601ms to wait for pod list to return data ...
	I0717 01:42:50.295421   56553 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:42:50.303589   56553 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:42:50.303617   56553 node_conditions.go:123] node cpu capacity is 2
	I0717 01:42:50.303626   56553 node_conditions.go:105] duration metric: took 8.200431ms to run NodePressure ...
	I0717 01:42:50.303642   56553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:42:50.581641   56553 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 01:42:50.586457   56553 kubeadm.go:739] kubelet initialised
	I0717 01:42:50.586480   56553 kubeadm.go:740] duration metric: took 4.814288ms waiting for restarted kubelet to initialise ...
	I0717 01:42:50.586487   56553 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:42:50.594301   56553 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-gkx7k" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:47.117813   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .Start
	I0717 01:42:47.117974   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Ensuring networks are active...
	I0717 01:42:47.118720   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Ensuring network default is active
	I0717 01:42:47.119095   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Ensuring network mk-kubernetes-upgrade-572332 is active
	I0717 01:42:47.119564   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Getting domain xml...
	I0717 01:42:47.120283   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Creating domain...
	I0717 01:42:48.429542   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Waiting to get IP...
	I0717 01:42:48.430720   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:42:48.432277   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:42:48.432311   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:42:48.432230   56761 retry.go:31] will retry after 295.069451ms: waiting for machine to come up
	I0717 01:42:48.728840   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:42:48.729539   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:42:48.729567   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:42:48.729493   56761 retry.go:31] will retry after 280.403381ms: waiting for machine to come up
	I0717 01:42:49.012047   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:42:49.012518   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:42:49.012545   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:42:49.012467   56761 retry.go:31] will retry after 447.434458ms: waiting for machine to come up
	I0717 01:42:49.460984   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:42:49.461614   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:42:49.461640   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:42:49.461550   56761 retry.go:31] will retry after 494.900191ms: waiting for machine to come up
	I0717 01:42:49.958521   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:42:49.959123   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:42:49.959149   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:42:49.959078   56761 retry.go:31] will retry after 572.895268ms: waiting for machine to come up
	I0717 01:42:50.533893   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:42:50.534397   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:42:50.534424   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:42:50.534348   56761 retry.go:31] will retry after 846.063347ms: waiting for machine to come up
	I0717 01:42:51.382151   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:42:51.382656   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:42:51.382678   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:42:51.382611   56761 retry.go:31] will retry after 806.363036ms: waiting for machine to come up
	I0717 01:42:47.523253   55923 node_ready.go:53] node "kindnet-894370" has status "Ready":"False"
	I0717 01:42:50.023476   55923 node_ready.go:53] node "kindnet-894370" has status "Ready":"False"
	I0717 01:42:52.151857   55685 pod_ready.go:102] pod "coredns-7db6d8ff4d-4q8q7" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:54.651093   55685 pod_ready.go:102] pod "coredns-7db6d8ff4d-4q8q7" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:52.601856   56553 pod_ready.go:102] pod "coredns-7db6d8ff4d-gkx7k" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:54.602418   56553 pod_ready.go:102] pod "coredns-7db6d8ff4d-gkx7k" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:56.602511   56553 pod_ready.go:102] pod "coredns-7db6d8ff4d-gkx7k" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:52.190775   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:42:52.191232   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:42:52.191269   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:42:52.191198   56761 retry.go:31] will retry after 1.023150099s: waiting for machine to come up
	I0717 01:42:53.215981   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:42:53.216572   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:42:53.216610   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:42:53.216524   56761 retry.go:31] will retry after 1.472682341s: waiting for machine to come up
	I0717 01:42:54.690501   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:42:54.691031   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:42:54.691056   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:42:54.690979   56761 retry.go:31] will retry after 2.283481718s: waiting for machine to come up
	I0717 01:42:56.977468   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:42:56.978087   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:42:56.978123   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:42:56.978024   56761 retry.go:31] will retry after 2.71877136s: waiting for machine to come up
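While the kubernetes-upgrade-572332 VM boots, libmachine polls for its DHCP lease with a growing, jittered delay (the repeated "will retry after ..." lines). A generic Go sketch of that retry shape; it is not minikube's retry.go, and the durations are arbitrary:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff keeps calling fn until it succeeds or the timeout elapses,
	// sleeping a jittered, growing delay between attempts.
	func retryWithBackoff(fn func() error, initial, max, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := initial
		for attempt := 1; ; attempt++ {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
			}
			// Add up to 50% jitter so concurrent waiters do not poll in lockstep.
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
			fmt.Printf("attempt %d failed (%v), will retry after %s\n", attempt, err, sleep)
			time.Sleep(sleep)
			if delay *= 2; delay > max {
				delay = max
			}
		}
	}

	func main() {
		var tries int
		err := retryWithBackoff(func() error {
			if tries++; tries < 5 {
				return errors.New("unable to find current IP address")
			}
			return nil
		}, 300*time.Millisecond, 3*time.Second, time.Minute)
		fmt.Println("result:", err)
	}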
	I0717 01:42:52.522256   55923 node_ready.go:53] node "kindnet-894370" has status "Ready":"False"
	I0717 01:42:54.522301   55923 node_ready.go:53] node "kindnet-894370" has status "Ready":"False"
	I0717 01:42:56.522883   55923 node_ready.go:53] node "kindnet-894370" has status "Ready":"False"
	I0717 01:42:56.651544   55685 pod_ready.go:102] pod "coredns-7db6d8ff4d-4q8q7" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:57.153099   55685 pod_ready.go:92] pod "coredns-7db6d8ff4d-4q8q7" in "kube-system" namespace has status "Ready":"True"
	I0717 01:42:57.153125   55685 pod_ready.go:81] duration metric: took 41.508389081s for pod "coredns-7db6d8ff4d-4q8q7" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:57.153138   55685 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-zt5wx" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:57.155085   55685 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-zt5wx" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-zt5wx" not found
	I0717 01:42:57.155112   55685 pod_ready.go:81] duration metric: took 1.966636ms for pod "coredns-7db6d8ff4d-zt5wx" in "kube-system" namespace to be "Ready" ...
	E0717 01:42:57.155122   55685 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-zt5wx" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-zt5wx" not found
	I0717 01:42:57.155127   55685 pod_ready.go:78] waiting up to 15m0s for pod "etcd-auto-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:57.160312   55685 pod_ready.go:92] pod "etcd-auto-894370" in "kube-system" namespace has status "Ready":"True"
	I0717 01:42:57.160332   55685 pod_ready.go:81] duration metric: took 5.198765ms for pod "etcd-auto-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:57.160342   55685 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-auto-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:57.164432   55685 pod_ready.go:92] pod "kube-apiserver-auto-894370" in "kube-system" namespace has status "Ready":"True"
	I0717 01:42:57.164448   55685 pod_ready.go:81] duration metric: took 4.098742ms for pod "kube-apiserver-auto-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:57.164455   55685 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-auto-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:57.168472   55685 pod_ready.go:92] pod "kube-controller-manager-auto-894370" in "kube-system" namespace has status "Ready":"True"
	I0717 01:42:57.168491   55685 pod_ready.go:81] duration metric: took 4.028636ms for pod "kube-controller-manager-auto-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:57.168501   55685 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-lq55v" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:57.351052   55685 pod_ready.go:92] pod "kube-proxy-lq55v" in "kube-system" namespace has status "Ready":"True"
	I0717 01:42:57.351085   55685 pod_ready.go:81] duration metric: took 182.575634ms for pod "kube-proxy-lq55v" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:57.351098   55685 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-auto-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:57.750123   55685 pod_ready.go:92] pod "kube-scheduler-auto-894370" in "kube-system" namespace has status "Ready":"True"
	I0717 01:42:57.750151   55685 pod_ready.go:81] duration metric: took 399.045886ms for pod "kube-scheduler-auto-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:57.750204   55685 pod_ready.go:38] duration metric: took 42.113651427s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:42:57.750226   55685 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:42:57.750283   55685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:42:57.771681   55685 api_server.go:72] duration metric: took 42.670921869s to wait for apiserver process to appear ...
	I0717 01:42:57.771758   55685 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:42:57.771785   55685 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0717 01:42:57.782236   55685 api_server.go:279] https://192.168.50.138:8443/healthz returned 200:
	ok
	I0717 01:42:57.784476   55685 api_server.go:141] control plane version: v1.30.2
	I0717 01:42:57.784543   55685 api_server.go:131] duration metric: took 12.77429ms to wait for apiserver health ...
	I0717 01:42:57.784555   55685 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:42:57.953860   55685 system_pods.go:59] 7 kube-system pods found
	I0717 01:42:57.953897   55685 system_pods.go:61] "coredns-7db6d8ff4d-4q8q7" [d8103677-9dcf-4aef-9581-e0ddec7a1aaa] Running
	I0717 01:42:57.953904   55685 system_pods.go:61] "etcd-auto-894370" [65f9b263-0e6b-4201-abe2-b504b5712588] Running
	I0717 01:42:57.953909   55685 system_pods.go:61] "kube-apiserver-auto-894370" [2d92bc2d-4d53-47da-8863-3ed8036b1185] Running
	I0717 01:42:57.953914   55685 system_pods.go:61] "kube-controller-manager-auto-894370" [f206f4a3-c3ed-4977-9c4c-ae47953feb20] Running
	I0717 01:42:57.953922   55685 system_pods.go:61] "kube-proxy-lq55v" [8b029947-4c40-4479-86bb-fd4b4ea01d08] Running
	I0717 01:42:57.953926   55685 system_pods.go:61] "kube-scheduler-auto-894370" [8743f3ef-bb58-44b2-8f1c-ff6dcf06e153] Running
	I0717 01:42:57.953931   55685 system_pods.go:61] "storage-provisioner" [b8aeb785-acb0-4c62-87e5-06d87056ff6d] Running
	I0717 01:42:57.953939   55685 system_pods.go:74] duration metric: took 169.376299ms to wait for pod list to return data ...
	I0717 01:42:57.953949   55685 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:42:58.149482   55685 default_sa.go:45] found service account: "default"
	I0717 01:42:58.149519   55685 default_sa.go:55] duration metric: took 195.561535ms for default service account to be created ...
	I0717 01:42:58.149530   55685 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:42:58.353251   55685 system_pods.go:86] 7 kube-system pods found
	I0717 01:42:58.353281   55685 system_pods.go:89] "coredns-7db6d8ff4d-4q8q7" [d8103677-9dcf-4aef-9581-e0ddec7a1aaa] Running
	I0717 01:42:58.353290   55685 system_pods.go:89] "etcd-auto-894370" [65f9b263-0e6b-4201-abe2-b504b5712588] Running
	I0717 01:42:58.353296   55685 system_pods.go:89] "kube-apiserver-auto-894370" [2d92bc2d-4d53-47da-8863-3ed8036b1185] Running
	I0717 01:42:58.353303   55685 system_pods.go:89] "kube-controller-manager-auto-894370" [f206f4a3-c3ed-4977-9c4c-ae47953feb20] Running
	I0717 01:42:58.353309   55685 system_pods.go:89] "kube-proxy-lq55v" [8b029947-4c40-4479-86bb-fd4b4ea01d08] Running
	I0717 01:42:58.353316   55685 system_pods.go:89] "kube-scheduler-auto-894370" [8743f3ef-bb58-44b2-8f1c-ff6dcf06e153] Running
	I0717 01:42:58.353321   55685 system_pods.go:89] "storage-provisioner" [b8aeb785-acb0-4c62-87e5-06d87056ff6d] Running
	I0717 01:42:58.353351   55685 system_pods.go:126] duration metric: took 203.8148ms to wait for k8s-apps to be running ...
	I0717 01:42:58.353364   55685 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:42:58.353448   55685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:42:58.391912   55685 system_svc.go:56] duration metric: took 38.53861ms WaitForService to wait for kubelet
	I0717 01:42:58.391944   55685 kubeadm.go:582] duration metric: took 43.291190101s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:42:58.391969   55685 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:42:58.549731   55685 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:42:58.549770   55685 node_conditions.go:123] node cpu capacity is 2
	I0717 01:42:58.549788   55685 node_conditions.go:105] duration metric: took 157.812536ms to run NodePressure ...
	I0717 01:42:58.549803   55685 start.go:241] waiting for startup goroutines ...
	I0717 01:42:58.549814   55685 start.go:246] waiting for cluster config update ...
	I0717 01:42:58.549827   55685 start.go:255] writing updated cluster config ...
	I0717 01:42:58.550194   55685 ssh_runner.go:195] Run: rm -f paused
	I0717 01:42:58.603781   55685 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 01:42:58.605769   55685 out.go:177] * Done! kubectl is now configured to use "auto-894370" cluster and "default" namespace by default
	I0717 01:42:57.522425   55923 node_ready.go:49] node "kindnet-894370" has status "Ready":"True"
	I0717 01:42:57.522450   55923 node_ready.go:38] duration metric: took 16.504181902s for node "kindnet-894370" to be "Ready" ...
	I0717 01:42:57.522459   55923 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:42:57.530462   55923 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-7kmmz" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:58.538931   55923 pod_ready.go:92] pod "coredns-7db6d8ff4d-7kmmz" in "kube-system" namespace has status "Ready":"True"
	I0717 01:42:58.538956   55923 pod_ready.go:81] duration metric: took 1.008467677s for pod "coredns-7db6d8ff4d-7kmmz" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:58.538967   55923 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-c8bzb" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:58.545369   55923 pod_ready.go:92] pod "coredns-7db6d8ff4d-c8bzb" in "kube-system" namespace has status "Ready":"True"
	I0717 01:42:58.545397   55923 pod_ready.go:81] duration metric: took 6.422536ms for pod "coredns-7db6d8ff4d-c8bzb" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:58.545411   55923 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kindnet-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:58.551414   55923 pod_ready.go:92] pod "etcd-kindnet-894370" in "kube-system" namespace has status "Ready":"True"
	I0717 01:42:58.551438   55923 pod_ready.go:81] duration metric: took 6.018202ms for pod "etcd-kindnet-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:58.551454   55923 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kindnet-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:58.556709   55923 pod_ready.go:92] pod "kube-apiserver-kindnet-894370" in "kube-system" namespace has status "Ready":"True"
	I0717 01:42:58.556736   55923 pod_ready.go:81] duration metric: took 5.270094ms for pod "kube-apiserver-kindnet-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:58.556749   55923 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kindnet-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:58.724495   55923 pod_ready.go:92] pod "kube-controller-manager-kindnet-894370" in "kube-system" namespace has status "Ready":"True"
	I0717 01:42:58.724534   55923 pod_ready.go:81] duration metric: took 167.776237ms for pod "kube-controller-manager-kindnet-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:58.724549   55923 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-xjmxc" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:59.122600   55923 pod_ready.go:92] pod "kube-proxy-xjmxc" in "kube-system" namespace has status "Ready":"True"
	I0717 01:42:59.122623   55923 pod_ready.go:81] duration metric: took 398.065163ms for pod "kube-proxy-xjmxc" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:59.122632   55923 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kindnet-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:59.525312   55923 pod_ready.go:92] pod "kube-scheduler-kindnet-894370" in "kube-system" namespace has status "Ready":"True"
	I0717 01:42:59.525340   55923 pod_ready.go:81] duration metric: took 402.699932ms for pod "kube-scheduler-kindnet-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:59.525353   55923 pod_ready.go:38] duration metric: took 2.002883775s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:42:59.525372   55923 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:42:59.525426   55923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:42:59.543608   55923 api_server.go:72] duration metric: took 19.652683934s to wait for apiserver process to appear ...
	I0717 01:42:59.543633   55923 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:42:59.543650   55923 api_server.go:253] Checking apiserver healthz at https://192.168.61.22:8443/healthz ...
	I0717 01:42:59.550426   55923 api_server.go:279] https://192.168.61.22:8443/healthz returned 200:
	ok
	I0717 01:42:59.551994   55923 api_server.go:141] control plane version: v1.30.2
	I0717 01:42:59.552013   55923 api_server.go:131] duration metric: took 8.375011ms to wait for apiserver health ...
	I0717 01:42:59.552022   55923 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:42:59.729467   55923 system_pods.go:59] 9 kube-system pods found
	I0717 01:42:59.729509   55923 system_pods.go:61] "coredns-7db6d8ff4d-7kmmz" [dcf940c7-926d-4b26-9b7c-982e26ccf4e6] Running
	I0717 01:42:59.729517   55923 system_pods.go:61] "coredns-7db6d8ff4d-c8bzb" [eea52b3a-5e60-4927-b0d7-a54e08502f75] Running
	I0717 01:42:59.729522   55923 system_pods.go:61] "etcd-kindnet-894370" [ce3e78b4-4b89-4229-9657-27b901d63eba] Running
	I0717 01:42:59.729527   55923 system_pods.go:61] "kindnet-tjrjz" [5175b71b-f875-4cd6-b743-a3b9059ac1d5] Running
	I0717 01:42:59.729531   55923 system_pods.go:61] "kube-apiserver-kindnet-894370" [4e815e32-4f86-4eea-a750-730e02035564] Running
	I0717 01:42:59.729536   55923 system_pods.go:61] "kube-controller-manager-kindnet-894370" [5874fe5e-b2bf-42dd-a961-d667ede7baca] Running
	I0717 01:42:59.729541   55923 system_pods.go:61] "kube-proxy-xjmxc" [1858afa5-0485-47c2-8850-303e206420a8] Running
	I0717 01:42:59.729551   55923 system_pods.go:61] "kube-scheduler-kindnet-894370" [e391956d-a048-4ddb-ba94-caf2b7e4277b] Running
	I0717 01:42:59.729557   55923 system_pods.go:61] "storage-provisioner" [dfa8d38a-802f-4bf1-b769-929118d399ae] Running
	I0717 01:42:59.729566   55923 system_pods.go:74] duration metric: took 177.539013ms to wait for pod list to return data ...
	I0717 01:42:59.729578   55923 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:42:59.922461   55923 default_sa.go:45] found service account: "default"
	I0717 01:42:59.922483   55923 default_sa.go:55] duration metric: took 192.895981ms for default service account to be created ...
	I0717 01:42:59.922494   55923 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:43:00.126166   55923 system_pods.go:86] 9 kube-system pods found
	I0717 01:43:00.126202   55923 system_pods.go:89] "coredns-7db6d8ff4d-7kmmz" [dcf940c7-926d-4b26-9b7c-982e26ccf4e6] Running
	I0717 01:43:00.126210   55923 system_pods.go:89] "coredns-7db6d8ff4d-c8bzb" [eea52b3a-5e60-4927-b0d7-a54e08502f75] Running
	I0717 01:43:00.126216   55923 system_pods.go:89] "etcd-kindnet-894370" [ce3e78b4-4b89-4229-9657-27b901d63eba] Running
	I0717 01:43:00.126222   55923 system_pods.go:89] "kindnet-tjrjz" [5175b71b-f875-4cd6-b743-a3b9059ac1d5] Running
	I0717 01:43:00.126232   55923 system_pods.go:89] "kube-apiserver-kindnet-894370" [4e815e32-4f86-4eea-a750-730e02035564] Running
	I0717 01:43:00.126238   55923 system_pods.go:89] "kube-controller-manager-kindnet-894370" [5874fe5e-b2bf-42dd-a961-d667ede7baca] Running
	I0717 01:43:00.126244   55923 system_pods.go:89] "kube-proxy-xjmxc" [1858afa5-0485-47c2-8850-303e206420a8] Running
	I0717 01:43:00.126249   55923 system_pods.go:89] "kube-scheduler-kindnet-894370" [e391956d-a048-4ddb-ba94-caf2b7e4277b] Running
	I0717 01:43:00.126256   55923 system_pods.go:89] "storage-provisioner" [dfa8d38a-802f-4bf1-b769-929118d399ae] Running
	I0717 01:43:00.126263   55923 system_pods.go:126] duration metric: took 203.763898ms to wait for k8s-apps to be running ...
	I0717 01:43:00.126277   55923 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:43:00.126321   55923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:43:00.142767   55923 system_svc.go:56] duration metric: took 16.481266ms WaitForService to wait for kubelet
	I0717 01:43:00.142800   55923 kubeadm.go:582] duration metric: took 20.251879111s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:43:00.142824   55923 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:43:00.323782   55923 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:43:00.323807   55923 node_conditions.go:123] node cpu capacity is 2
	I0717 01:43:00.323828   55923 node_conditions.go:105] duration metric: took 180.99892ms to run NodePressure ...
	I0717 01:43:00.323839   55923 start.go:241] waiting for startup goroutines ...
	I0717 01:43:00.323847   55923 start.go:246] waiting for cluster config update ...
	I0717 01:43:00.323856   55923 start.go:255] writing updated cluster config ...
	I0717 01:43:00.324071   55923 ssh_runner.go:195] Run: rm -f paused
	I0717 01:43:00.372345   55923 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 01:43:00.373906   55923 out.go:177] * Done! kubectl is now configured to use "kindnet-894370" cluster and "default" namespace by default
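In the kindnet-894370 run above, the healthz wait boils down to an HTTPS GET against https://192.168.61.22:8443/healthz that must return 200/"ok" before the control-plane version is read. A minimal way to reproduce that probe by hand, assuming the default RBAC rule that allows unauthenticated reads of /healthz is still in place (if not, the kubectl form works regardless):

    # Same probe the log performs at 01:42:59.543
    curl -sk https://192.168.61.22:8443/healthz          # expect: ok
    kubectl --context kindnet-894370 get --raw /healthz  # authenticated equivalent
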
	I0717 01:42:58.601664   56553 pod_ready.go:92] pod "coredns-7db6d8ff4d-gkx7k" in "kube-system" namespace has status "Ready":"True"
	I0717 01:42:58.601691   56553 pod_ready.go:81] duration metric: took 8.007361038s for pod "coredns-7db6d8ff4d-gkx7k" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:58.601704   56553 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:00.607834   56553 pod_ready.go:102] pod "etcd-pause-056024" in "kube-system" namespace has status "Ready":"False"
	I0717 01:43:01.108852   56553 pod_ready.go:92] pod "etcd-pause-056024" in "kube-system" namespace has status "Ready":"True"
	I0717 01:43:01.108881   56553 pod_ready.go:81] duration metric: took 2.507168773s for pod "etcd-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:01.108893   56553 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:01.621820   56553 pod_ready.go:92] pod "kube-apiserver-pause-056024" in "kube-system" namespace has status "Ready":"True"
	I0717 01:43:01.621849   56553 pod_ready.go:81] duration metric: took 512.946854ms for pod "kube-apiserver-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:01.621859   56553 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:01.629480   56553 pod_ready.go:92] pod "kube-controller-manager-pause-056024" in "kube-system" namespace has status "Ready":"True"
	I0717 01:43:01.629502   56553 pod_ready.go:81] duration metric: took 7.636033ms for pod "kube-controller-manager-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:01.629514   56553 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-w9cq7" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:01.638809   56553 pod_ready.go:92] pod "kube-proxy-w9cq7" in "kube-system" namespace has status "Ready":"True"
	I0717 01:43:01.638833   56553 pod_ready.go:81] duration metric: took 9.311979ms for pod "kube-proxy-w9cq7" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:01.638846   56553 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:59.699518   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:42:59.700010   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:42:59.700043   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:42:59.699930   56761 retry.go:31] will retry after 3.537615064s: waiting for machine to come up
	I0717 01:43:02.145127   56553 pod_ready.go:92] pod "kube-scheduler-pause-056024" in "kube-system" namespace has status "Ready":"True"
	I0717 01:43:02.145156   56553 pod_ready.go:81] duration metric: took 506.298418ms for pod "kube-scheduler-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:02.145164   56553 pod_ready.go:38] duration metric: took 11.558668479s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:43:02.145179   56553 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 01:43:02.160810   56553 ops.go:34] apiserver oom_adj: -16
	I0717 01:43:02.160837   56553 kubeadm.go:597] duration metric: took 19.202268942s to restartPrimaryControlPlane
	I0717 01:43:02.160848   56553 kubeadm.go:394] duration metric: took 19.291514343s to StartCluster
	I0717 01:43:02.160867   56553 settings.go:142] acquiring lock: {Name:mk284e98550e98148329d8d8958a44beebf5635c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:43:02.160947   56553 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:43:02.162890   56553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:43:02.163179   56553 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:43:02.163837   56553 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 01:43:02.164219   56553 config.go:182] Loaded profile config "pause-056024": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:43:02.165982   56553 out.go:177] * Enabled addons: 
	I0717 01:43:02.165982   56553 out.go:177] * Verifying Kubernetes components...
	I0717 01:43:02.167318   56553 addons.go:510] duration metric: took 3.481789ms for enable addons: enabled=[]
	I0717 01:43:02.167357   56553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:43:02.348726   56553 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:43:02.367476   56553 node_ready.go:35] waiting up to 6m0s for node "pause-056024" to be "Ready" ...
	I0717 01:43:02.370773   56553 node_ready.go:49] node "pause-056024" has status "Ready":"True"
	I0717 01:43:02.370791   56553 node_ready.go:38] duration metric: took 3.28127ms for node "pause-056024" to be "Ready" ...
	I0717 01:43:02.370799   56553 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:43:02.375827   56553 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gkx7k" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:02.705649   56553 pod_ready.go:92] pod "coredns-7db6d8ff4d-gkx7k" in "kube-system" namespace has status "Ready":"True"
	I0717 01:43:02.705672   56553 pod_ready.go:81] duration metric: took 329.824993ms for pod "coredns-7db6d8ff4d-gkx7k" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:02.705682   56553 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:03.106322   56553 pod_ready.go:92] pod "etcd-pause-056024" in "kube-system" namespace has status "Ready":"True"
	I0717 01:43:03.106354   56553 pod_ready.go:81] duration metric: took 400.663883ms for pod "etcd-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:03.106367   56553 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:03.505690   56553 pod_ready.go:92] pod "kube-apiserver-pause-056024" in "kube-system" namespace has status "Ready":"True"
	I0717 01:43:03.505718   56553 pod_ready.go:81] duration metric: took 399.342172ms for pod "kube-apiserver-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:03.505739   56553 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:03.906103   56553 pod_ready.go:92] pod "kube-controller-manager-pause-056024" in "kube-system" namespace has status "Ready":"True"
	I0717 01:43:03.906130   56553 pod_ready.go:81] duration metric: took 400.383398ms for pod "kube-controller-manager-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:03.906142   56553 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w9cq7" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:04.305471   56553 pod_ready.go:92] pod "kube-proxy-w9cq7" in "kube-system" namespace has status "Ready":"True"
	I0717 01:43:04.305498   56553 pod_ready.go:81] duration metric: took 399.349486ms for pod "kube-proxy-w9cq7" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:04.305510   56553 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:04.706208   56553 pod_ready.go:92] pod "kube-scheduler-pause-056024" in "kube-system" namespace has status "Ready":"True"
	I0717 01:43:04.706232   56553 pod_ready.go:81] duration metric: took 400.713751ms for pod "kube-scheduler-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:04.706243   56553 pod_ready.go:38] duration metric: took 2.335435085s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:43:04.706259   56553 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:43:04.706312   56553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:43:04.733707   56553 api_server.go:72] duration metric: took 2.57049832s to wait for apiserver process to appear ...
	I0717 01:43:04.733732   56553 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:43:04.733748   56553 api_server.go:253] Checking apiserver healthz at https://192.168.39.97:8443/healthz ...
	I0717 01:43:04.738010   56553 api_server.go:279] https://192.168.39.97:8443/healthz returned 200:
	ok
	I0717 01:43:04.738872   56553 api_server.go:141] control plane version: v1.30.2
	I0717 01:43:04.738892   56553 api_server.go:131] duration metric: took 5.153768ms to wait for apiserver health ...
	I0717 01:43:04.738901   56553 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:43:04.907925   56553 system_pods.go:59] 6 kube-system pods found
	I0717 01:43:04.907952   56553 system_pods.go:61] "coredns-7db6d8ff4d-gkx7k" [f2471beb-346c-4784-a3c5-a5ecc6f8e8a6] Running
	I0717 01:43:04.907957   56553 system_pods.go:61] "etcd-pause-056024" [f2c958e6-5ec8-4981-bf04-604483fcde3f] Running
	I0717 01:43:04.907960   56553 system_pods.go:61] "kube-apiserver-pause-056024" [fa9af1c1-de22-4ee5-9829-94cc31bd33f3] Running
	I0717 01:43:04.907966   56553 system_pods.go:61] "kube-controller-manager-pause-056024" [f8aabc0f-39bf-4e32-a58d-4ba9c97108a4] Running
	I0717 01:43:04.907969   56553 system_pods.go:61] "kube-proxy-w9cq7" [f908608a-f77f-4653-86a3-1b535c9c6973] Running
	I0717 01:43:04.907972   56553 system_pods.go:61] "kube-scheduler-pause-056024" [386119a0-65b2-435a-bca3-0d3b198466d9] Running
	I0717 01:43:04.907977   56553 system_pods.go:74] duration metric: took 169.07147ms to wait for pod list to return data ...
	I0717 01:43:04.907985   56553 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:43:05.105641   56553 default_sa.go:45] found service account: "default"
	I0717 01:43:05.105668   56553 default_sa.go:55] duration metric: took 197.678061ms for default service account to be created ...
	I0717 01:43:05.105678   56553 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:43:05.307575   56553 system_pods.go:86] 6 kube-system pods found
	I0717 01:43:05.307606   56553 system_pods.go:89] "coredns-7db6d8ff4d-gkx7k" [f2471beb-346c-4784-a3c5-a5ecc6f8e8a6] Running
	I0717 01:43:05.307614   56553 system_pods.go:89] "etcd-pause-056024" [f2c958e6-5ec8-4981-bf04-604483fcde3f] Running
	I0717 01:43:05.307621   56553 system_pods.go:89] "kube-apiserver-pause-056024" [fa9af1c1-de22-4ee5-9829-94cc31bd33f3] Running
	I0717 01:43:05.307628   56553 system_pods.go:89] "kube-controller-manager-pause-056024" [f8aabc0f-39bf-4e32-a58d-4ba9c97108a4] Running
	I0717 01:43:05.307633   56553 system_pods.go:89] "kube-proxy-w9cq7" [f908608a-f77f-4653-86a3-1b535c9c6973] Running
	I0717 01:43:05.307638   56553 system_pods.go:89] "kube-scheduler-pause-056024" [386119a0-65b2-435a-bca3-0d3b198466d9] Running
	I0717 01:43:05.307647   56553 system_pods.go:126] duration metric: took 201.962051ms to wait for k8s-apps to be running ...
	I0717 01:43:05.307655   56553 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:43:05.307705   56553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:43:05.323078   56553 system_svc.go:56] duration metric: took 15.415444ms WaitForService to wait for kubelet
	I0717 01:43:05.323112   56553 kubeadm.go:582] duration metric: took 3.159906333s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:43:05.323134   56553 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:43:05.506823   56553 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:43:05.506852   56553 node_conditions.go:123] node cpu capacity is 2
	I0717 01:43:05.506863   56553 node_conditions.go:105] duration metric: took 183.723935ms to run NodePressure ...
	I0717 01:43:05.506876   56553 start.go:241] waiting for startup goroutines ...
	I0717 01:43:05.506885   56553 start.go:246] waiting for cluster config update ...
	I0717 01:43:05.506896   56553 start.go:255] writing updated cluster config ...
	I0717 01:43:05.507190   56553 ssh_runner.go:195] Run: rm -f paused
	I0717 01:43:05.555390   56553 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 01:43:05.557471   56553 out.go:177] * Done! kubectl is now configured to use "pause-056024" cluster and "default" namespace by default
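The pause-056024 log above covers the control-plane restart path: restartPrimaryControlPlane (19.2s), StartCluster (19.3s), addon setup with everything disabled, a kubelet daemon-reload and start, the usual node/pod Ready waits, and finally the apiserver process and oom_adj check logged at ops.go:34. Those process-level checks are plain shell run over SSH; a sketch of doing the same from inside the node, with the commands copied from the log (the -16 value is the expected result):

    minikube -p pause-056024 ssh
    sudo pgrep -xnf kube-apiserver.*minikube.*        # apiserver process present
    sudo cat /proc/$(pgrep kube-apiserver)/oom_adj    # -16, as logged by ops.go:34
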
	
	
	==> CRI-O <==
	Jul 17 01:43:06 pause-056024 crio[2971]: time="2024-07-17 01:43:06.170741556Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180586170717462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=203183b2-2b1d-4a00-8e6c-40746f8c7b1f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:06 pause-056024 crio[2971]: time="2024-07-17 01:43:06.171852567Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=49bfac25-99da-4a8f-9854-ae0fc63296b9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:06 pause-056024 crio[2971]: time="2024-07-17 01:43:06.171953135Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=49bfac25-99da-4a8f-9854-ae0fc63296b9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:06 pause-056024 crio[2971]: time="2024-07-17 01:43:06.172533001Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4f99740c21e53f87133bff2d52a0bd56c2a02626274b365bdaae0e12964a4e4,PodSandboxId:ad6e690168a8d0e7a28f39b6e9d0f6483d0e265ba094592206e447b3e5a0540d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180569898448501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gkx7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2471beb-346c-4784-a3c5-a5ecc6f8e8a6,},Annotations:map[string]string{io.kubernetes.container.hash: 368cc453,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76da7a10c63d2b3b3c03dbacafc0f9da88957e40b9d46e154cfebe79a45d8778,PodSandboxId:03e43a98177260e7689e310bd3db81fa7f9d28db9aedad14a8aafbd4729bf0e6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721180569866736257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9cq7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: f908608a-f77f-4653-86a3-1b535c9c6973,},Annotations:map[string]string{io.kubernetes.container.hash: 93a510eb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d49a1cb5b4ebc6a4f84beed352723268474608d2654cc6a59ef5325cda633f7,PodSandboxId:d202bf4689f3162765115a2531c5d943877ad7a0d48ae2a5f92d01e57371af1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721180565113247261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6243d24d04
72f8e244e25b457792cc43,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e870955bc1a988bf2a6a9f3a25805dfcfe58bc0abbeced196cfaf88a009195c,PodSandboxId:5a1e554720c05a07354ed18e5f5680e74fcdd66267bb05f5e14168e02631e194,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721180565077063086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 496b12d8541f498238ec070fdd540
8cf,},Annotations:map[string]string{io.kubernetes.container.hash: 315538b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f8fd0610adffbaef05d966acf6cd7a87404f8b707eacd64e50b760417dd998,PodSandboxId:40a6cc729d8bbfcbb28535064584be1c17945b9b734f2321bd9892f5838f7f27,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721180565045618313,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c83373b93ee4c933a9ca2b2b2e77367b,},Annotations:map[string]string{io.kubernete
s.container.hash: 777f8d0d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:796bf2c3a47315825765e5622511b58b5ffa48c0f1c59003f1d67537e3fe66b9,PodSandboxId:c62cef6afd1e5c65fddc44ee1f9fa1f2de4af1bb56a60b9db5e8dc9dc0b739a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721180565058599014,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bac7706a3d4d40242d264e09577770c8,},Annotations:map[string]string{io.
kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a13e07f93ced4226e8b9a913fee7b6c0005c42e9d5eea92e97408b95f377984,PodSandboxId:7ab5efb667a491f7aa8ded2cb2dbe76052536a0e23b72fb6b61fab371d1065cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721180560461415952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gkx7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2471beb-346c-4784-a3c5-a5ecc6f8e8a6,},Annotations:map[string]string{io.kubernetes.container.hash: 368cc
453,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27a9da40fbea12eaae90daba03f07ebda991b7ddd0520039a163a0ebf7d79ec9,PodSandboxId:31a5f1177affc0509ff51333a1dd6cfbc65ed9d9d3443eddd07922527abeef2c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721180559262865079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-w9cq7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f908608a-f77f-4653-86a3-1b535c9c6973,},Annotations:map[string]string{io.kubernetes.container.hash: 93a510eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a70a0091dd4e264f12c365b9e2ba025c18219a4d8c82bdec90fad3fc5028ac42,PodSandboxId:2a701dfd17e7641a4b5b7bd95e07e38bf6d31f407159c4665459c3ca104bbe9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721180559253998870,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-paus
e-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6243d24d0472f8e244e25b457792cc43,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbeb675206edf1dfe0c9e96d84984ed02c367ae7e20d2271fcb09a7529235140,PodSandboxId:c0506fa9493537d49342fd9bc2f31bdcd78ea0b8d65e382e841d4e9f4466f2fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721180559209983237,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-056024,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: c83373b93ee4c933a9ca2b2b2e77367b,},Annotations:map[string]string{io.kubernetes.container.hash: 777f8d0d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c21cec71b2cca131a814dd86ac76987274ebfefffe239dfe467aacd88359c17,PodSandboxId:e97f6874a2bb5477f6b7af4136ab489ede4834bf078507ac6e777d5e6f73c9a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721180559158317606,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-056024,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: bac7706a3d4d40242d264e09577770c8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b87f6da726d7902490f3a7b07b32ae2c543449beae034936faf9c3f344593c50,PodSandboxId:fdb935b244396e518b422332eece88c3a90198cb9dec6292365035d86bf00213,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721180559078822574,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 496b12d8541f498238ec070fdd5408cf,},Annotations:map[string]string{io.kubernetes.container.hash: 315538b2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=49bfac25-99da-4a8f-9854-ae0fc63296b9 name=/runtime.v1.RuntimeService/ListContainers
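The CRI-O debug entries in this section are the runtime's side of routine CRI polling: Version, ImageFsInfo and ListContainers requests arriving every few hundred milliseconds, with the full container list (the restartCount 2 control-plane containers running and their restartCount 1 predecessors exited) dumped in each response. The same data can be pulled interactively with crictl inside the node, assuming crictl is pointed at the CRI-O socket as it is by default in minikube's CRI-O images:

    minikube -p pause-056024 ssh
    sudo crictl version       # RuntimeName: cri-o, RuntimeVersion: 1.29.1
    sudo crictl imagefsinfo   # the overlay-images mountpoint/usage from ImageFsInfo
    sudo crictl ps -a         # running + exited containers from ListContainersResponse
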
	Jul 17 01:43:06 pause-056024 crio[2971]: time="2024-07-17 01:43:06.221092643Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=29ceb69c-29f7-471a-baf0-03cca435e446 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:43:06 pause-056024 crio[2971]: time="2024-07-17 01:43:06.221214465Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=29ceb69c-29f7-471a-baf0-03cca435e446 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:43:06 pause-056024 crio[2971]: time="2024-07-17 01:43:06.222876063Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f1ad05ba-9e52-4f24-971e-e22e80cc936f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:06 pause-056024 crio[2971]: time="2024-07-17 01:43:06.223453514Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180586223418985,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f1ad05ba-9e52-4f24-971e-e22e80cc936f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:06 pause-056024 crio[2971]: time="2024-07-17 01:43:06.224092358Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37783954-fae6-4548-9068-2fae0a365f4a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:06 pause-056024 crio[2971]: time="2024-07-17 01:43:06.224148633Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37783954-fae6-4548-9068-2fae0a365f4a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:06 pause-056024 crio[2971]: time="2024-07-17 01:43:06.224554396Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4f99740c21e53f87133bff2d52a0bd56c2a02626274b365bdaae0e12964a4e4,PodSandboxId:ad6e690168a8d0e7a28f39b6e9d0f6483d0e265ba094592206e447b3e5a0540d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180569898448501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gkx7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2471beb-346c-4784-a3c5-a5ecc6f8e8a6,},Annotations:map[string]string{io.kubernetes.container.hash: 368cc453,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76da7a10c63d2b3b3c03dbacafc0f9da88957e40b9d46e154cfebe79a45d8778,PodSandboxId:03e43a98177260e7689e310bd3db81fa7f9d28db9aedad14a8aafbd4729bf0e6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721180569866736257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9cq7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: f908608a-f77f-4653-86a3-1b535c9c6973,},Annotations:map[string]string{io.kubernetes.container.hash: 93a510eb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d49a1cb5b4ebc6a4f84beed352723268474608d2654cc6a59ef5325cda633f7,PodSandboxId:d202bf4689f3162765115a2531c5d943877ad7a0d48ae2a5f92d01e57371af1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721180565113247261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6243d24d04
72f8e244e25b457792cc43,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e870955bc1a988bf2a6a9f3a25805dfcfe58bc0abbeced196cfaf88a009195c,PodSandboxId:5a1e554720c05a07354ed18e5f5680e74fcdd66267bb05f5e14168e02631e194,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721180565077063086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 496b12d8541f498238ec070fdd540
8cf,},Annotations:map[string]string{io.kubernetes.container.hash: 315538b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f8fd0610adffbaef05d966acf6cd7a87404f8b707eacd64e50b760417dd998,PodSandboxId:40a6cc729d8bbfcbb28535064584be1c17945b9b734f2321bd9892f5838f7f27,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721180565045618313,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c83373b93ee4c933a9ca2b2b2e77367b,},Annotations:map[string]string{io.kubernete
s.container.hash: 777f8d0d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:796bf2c3a47315825765e5622511b58b5ffa48c0f1c59003f1d67537e3fe66b9,PodSandboxId:c62cef6afd1e5c65fddc44ee1f9fa1f2de4af1bb56a60b9db5e8dc9dc0b739a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721180565058599014,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bac7706a3d4d40242d264e09577770c8,},Annotations:map[string]string{io.
kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a13e07f93ced4226e8b9a913fee7b6c0005c42e9d5eea92e97408b95f377984,PodSandboxId:7ab5efb667a491f7aa8ded2cb2dbe76052536a0e23b72fb6b61fab371d1065cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721180560461415952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gkx7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2471beb-346c-4784-a3c5-a5ecc6f8e8a6,},Annotations:map[string]string{io.kubernetes.container.hash: 368cc
453,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27a9da40fbea12eaae90daba03f07ebda991b7ddd0520039a163a0ebf7d79ec9,PodSandboxId:31a5f1177affc0509ff51333a1dd6cfbc65ed9d9d3443eddd07922527abeef2c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721180559262865079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-w9cq7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f908608a-f77f-4653-86a3-1b535c9c6973,},Annotations:map[string]string{io.kubernetes.container.hash: 93a510eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a70a0091dd4e264f12c365b9e2ba025c18219a4d8c82bdec90fad3fc5028ac42,PodSandboxId:2a701dfd17e7641a4b5b7bd95e07e38bf6d31f407159c4665459c3ca104bbe9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721180559253998870,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-paus
e-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6243d24d0472f8e244e25b457792cc43,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbeb675206edf1dfe0c9e96d84984ed02c367ae7e20d2271fcb09a7529235140,PodSandboxId:c0506fa9493537d49342fd9bc2f31bdcd78ea0b8d65e382e841d4e9f4466f2fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721180559209983237,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-056024,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: c83373b93ee4c933a9ca2b2b2e77367b,},Annotations:map[string]string{io.kubernetes.container.hash: 777f8d0d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c21cec71b2cca131a814dd86ac76987274ebfefffe239dfe467aacd88359c17,PodSandboxId:e97f6874a2bb5477f6b7af4136ab489ede4834bf078507ac6e777d5e6f73c9a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721180559158317606,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-056024,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: bac7706a3d4d40242d264e09577770c8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b87f6da726d7902490f3a7b07b32ae2c543449beae034936faf9c3f344593c50,PodSandboxId:fdb935b244396e518b422332eece88c3a90198cb9dec6292365035d86bf00213,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721180559078822574,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 496b12d8541f498238ec070fdd5408cf,},Annotations:map[string]string{io.kubernetes.container.hash: 315538b2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37783954-fae6-4548-9068-2fae0a365f4a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:06 pause-056024 crio[2971]: time="2024-07-17 01:43:06.275286133Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=17d65ed1-c08f-4c77-afae-b80128edd7a7 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:43:06 pause-056024 crio[2971]: time="2024-07-17 01:43:06.275376575Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=17d65ed1-c08f-4c77-afae-b80128edd7a7 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:43:06 pause-056024 crio[2971]: time="2024-07-17 01:43:06.277300009Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=15f5b3b4-4370-41ac-91a6-ca14cae9d912 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:06 pause-056024 crio[2971]: time="2024-07-17 01:43:06.277842901Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180586277808499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15f5b3b4-4370-41ac-91a6-ca14cae9d912 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:06 pause-056024 crio[2971]: time="2024-07-17 01:43:06.278744242Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1128a8b8-b021-4f62-b5ce-b0678daf3f8e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:06 pause-056024 crio[2971]: time="2024-07-17 01:43:06.278816328Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1128a8b8-b021-4f62-b5ce-b0678daf3f8e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:06 pause-056024 crio[2971]: time="2024-07-17 01:43:06.279130586Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4f99740c21e53f87133bff2d52a0bd56c2a02626274b365bdaae0e12964a4e4,PodSandboxId:ad6e690168a8d0e7a28f39b6e9d0f6483d0e265ba094592206e447b3e5a0540d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180569898448501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gkx7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2471beb-346c-4784-a3c5-a5ecc6f8e8a6,},Annotations:map[string]string{io.kubernetes.container.hash: 368cc453,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76da7a10c63d2b3b3c03dbacafc0f9da88957e40b9d46e154cfebe79a45d8778,PodSandboxId:03e43a98177260e7689e310bd3db81fa7f9d28db9aedad14a8aafbd4729bf0e6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721180569866736257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9cq7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: f908608a-f77f-4653-86a3-1b535c9c6973,},Annotations:map[string]string{io.kubernetes.container.hash: 93a510eb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d49a1cb5b4ebc6a4f84beed352723268474608d2654cc6a59ef5325cda633f7,PodSandboxId:d202bf4689f3162765115a2531c5d943877ad7a0d48ae2a5f92d01e57371af1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721180565113247261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6243d24d04
72f8e244e25b457792cc43,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e870955bc1a988bf2a6a9f3a25805dfcfe58bc0abbeced196cfaf88a009195c,PodSandboxId:5a1e554720c05a07354ed18e5f5680e74fcdd66267bb05f5e14168e02631e194,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721180565077063086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 496b12d8541f498238ec070fdd540
8cf,},Annotations:map[string]string{io.kubernetes.container.hash: 315538b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f8fd0610adffbaef05d966acf6cd7a87404f8b707eacd64e50b760417dd998,PodSandboxId:40a6cc729d8bbfcbb28535064584be1c17945b9b734f2321bd9892f5838f7f27,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721180565045618313,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c83373b93ee4c933a9ca2b2b2e77367b,},Annotations:map[string]string{io.kubernete
s.container.hash: 777f8d0d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:796bf2c3a47315825765e5622511b58b5ffa48c0f1c59003f1d67537e3fe66b9,PodSandboxId:c62cef6afd1e5c65fddc44ee1f9fa1f2de4af1bb56a60b9db5e8dc9dc0b739a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721180565058599014,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bac7706a3d4d40242d264e09577770c8,},Annotations:map[string]string{io.
kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a13e07f93ced4226e8b9a913fee7b6c0005c42e9d5eea92e97408b95f377984,PodSandboxId:7ab5efb667a491f7aa8ded2cb2dbe76052536a0e23b72fb6b61fab371d1065cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721180560461415952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gkx7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2471beb-346c-4784-a3c5-a5ecc6f8e8a6,},Annotations:map[string]string{io.kubernetes.container.hash: 368cc
453,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27a9da40fbea12eaae90daba03f07ebda991b7ddd0520039a163a0ebf7d79ec9,PodSandboxId:31a5f1177affc0509ff51333a1dd6cfbc65ed9d9d3443eddd07922527abeef2c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721180559262865079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-w9cq7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f908608a-f77f-4653-86a3-1b535c9c6973,},Annotations:map[string]string{io.kubernetes.container.hash: 93a510eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a70a0091dd4e264f12c365b9e2ba025c18219a4d8c82bdec90fad3fc5028ac42,PodSandboxId:2a701dfd17e7641a4b5b7bd95e07e38bf6d31f407159c4665459c3ca104bbe9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721180559253998870,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-paus
e-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6243d24d0472f8e244e25b457792cc43,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbeb675206edf1dfe0c9e96d84984ed02c367ae7e20d2271fcb09a7529235140,PodSandboxId:c0506fa9493537d49342fd9bc2f31bdcd78ea0b8d65e382e841d4e9f4466f2fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721180559209983237,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-056024,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: c83373b93ee4c933a9ca2b2b2e77367b,},Annotations:map[string]string{io.kubernetes.container.hash: 777f8d0d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c21cec71b2cca131a814dd86ac76987274ebfefffe239dfe467aacd88359c17,PodSandboxId:e97f6874a2bb5477f6b7af4136ab489ede4834bf078507ac6e777d5e6f73c9a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721180559158317606,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-056024,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: bac7706a3d4d40242d264e09577770c8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b87f6da726d7902490f3a7b07b32ae2c543449beae034936faf9c3f344593c50,PodSandboxId:fdb935b244396e518b422332eece88c3a90198cb9dec6292365035d86bf00213,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721180559078822574,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 496b12d8541f498238ec070fdd5408cf,},Annotations:map[string]string{io.kubernetes.container.hash: 315538b2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1128a8b8-b021-4f62-b5ce-b0678daf3f8e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:06 pause-056024 crio[2971]: time="2024-07-17 01:43:06.324742233Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=160dea9d-288f-43ac-a2cb-8399ec19bca3 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:43:06 pause-056024 crio[2971]: time="2024-07-17 01:43:06.324863705Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=160dea9d-288f-43ac-a2cb-8399ec19bca3 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:43:06 pause-056024 crio[2971]: time="2024-07-17 01:43:06.327664524Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0112581e-7781-4dda-9849-d851cafe19e2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:06 pause-056024 crio[2971]: time="2024-07-17 01:43:06.328549970Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180586328519221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0112581e-7781-4dda-9849-d851cafe19e2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:06 pause-056024 crio[2971]: time="2024-07-17 01:43:06.332767380Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=51c95151-4400-4f02-96c7-e6beebe2e60f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:06 pause-056024 crio[2971]: time="2024-07-17 01:43:06.332921200Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=51c95151-4400-4f02-96c7-e6beebe2e60f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:06 pause-056024 crio[2971]: time="2024-07-17 01:43:06.333318163Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4f99740c21e53f87133bff2d52a0bd56c2a02626274b365bdaae0e12964a4e4,PodSandboxId:ad6e690168a8d0e7a28f39b6e9d0f6483d0e265ba094592206e447b3e5a0540d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180569898448501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gkx7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2471beb-346c-4784-a3c5-a5ecc6f8e8a6,},Annotations:map[string]string{io.kubernetes.container.hash: 368cc453,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76da7a10c63d2b3b3c03dbacafc0f9da88957e40b9d46e154cfebe79a45d8778,PodSandboxId:03e43a98177260e7689e310bd3db81fa7f9d28db9aedad14a8aafbd4729bf0e6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721180569866736257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9cq7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: f908608a-f77f-4653-86a3-1b535c9c6973,},Annotations:map[string]string{io.kubernetes.container.hash: 93a510eb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d49a1cb5b4ebc6a4f84beed352723268474608d2654cc6a59ef5325cda633f7,PodSandboxId:d202bf4689f3162765115a2531c5d943877ad7a0d48ae2a5f92d01e57371af1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721180565113247261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6243d24d04
72f8e244e25b457792cc43,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e870955bc1a988bf2a6a9f3a25805dfcfe58bc0abbeced196cfaf88a009195c,PodSandboxId:5a1e554720c05a07354ed18e5f5680e74fcdd66267bb05f5e14168e02631e194,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721180565077063086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 496b12d8541f498238ec070fdd540
8cf,},Annotations:map[string]string{io.kubernetes.container.hash: 315538b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f8fd0610adffbaef05d966acf6cd7a87404f8b707eacd64e50b760417dd998,PodSandboxId:40a6cc729d8bbfcbb28535064584be1c17945b9b734f2321bd9892f5838f7f27,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721180565045618313,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c83373b93ee4c933a9ca2b2b2e77367b,},Annotations:map[string]string{io.kubernete
s.container.hash: 777f8d0d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:796bf2c3a47315825765e5622511b58b5ffa48c0f1c59003f1d67537e3fe66b9,PodSandboxId:c62cef6afd1e5c65fddc44ee1f9fa1f2de4af1bb56a60b9db5e8dc9dc0b739a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721180565058599014,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bac7706a3d4d40242d264e09577770c8,},Annotations:map[string]string{io.
kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a13e07f93ced4226e8b9a913fee7b6c0005c42e9d5eea92e97408b95f377984,PodSandboxId:7ab5efb667a491f7aa8ded2cb2dbe76052536a0e23b72fb6b61fab371d1065cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721180560461415952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gkx7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2471beb-346c-4784-a3c5-a5ecc6f8e8a6,},Annotations:map[string]string{io.kubernetes.container.hash: 368cc
453,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27a9da40fbea12eaae90daba03f07ebda991b7ddd0520039a163a0ebf7d79ec9,PodSandboxId:31a5f1177affc0509ff51333a1dd6cfbc65ed9d9d3443eddd07922527abeef2c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721180559262865079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-w9cq7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f908608a-f77f-4653-86a3-1b535c9c6973,},Annotations:map[string]string{io.kubernetes.container.hash: 93a510eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a70a0091dd4e264f12c365b9e2ba025c18219a4d8c82bdec90fad3fc5028ac42,PodSandboxId:2a701dfd17e7641a4b5b7bd95e07e38bf6d31f407159c4665459c3ca104bbe9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721180559253998870,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-paus
e-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6243d24d0472f8e244e25b457792cc43,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbeb675206edf1dfe0c9e96d84984ed02c367ae7e20d2271fcb09a7529235140,PodSandboxId:c0506fa9493537d49342fd9bc2f31bdcd78ea0b8d65e382e841d4e9f4466f2fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721180559209983237,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-056024,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: c83373b93ee4c933a9ca2b2b2e77367b,},Annotations:map[string]string{io.kubernetes.container.hash: 777f8d0d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c21cec71b2cca131a814dd86ac76987274ebfefffe239dfe467aacd88359c17,PodSandboxId:e97f6874a2bb5477f6b7af4136ab489ede4834bf078507ac6e777d5e6f73c9a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721180559158317606,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-056024,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: bac7706a3d4d40242d264e09577770c8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b87f6da726d7902490f3a7b07b32ae2c543449beae034936faf9c3f344593c50,PodSandboxId:fdb935b244396e518b422332eece88c3a90198cb9dec6292365035d86bf00213,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721180559078822574,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 496b12d8541f498238ec070fdd5408cf,},Annotations:map[string]string{io.kubernetes.container.hash: 315538b2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=51c95151-4400-4f02-96c7-e6beebe2e60f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e4f99740c21e5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 seconds ago      Running             coredns                   2                   ad6e690168a8d       coredns-7db6d8ff4d-gkx7k
	76da7a10c63d2       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   16 seconds ago      Running             kube-proxy                2                   03e43a9817726       kube-proxy-w9cq7
	4d49a1cb5b4eb       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   21 seconds ago      Running             kube-scheduler            2                   d202bf4689f31       kube-scheduler-pause-056024
	8e870955bc1a9       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   21 seconds ago      Running             kube-apiserver            2                   5a1e554720c05       kube-apiserver-pause-056024
	796bf2c3a4731       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   21 seconds ago      Running             kube-controller-manager   2                   c62cef6afd1e5       kube-controller-manager-pause-056024
	24f8fd0610adf       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   21 seconds ago      Running             etcd                      2                   40a6cc729d8bb       etcd-pause-056024
	6a13e07f93ced       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   25 seconds ago      Exited              coredns                   1                   7ab5efb667a49       coredns-7db6d8ff4d-gkx7k
	27a9da40fbea1       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   27 seconds ago      Exited              kube-proxy                1                   31a5f1177affc       kube-proxy-w9cq7
	a70a0091dd4e2       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   27 seconds ago      Exited              kube-scheduler            1                   2a701dfd17e76       kube-scheduler-pause-056024
	cbeb675206edf       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   27 seconds ago      Exited              etcd                      1                   c0506fa949353       etcd-pause-056024
	2c21cec71b2cc       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   27 seconds ago      Exited              kube-controller-manager   1                   e97f6874a2bb5       kube-controller-manager-pause-056024
	b87f6da726d79       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   27 seconds ago      Exited              kube-apiserver            1                   fdb935b244396       kube-apiserver-pause-056024
	
	
	==> coredns [6a13e07f93ced4226e8b9a913fee7b6c0005c42e9d5eea92e97408b95f377984] <==
	
	
	==> coredns [e4f99740c21e53f87133bff2d52a0bd56c2a02626274b365bdaae0e12964a4e4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49518 - 57538 "HINFO IN 4376202363426571550.341898290947266577. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009747901s
	
	
	==> describe nodes <==
	Name:               pause-056024
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-056024
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=pause-056024
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T01_41_37_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:41:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-056024
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:42:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:42:48 +0000   Wed, 17 Jul 2024 01:41:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:42:48 +0000   Wed, 17 Jul 2024 01:41:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:42:48 +0000   Wed, 17 Jul 2024 01:41:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:42:48 +0000   Wed, 17 Jul 2024 01:41:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    pause-056024
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 6573976077fc4482a978cc8d60479bde
	  System UUID:                65739760-77fc-4482-a978-cc8d60479bde
	  Boot ID:                    a8c6bc98-f631-4c6c-8d3d-8514a725b1b7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-gkx7k                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     76s
	  kube-system                 etcd-pause-056024                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         90s
	  kube-system                 kube-apiserver-pause-056024             250m (12%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-controller-manager-pause-056024    200m (10%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-proxy-w9cq7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-pause-056024             100m (5%)     0 (0%)      0 (0%)           0 (0%)         90s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 74s                kube-proxy       
	  Normal  Starting                 16s                kube-proxy       
	  Normal  Starting                 96s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  96s (x8 over 96s)  kubelet          Node pause-056024 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s (x8 over 96s)  kubelet          Node pause-056024 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s (x7 over 96s)  kubelet          Node pause-056024 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  96s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    90s                kubelet          Node pause-056024 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  90s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  90s                kubelet          Node pause-056024 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     90s                kubelet          Node pause-056024 status is now: NodeHasSufficientPID
	  Normal  Starting                 90s                kubelet          Starting kubelet.
	  Normal  NodeReady                89s                kubelet          Node pause-056024 status is now: NodeReady
	  Normal  RegisteredNode           78s                node-controller  Node pause-056024 event: Registered Node pause-056024 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node pause-056024 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node pause-056024 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node pause-056024 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                 node-controller  Node pause-056024 event: Registered Node pause-056024 in Controller
	
	
	==> dmesg <==
	[  +9.176616] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.124725] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.183401] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.123778] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.265696] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.341546] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +0.057643] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.394110] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.613823] kauditd_printk_skb: 50 callbacks suppressed
	[  +5.939539] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.086139] kauditd_printk_skb: 37 callbacks suppressed
	[ +13.839790] systemd-fstab-generator[1501]: Ignoring "noauto" option for root device
	[  +0.163494] kauditd_printk_skb: 21 callbacks suppressed
	[Jul17 01:42] kauditd_printk_skb: 89 callbacks suppressed
	[ +37.143079] systemd-fstab-generator[2746]: Ignoring "noauto" option for root device
	[  +0.293113] systemd-fstab-generator[2813]: Ignoring "noauto" option for root device
	[  +0.257647] systemd-fstab-generator[2828]: Ignoring "noauto" option for root device
	[  +0.228572] systemd-fstab-generator[2861]: Ignoring "noauto" option for root device
	[  +0.481548] systemd-fstab-generator[2953]: Ignoring "noauto" option for root device
	[  +1.060007] systemd-fstab-generator[3223]: Ignoring "noauto" option for root device
	[  +2.414185] systemd-fstab-generator[3661]: Ignoring "noauto" option for root device
	[  +0.101234] kauditd_printk_skb: 244 callbacks suppressed
	[  +5.550985] kauditd_printk_skb: 38 callbacks suppressed
	[Jul17 01:43] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.236372] systemd-fstab-generator[4101]: Ignoring "noauto" option for root device
	
	
	==> etcd [24f8fd0610adffbaef05d966acf6cd7a87404f8b707eacd64e50b760417dd998] <==
	{"level":"info","ts":"2024-07-17T01:42:45.540786Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T01:42:45.540816Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T01:42:45.541067Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 switched to configuration voters=(17735085251460689206)"}
	{"level":"info","ts":"2024-07-17T01:42:45.541151Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6e56e32a1e97f390","local-member-id":"f61fae125a956d36","added-peer-id":"f61fae125a956d36","added-peer-peer-urls":["https://192.168.39.97:2380"]}
	{"level":"info","ts":"2024-07-17T01:42:45.545388Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e56e32a1e97f390","local-member-id":"f61fae125a956d36","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:42:45.545447Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:42:45.564148Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T01:42:45.566839Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f61fae125a956d36","initial-advertise-peer-urls":["https://192.168.39.97:2380"],"listen-peer-urls":["https://192.168.39.97:2380"],"advertise-client-urls":["https://192.168.39.97:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.97:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T01:42:45.566323Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2024-07-17T01:42:45.569235Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2024-07-17T01:42:45.569269Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T01:42:47.379871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T01:42:47.379951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T01:42:47.379987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 received MsgPreVoteResp from f61fae125a956d36 at term 2"}
	{"level":"info","ts":"2024-07-17T01:42:47.380002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became candidate at term 3"}
	{"level":"info","ts":"2024-07-17T01:42:47.38001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 received MsgVoteResp from f61fae125a956d36 at term 3"}
	{"level":"info","ts":"2024-07-17T01:42:47.380022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became leader at term 3"}
	{"level":"info","ts":"2024-07-17T01:42:47.380032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f61fae125a956d36 elected leader f61fae125a956d36 at term 3"}
	{"level":"info","ts":"2024-07-17T01:42:47.385452Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f61fae125a956d36","local-member-attributes":"{Name:pause-056024 ClientURLs:[https://192.168.39.97:2379]}","request-path":"/0/members/f61fae125a956d36/attributes","cluster-id":"6e56e32a1e97f390","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T01:42:47.38546Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:42:47.385884Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T01:42:47.385938Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T01:42:47.385975Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:42:47.389021Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T01:42:47.393516Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.97:2379"}
	
	
	==> etcd [cbeb675206edf1dfe0c9e96d84984ed02c367ae7e20d2271fcb09a7529235140] <==
	{"level":"info","ts":"2024-07-17T01:42:39.915271Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"52.211959ms"}
	{"level":"info","ts":"2024-07-17T01:42:39.973836Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-17T01:42:40.045379Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"6e56e32a1e97f390","local-member-id":"f61fae125a956d36","commit-index":465}
	{"level":"info","ts":"2024-07-17T01:42:40.051564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-17T01:42:40.052895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became follower at term 2"}
	{"level":"info","ts":"2024-07-17T01:42:40.053578Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft f61fae125a956d36 [peers: [], term: 2, commit: 465, applied: 0, lastindex: 465, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-17T01:42:40.065327Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-17T01:42:40.112868Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":442}
	{"level":"info","ts":"2024-07-17T01:42:40.1291Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-07-17T01:42:40.152081Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"f61fae125a956d36","timeout":"7s"}
	{"level":"info","ts":"2024-07-17T01:42:40.152764Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"f61fae125a956d36"}
	{"level":"info","ts":"2024-07-17T01:42:40.152867Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"f61fae125a956d36","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-17T01:42:40.153377Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-17T01:42:40.15369Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T01:42:40.169237Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T01:42:40.169266Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T01:42:40.153881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 switched to configuration voters=(17735085251460689206)"}
	{"level":"info","ts":"2024-07-17T01:42:40.169541Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6e56e32a1e97f390","local-member-id":"f61fae125a956d36","added-peer-id":"f61fae125a956d36","added-peer-peer-urls":["https://192.168.39.97:2380"]}
	{"level":"info","ts":"2024-07-17T01:42:40.169652Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e56e32a1e97f390","local-member-id":"f61fae125a956d36","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:42:40.169679Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:42:40.228585Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T01:42:40.228953Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f61fae125a956d36","initial-advertise-peer-urls":["https://192.168.39.97:2380"],"listen-peer-urls":["https://192.168.39.97:2380"],"advertise-client-urls":["https://192.168.39.97:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.97:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T01:42:40.22901Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T01:42:40.229091Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2024-07-17T01:42:40.229122Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.97:2380"}
	
	
	==> kernel <==
	 01:43:06 up 2 min,  0 users,  load average: 1.02, 0.35, 0.12
	Linux pause-056024 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8e870955bc1a988bf2a6a9f3a25805dfcfe58bc0abbeced196cfaf88a009195c] <==
	I0717 01:42:48.792248       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0717 01:42:48.843622       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 01:42:48.846256       1 shared_informer.go:320] Caches are synced for configmaps
	I0717 01:42:48.847083       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0717 01:42:48.847132       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0717 01:42:48.850110       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 01:42:48.862582       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 01:42:48.892847       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0717 01:42:48.898153       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 01:42:48.898245       1 policy_source.go:224] refreshing policies
	I0717 01:42:48.898816       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0717 01:42:48.901045       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 01:42:48.901083       1 aggregator.go:165] initial CRD sync complete...
	I0717 01:42:48.901097       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 01:42:48.901102       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 01:42:48.901108       1 cache.go:39] Caches are synced for autoregister controller
	I0717 01:42:48.949364       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 01:42:49.747822       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 01:42:50.409822       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 01:42:50.427555       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 01:42:50.467602       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 01:42:50.509082       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 01:42:50.517443       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 01:43:01.916834       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 01:43:01.972069       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [b87f6da726d7902490f3a7b07b32ae2c543449beae034936faf9c3f344593c50] <==
	I0717 01:42:39.586135       1 options.go:221] external host was not specified, using 192.168.39.97
	I0717 01:42:39.588017       1 server.go:148] Version: v1.30.2
	I0717 01:42:39.588078       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [2c21cec71b2cca131a814dd86ac76987274ebfefffe239dfe467aacd88359c17] <==
	
	
	==> kube-controller-manager [796bf2c3a47315825765e5622511b58b5ffa48c0f1c59003f1d67537e3fe66b9] <==
	I0717 01:43:01.805569       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0717 01:43:01.806623       1 shared_informer.go:320] Caches are synced for stateful set
	I0717 01:43:01.813123       1 shared_informer.go:320] Caches are synced for attach detach
	I0717 01:43:01.817373       1 shared_informer.go:320] Caches are synced for HPA
	I0717 01:43:01.829492       1 shared_informer.go:320] Caches are synced for daemon sets
	I0717 01:43:01.829648       1 shared_informer.go:320] Caches are synced for endpoint
	I0717 01:43:01.831530       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0717 01:43:01.832057       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="158.675µs"
	I0717 01:43:01.832384       1 shared_informer.go:320] Caches are synced for job
	I0717 01:43:01.834897       1 shared_informer.go:320] Caches are synced for persistent volume
	I0717 01:43:01.836382       1 shared_informer.go:320] Caches are synced for PVC protection
	I0717 01:43:01.838632       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0717 01:43:01.843602       1 shared_informer.go:320] Caches are synced for ephemeral
	I0717 01:43:01.846943       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0717 01:43:01.849380       1 shared_informer.go:320] Caches are synced for GC
	I0717 01:43:01.853667       1 shared_informer.go:320] Caches are synced for disruption
	I0717 01:43:01.863389       1 shared_informer.go:320] Caches are synced for taint
	I0717 01:43:01.863864       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0717 01:43:01.863961       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-056024"
	I0717 01:43:01.864014       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0717 01:43:01.866908       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 01:43:01.871261       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 01:43:02.280529       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 01:43:02.280579       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0717 01:43:02.315118       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [27a9da40fbea12eaae90daba03f07ebda991b7ddd0520039a163a0ebf7d79ec9] <==
	
	
	==> kube-proxy [76da7a10c63d2b3b3c03dbacafc0f9da88957e40b9d46e154cfebe79a45d8778] <==
	I0717 01:42:50.139578       1 server_linux.go:69] "Using iptables proxy"
	I0717 01:42:50.166679       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.97"]
	I0717 01:42:50.210946       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 01:42:50.211113       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 01:42:50.211202       1 server_linux.go:165] "Using iptables Proxier"
	I0717 01:42:50.214489       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 01:42:50.214663       1 server.go:872] "Version info" version="v1.30.2"
	I0717 01:42:50.214699       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:42:50.216230       1 config.go:192] "Starting service config controller"
	I0717 01:42:50.216265       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 01:42:50.216337       1 config.go:101] "Starting endpoint slice config controller"
	I0717 01:42:50.216342       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 01:42:50.216803       1 config.go:319] "Starting node config controller"
	I0717 01:42:50.216835       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 01:42:50.317456       1 shared_informer.go:320] Caches are synced for service config
	I0717 01:42:50.317504       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 01:42:50.317754       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4d49a1cb5b4ebc6a4f84beed352723268474608d2654cc6a59ef5325cda633f7] <==
	I0717 01:42:46.272752       1 serving.go:380] Generated self-signed cert in-memory
	W0717 01:42:48.801589       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 01:42:48.801784       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 01:42:48.801894       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 01:42:48.801938       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 01:42:48.880981       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0717 01:42:48.881061       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:42:48.888024       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0717 01:42:48.890371       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 01:42:48.890460       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 01:42:48.890514       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 01:42:48.991251       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [a70a0091dd4e264f12c365b9e2ba025c18219a4d8c82bdec90fad3fc5028ac42] <==
	
	
	==> kubelet <==
	Jul 17 01:42:44 pause-056024 kubelet[3668]: I0717 01:42:44.774100    3668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bac7706a3d4d40242d264e09577770c8-flexvolume-dir\") pod \"kube-controller-manager-pause-056024\" (UID: \"bac7706a3d4d40242d264e09577770c8\") " pod="kube-system/kube-controller-manager-pause-056024"
	Jul 17 01:42:44 pause-056024 kubelet[3668]: I0717 01:42:44.774125    3668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bac7706a3d4d40242d264e09577770c8-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-056024\" (UID: \"bac7706a3d4d40242d264e09577770c8\") " pod="kube-system/kube-controller-manager-pause-056024"
	Jul 17 01:42:44 pause-056024 kubelet[3668]: I0717 01:42:44.887098    3668 kubelet_node_status.go:73] "Attempting to register node" node="pause-056024"
	Jul 17 01:42:44 pause-056024 kubelet[3668]: E0717 01:42:44.888426    3668 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.97:8443: connect: connection refused" node="pause-056024"
	Jul 17 01:42:45 pause-056024 kubelet[3668]: I0717 01:42:45.022327    3668 scope.go:117] "RemoveContainer" containerID="cbeb675206edf1dfe0c9e96d84984ed02c367ae7e20d2271fcb09a7529235140"
	Jul 17 01:42:45 pause-056024 kubelet[3668]: I0717 01:42:45.025502    3668 scope.go:117] "RemoveContainer" containerID="b87f6da726d7902490f3a7b07b32ae2c543449beae034936faf9c3f344593c50"
	Jul 17 01:42:45 pause-056024 kubelet[3668]: I0717 01:42:45.027944    3668 scope.go:117] "RemoveContainer" containerID="a70a0091dd4e264f12c365b9e2ba025c18219a4d8c82bdec90fad3fc5028ac42"
	Jul 17 01:42:45 pause-056024 kubelet[3668]: I0717 01:42:45.029365    3668 scope.go:117] "RemoveContainer" containerID="2c21cec71b2cca131a814dd86ac76987274ebfefffe239dfe467aacd88359c17"
	Jul 17 01:42:45 pause-056024 kubelet[3668]: E0717 01:42:45.171761    3668 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-056024?timeout=10s\": dial tcp 192.168.39.97:8443: connect: connection refused" interval="800ms"
	Jul 17 01:42:45 pause-056024 kubelet[3668]: I0717 01:42:45.295635    3668 kubelet_node_status.go:73] "Attempting to register node" node="pause-056024"
	Jul 17 01:42:45 pause-056024 kubelet[3668]: E0717 01:42:45.297390    3668 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.97:8443: connect: connection refused" node="pause-056024"
	Jul 17 01:42:46 pause-056024 kubelet[3668]: I0717 01:42:46.098745    3668 kubelet_node_status.go:73] "Attempting to register node" node="pause-056024"
	Jul 17 01:42:48 pause-056024 kubelet[3668]: I0717 01:42:48.980636    3668 kubelet_node_status.go:112] "Node was previously registered" node="pause-056024"
	Jul 17 01:42:48 pause-056024 kubelet[3668]: I0717 01:42:48.981029    3668 kubelet_node_status.go:76] "Successfully registered node" node="pause-056024"
	Jul 17 01:42:48 pause-056024 kubelet[3668]: I0717 01:42:48.982771    3668 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 17 01:42:48 pause-056024 kubelet[3668]: I0717 01:42:48.983930    3668 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 17 01:42:49 pause-056024 kubelet[3668]: I0717 01:42:49.549669    3668 apiserver.go:52] "Watching apiserver"
	Jul 17 01:42:49 pause-056024 kubelet[3668]: I0717 01:42:49.553218    3668 topology_manager.go:215] "Topology Admit Handler" podUID="f2471beb-346c-4784-a3c5-a5ecc6f8e8a6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gkx7k"
	Jul 17 01:42:49 pause-056024 kubelet[3668]: I0717 01:42:49.553592    3668 topology_manager.go:215] "Topology Admit Handler" podUID="f908608a-f77f-4653-86a3-1b535c9c6973" podNamespace="kube-system" podName="kube-proxy-w9cq7"
	Jul 17 01:42:49 pause-056024 kubelet[3668]: I0717 01:42:49.564393    3668 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 17 01:42:49 pause-056024 kubelet[3668]: I0717 01:42:49.610010    3668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f908608a-f77f-4653-86a3-1b535c9c6973-xtables-lock\") pod \"kube-proxy-w9cq7\" (UID: \"f908608a-f77f-4653-86a3-1b535c9c6973\") " pod="kube-system/kube-proxy-w9cq7"
	Jul 17 01:42:49 pause-056024 kubelet[3668]: I0717 01:42:49.610292    3668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f908608a-f77f-4653-86a3-1b535c9c6973-lib-modules\") pod \"kube-proxy-w9cq7\" (UID: \"f908608a-f77f-4653-86a3-1b535c9c6973\") " pod="kube-system/kube-proxy-w9cq7"
	Jul 17 01:42:49 pause-056024 kubelet[3668]: I0717 01:42:49.854405    3668 scope.go:117] "RemoveContainer" containerID="27a9da40fbea12eaae90daba03f07ebda991b7ddd0520039a163a0ebf7d79ec9"
	Jul 17 01:42:49 pause-056024 kubelet[3668]: I0717 01:42:49.854706    3668 scope.go:117] "RemoveContainer" containerID="6a13e07f93ced4226e8b9a913fee7b6c0005c42e9d5eea92e97408b95f377984"
	Jul 17 01:42:58 pause-056024 kubelet[3668]: I0717 01:42:58.312267    3668 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

-- /stdout --
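(Not part of the minikube test suite; a hedged illustration only. The kubelet log above shows node registration for "pause-056024" failing with "connection refused" while the apiserver is down and then succeeding at 01:42:48. A small client-go check like the sketch below could confirm the node object exists and report its Ready condition after such a restart. The kubeconfig path is taken from the KUBECONFIG value printed later in this log, and it assumes the current context points at the pause-056024 profile.)

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: this kubeconfig's current context is the pause-056024 profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19264-3908/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Fetch the node the kubelet registered and print its Ready condition.
		node, err := client.CoreV1().Nodes().Get(context.Background(), "pause-056024", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("node %s Ready=%s\n", node.Name, c.Status)
			}
		}
	}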
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-056024 -n pause-056024
helpers_test.go:261: (dbg) Run:  kubectl --context pause-056024 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-056024 -n pause-056024
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-056024 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-056024 logs -n 25: (1.530745619s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | cert-options-366095 ssh               | cert-options-366095       | jenkins | v1.33.1 | 17 Jul 24 01:38 UTC | 17 Jul 24 01:38 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-366095 -- sudo        | cert-options-366095       | jenkins | v1.33.1 | 17 Jul 24 01:38 UTC | 17 Jul 24 01:38 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-366095                | cert-options-366095       | jenkins | v1.33.1 | 17 Jul 24 01:38 UTC | 17 Jul 24 01:38 UTC |
	| start   | -p kubernetes-upgrade-572332          | kubernetes-upgrade-572332 | jenkins | v1.33.1 | 17 Jul 24 01:38 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-130517 sudo           | NoKubernetes-130517       | jenkins | v1.33.1 | 17 Jul 24 01:38 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-130517                | NoKubernetes-130517       | jenkins | v1.33.1 | 17 Jul 24 01:38 UTC | 17 Jul 24 01:38 UTC |
	| start   | -p NoKubernetes-130517                | NoKubernetes-130517       | jenkins | v1.33.1 | 17 Jul 24 01:38 UTC | 17 Jul 24 01:39 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-130517 sudo           | NoKubernetes-130517       | jenkins | v1.33.1 | 17 Jul 24 01:39 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-130517                | NoKubernetes-130517       | jenkins | v1.33.1 | 17 Jul 24 01:39 UTC | 17 Jul 24 01:39 UTC |
	| start   | -p stopped-upgrade-156268             | minikube                  | jenkins | v1.26.0 | 17 Jul 24 01:39 UTC | 17 Jul 24 01:40 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| start   | -p running-upgrade-777345             | running-upgrade-777345    | jenkins | v1.33.1 | 17 Jul 24 01:39 UTC | 17 Jul 24 01:41 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-733994             | cert-expiration-733994    | jenkins | v1.33.1 | 17 Jul 24 01:40 UTC | 17 Jul 24 01:40 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-156268 stop           | minikube                  | jenkins | v1.26.0 | 17 Jul 24 01:40 UTC | 17 Jul 24 01:40 UTC |
	| start   | -p stopped-upgrade-156268             | stopped-upgrade-156268    | jenkins | v1.33.1 | 17 Jul 24 01:40 UTC | 17 Jul 24 01:41 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-733994             | cert-expiration-733994    | jenkins | v1.33.1 | 17 Jul 24 01:40 UTC | 17 Jul 24 01:40 UTC |
	| start   | -p pause-056024 --memory=2048         | pause-056024              | jenkins | v1.33.1 | 17 Jul 24 01:40 UTC | 17 Jul 24 01:42 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-156268             | stopped-upgrade-156268    | jenkins | v1.33.1 | 17 Jul 24 01:41 UTC | 17 Jul 24 01:41 UTC |
	| start   | -p auto-894370 --memory=3072          | auto-894370               | jenkins | v1.33.1 | 17 Jul 24 01:41 UTC | 17 Jul 24 01:42 UTC |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-777345             | running-upgrade-777345    | jenkins | v1.33.1 | 17 Jul 24 01:41 UTC | 17 Jul 24 01:41 UTC |
	| start   | -p kindnet-894370                     | kindnet-894370            | jenkins | v1.33.1 | 17 Jul 24 01:41 UTC | 17 Jul 24 01:43 UTC |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-056024                       | pause-056024              | jenkins | v1.33.1 | 17 Jul 24 01:42 UTC | 17 Jul 24 01:43 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-572332          | kubernetes-upgrade-572332 | jenkins | v1.33.1 | 17 Jul 24 01:42 UTC | 17 Jul 24 01:42 UTC |
	| start   | -p kubernetes-upgrade-572332          | kubernetes-upgrade-572332 | jenkins | v1.33.1 | 17 Jul 24 01:42 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p auto-894370 pgrep -a               | auto-894370               | jenkins | v1.33.1 | 17 Jul 24 01:42 UTC | 17 Jul 24 01:42 UTC |
	|         | kubelet                               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-894370 pgrep -a            | kindnet-894370            | jenkins | v1.33.1 | 17 Jul 24 01:43 UTC | 17 Jul 24 01:43 UTC |
	|         | kubelet                               |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 01:42:46
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 01:42:46.990926   56726 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:42:46.991521   56726 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:42:46.991573   56726 out.go:304] Setting ErrFile to fd 2...
	I0717 01:42:46.991590   56726 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:42:46.992073   56726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 01:42:46.992977   56726 out.go:298] Setting JSON to false
	I0717 01:42:46.993942   56726 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5109,"bootTime":1721175458,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:42:46.994002   56726 start.go:139] virtualization: kvm guest
	I0717 01:42:46.996077   56726 out.go:177] * [kubernetes-upgrade-572332] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:42:46.997711   56726 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 01:42:46.997719   56726 notify.go:220] Checking for updates...
	I0717 01:42:46.999041   56726 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:42:47.000479   56726 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:42:47.002040   56726 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 01:42:47.003399   56726 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:42:47.004841   56726 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:42:47.006825   56726 config.go:182] Loaded profile config "kubernetes-upgrade-572332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 01:42:47.007435   56726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:42:47.007514   56726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:42:47.022670   56726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37221
	I0717 01:42:47.023056   56726 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:42:47.023660   56726 main.go:141] libmachine: Using API Version  1
	I0717 01:42:47.023684   56726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:42:47.024024   56726 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:42:47.024184   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .DriverName
	I0717 01:42:47.024388   56726 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:42:47.024663   56726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:42:47.024694   56726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:42:47.039790   56726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44203
	I0717 01:42:47.040230   56726 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:42:47.040732   56726 main.go:141] libmachine: Using API Version  1
	I0717 01:42:47.040758   56726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:42:47.041058   56726 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:42:47.041220   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .DriverName
	I0717 01:42:47.075668   56726 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 01:42:47.076943   56726 start.go:297] selected driver: kvm2
	I0717 01:42:47.076966   56726 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-572332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-572332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.73 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:42:47.077086   56726 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:42:47.077815   56726 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:42:47.077884   56726 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19264-3908/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:42:47.092365   56726 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:42:47.092755   56726 cni.go:84] Creating CNI manager for ""
	I0717 01:42:47.092771   56726 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:42:47.092811   56726 start.go:340] cluster config:
	{Name:kubernetes-upgrade-572332 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-572332 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.73 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:42:47.092914   56726 iso.go:125] acquiring lock: {Name:mk1e382ea32906eee794c9b90164432d2562b456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:42:47.094788   56726 out.go:177] * Starting "kubernetes-upgrade-572332" primary control-plane node in "kubernetes-upgrade-572332" cluster
	I0717 01:42:47.096040   56726 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 01:42:47.096072   56726 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0717 01:42:47.096087   56726 cache.go:56] Caching tarball of preloaded images
	I0717 01:42:47.096160   56726 preload.go:172] Found /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 01:42:47.096171   56726 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0717 01:42:47.096254   56726 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/config.json ...
	I0717 01:42:47.096419   56726 start.go:360] acquireMachinesLock for kubernetes-upgrade-572332: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:42:47.096463   56726 start.go:364] duration metric: took 27.008µs to acquireMachinesLock for "kubernetes-upgrade-572332"
	I0717 01:42:47.096477   56726 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:42:47.096484   56726 fix.go:54] fixHost starting: 
	I0717 01:42:47.096750   56726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:42:47.096778   56726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:42:47.111538   56726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46215
	I0717 01:42:47.111975   56726 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:42:47.112431   56726 main.go:141] libmachine: Using API Version  1
	I0717 01:42:47.112459   56726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:42:47.112818   56726 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:42:47.113003   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .DriverName
	I0717 01:42:47.113161   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetState
	I0717 01:42:47.114766   56726 fix.go:112] recreateIfNeeded on kubernetes-upgrade-572332: state=Stopped err=<nil>
	I0717 01:42:47.114791   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .DriverName
	W0717 01:42:47.114954   56726 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:42:47.116607   56726 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-572332" ...
	I0717 01:42:43.022694   55923 node_ready.go:53] node "kindnet-894370" has status "Ready":"False"
	I0717 01:42:45.523124   55923 node_ready.go:53] node "kindnet-894370" has status "Ready":"False"
	I0717 01:42:48.812297   56553 api_server.go:279] https://192.168.39.97:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:42:48.812328   56553 api_server.go:103] status: https://192.168.39.97:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:42:48.812345   56553 api_server.go:253] Checking apiserver healthz at https://192.168.39.97:8443/healthz ...
	I0717 01:42:48.842316   56553 api_server.go:279] https://192.168.39.97:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:42:48.842354   56553 api_server.go:103] status: https://192.168.39.97:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:42:49.217413   56553 api_server.go:253] Checking apiserver healthz at https://192.168.39.97:8443/healthz ...
	I0717 01:42:49.224330   56553 api_server.go:279] https://192.168.39.97:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:42:49.224355   56553 api_server.go:103] status: https://192.168.39.97:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:42:49.717977   56553 api_server.go:253] Checking apiserver healthz at https://192.168.39.97:8443/healthz ...
	I0717 01:42:49.723280   56553 api_server.go:279] https://192.168.39.97:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:42:49.723306   56553 api_server.go:103] status: https://192.168.39.97:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:42:50.217848   56553 api_server.go:253] Checking apiserver healthz at https://192.168.39.97:8443/healthz ...
	I0717 01:42:50.222773   56553 api_server.go:279] https://192.168.39.97:8443/healthz returned 200:
	ok
	I0717 01:42:50.233624   56553 api_server.go:141] control plane version: v1.30.2
	I0717 01:42:50.233657   56553 api_server.go:131] duration metric: took 4.516383523s to wait for apiserver health ...
	I0717 01:42:50.233669   56553 cni.go:84] Creating CNI manager for ""
	I0717 01:42:50.233677   56553 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:42:50.235367   56553 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:42:47.152171   55685 pod_ready.go:102] pod "coredns-7db6d8ff4d-4q8q7" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:49.652369   55685 pod_ready.go:102] pod "coredns-7db6d8ff4d-4q8q7" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:50.236838   56553 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:42:50.257695   56553 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 01:42:50.282541   56553 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:42:50.295280   56553 system_pods.go:59] 6 kube-system pods found
	I0717 01:42:50.295332   56553 system_pods.go:61] "coredns-7db6d8ff4d-gkx7k" [f2471beb-346c-4784-a3c5-a5ecc6f8e8a6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:42:50.295350   56553 system_pods.go:61] "etcd-pause-056024" [f2c958e6-5ec8-4981-bf04-604483fcde3f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 01:42:50.295361   56553 system_pods.go:61] "kube-apiserver-pause-056024" [fa9af1c1-de22-4ee5-9829-94cc31bd33f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 01:42:50.295374   56553 system_pods.go:61] "kube-controller-manager-pause-056024" [f8aabc0f-39bf-4e32-a58d-4ba9c97108a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 01:42:50.295381   56553 system_pods.go:61] "kube-proxy-w9cq7" [f908608a-f77f-4653-86a3-1b535c9c6973] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 01:42:50.295398   56553 system_pods.go:61] "kube-scheduler-pause-056024" [386119a0-65b2-435a-bca3-0d3b198466d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 01:42:50.295409   56553 system_pods.go:74] duration metric: took 12.831601ms to wait for pod list to return data ...
	I0717 01:42:50.295421   56553 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:42:50.303589   56553 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:42:50.303617   56553 node_conditions.go:123] node cpu capacity is 2
	I0717 01:42:50.303626   56553 node_conditions.go:105] duration metric: took 8.200431ms to run NodePressure ...
	I0717 01:42:50.303642   56553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:42:50.581641   56553 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 01:42:50.586457   56553 kubeadm.go:739] kubelet initialised
	I0717 01:42:50.586480   56553 kubeadm.go:740] duration metric: took 4.814288ms waiting for restarted kubelet to initialise ...
	I0717 01:42:50.586487   56553 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:42:50.594301   56553 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-gkx7k" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:47.117813   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .Start
	I0717 01:42:47.117974   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Ensuring networks are active...
	I0717 01:42:47.118720   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Ensuring network default is active
	I0717 01:42:47.119095   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Ensuring network mk-kubernetes-upgrade-572332 is active
	I0717 01:42:47.119564   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Getting domain xml...
	I0717 01:42:47.120283   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Creating domain...
	I0717 01:42:48.429542   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Waiting to get IP...
	I0717 01:42:48.430720   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:42:48.432277   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:42:48.432311   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:42:48.432230   56761 retry.go:31] will retry after 295.069451ms: waiting for machine to come up
	I0717 01:42:48.728840   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:42:48.729539   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:42:48.729567   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:42:48.729493   56761 retry.go:31] will retry after 280.403381ms: waiting for machine to come up
	I0717 01:42:49.012047   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:42:49.012518   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:42:49.012545   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:42:49.012467   56761 retry.go:31] will retry after 447.434458ms: waiting for machine to come up
	I0717 01:42:49.460984   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:42:49.461614   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:42:49.461640   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:42:49.461550   56761 retry.go:31] will retry after 494.900191ms: waiting for machine to come up
	I0717 01:42:49.958521   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:42:49.959123   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:42:49.959149   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:42:49.959078   56761 retry.go:31] will retry after 572.895268ms: waiting for machine to come up
	I0717 01:42:50.533893   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:42:50.534397   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:42:50.534424   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:42:50.534348   56761 retry.go:31] will retry after 846.063347ms: waiting for machine to come up
	I0717 01:42:51.382151   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:42:51.382656   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:42:51.382678   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:42:51.382611   56761 retry.go:31] will retry after 806.363036ms: waiting for machine to come up
	I0717 01:42:47.523253   55923 node_ready.go:53] node "kindnet-894370" has status "Ready":"False"
	I0717 01:42:50.023476   55923 node_ready.go:53] node "kindnet-894370" has status "Ready":"False"
	I0717 01:42:52.151857   55685 pod_ready.go:102] pod "coredns-7db6d8ff4d-4q8q7" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:54.651093   55685 pod_ready.go:102] pod "coredns-7db6d8ff4d-4q8q7" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:52.601856   56553 pod_ready.go:102] pod "coredns-7db6d8ff4d-gkx7k" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:54.602418   56553 pod_ready.go:102] pod "coredns-7db6d8ff4d-gkx7k" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:56.602511   56553 pod_ready.go:102] pod "coredns-7db6d8ff4d-gkx7k" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:52.190775   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:42:52.191232   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:42:52.191269   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:42:52.191198   56761 retry.go:31] will retry after 1.023150099s: waiting for machine to come up
	I0717 01:42:53.215981   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:42:53.216572   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:42:53.216610   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:42:53.216524   56761 retry.go:31] will retry after 1.472682341s: waiting for machine to come up
	I0717 01:42:54.690501   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:42:54.691031   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:42:54.691056   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:42:54.690979   56761 retry.go:31] will retry after 2.283481718s: waiting for machine to come up
	I0717 01:42:56.977468   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:42:56.978087   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:42:56.978123   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:42:56.978024   56761 retry.go:31] will retry after 2.71877136s: waiting for machine to come up
	I0717 01:42:52.522256   55923 node_ready.go:53] node "kindnet-894370" has status "Ready":"False"
	I0717 01:42:54.522301   55923 node_ready.go:53] node "kindnet-894370" has status "Ready":"False"
	I0717 01:42:56.522883   55923 node_ready.go:53] node "kindnet-894370" has status "Ready":"False"
	I0717 01:42:56.651544   55685 pod_ready.go:102] pod "coredns-7db6d8ff4d-4q8q7" in "kube-system" namespace has status "Ready":"False"
	I0717 01:42:57.153099   55685 pod_ready.go:92] pod "coredns-7db6d8ff4d-4q8q7" in "kube-system" namespace has status "Ready":"True"
	I0717 01:42:57.153125   55685 pod_ready.go:81] duration metric: took 41.508389081s for pod "coredns-7db6d8ff4d-4q8q7" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:57.153138   55685 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-zt5wx" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:57.155085   55685 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-zt5wx" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-zt5wx" not found
	I0717 01:42:57.155112   55685 pod_ready.go:81] duration metric: took 1.966636ms for pod "coredns-7db6d8ff4d-zt5wx" in "kube-system" namespace to be "Ready" ...
	E0717 01:42:57.155122   55685 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-zt5wx" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-zt5wx" not found
	I0717 01:42:57.155127   55685 pod_ready.go:78] waiting up to 15m0s for pod "etcd-auto-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:57.160312   55685 pod_ready.go:92] pod "etcd-auto-894370" in "kube-system" namespace has status "Ready":"True"
	I0717 01:42:57.160332   55685 pod_ready.go:81] duration metric: took 5.198765ms for pod "etcd-auto-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:57.160342   55685 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-auto-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:57.164432   55685 pod_ready.go:92] pod "kube-apiserver-auto-894370" in "kube-system" namespace has status "Ready":"True"
	I0717 01:42:57.164448   55685 pod_ready.go:81] duration metric: took 4.098742ms for pod "kube-apiserver-auto-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:57.164455   55685 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-auto-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:57.168472   55685 pod_ready.go:92] pod "kube-controller-manager-auto-894370" in "kube-system" namespace has status "Ready":"True"
	I0717 01:42:57.168491   55685 pod_ready.go:81] duration metric: took 4.028636ms for pod "kube-controller-manager-auto-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:57.168501   55685 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-lq55v" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:57.351052   55685 pod_ready.go:92] pod "kube-proxy-lq55v" in "kube-system" namespace has status "Ready":"True"
	I0717 01:42:57.351085   55685 pod_ready.go:81] duration metric: took 182.575634ms for pod "kube-proxy-lq55v" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:57.351098   55685 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-auto-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:57.750123   55685 pod_ready.go:92] pod "kube-scheduler-auto-894370" in "kube-system" namespace has status "Ready":"True"
	I0717 01:42:57.750151   55685 pod_ready.go:81] duration metric: took 399.045886ms for pod "kube-scheduler-auto-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:57.750204   55685 pod_ready.go:38] duration metric: took 42.113651427s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:42:57.750226   55685 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:42:57.750283   55685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:42:57.771681   55685 api_server.go:72] duration metric: took 42.670921869s to wait for apiserver process to appear ...
	I0717 01:42:57.771758   55685 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:42:57.771785   55685 api_server.go:253] Checking apiserver healthz at https://192.168.50.138:8443/healthz ...
	I0717 01:42:57.782236   55685 api_server.go:279] https://192.168.50.138:8443/healthz returned 200:
	ok
	I0717 01:42:57.784476   55685 api_server.go:141] control plane version: v1.30.2
	I0717 01:42:57.784543   55685 api_server.go:131] duration metric: took 12.77429ms to wait for apiserver health ...
	I0717 01:42:57.784555   55685 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:42:57.953860   55685 system_pods.go:59] 7 kube-system pods found
	I0717 01:42:57.953897   55685 system_pods.go:61] "coredns-7db6d8ff4d-4q8q7" [d8103677-9dcf-4aef-9581-e0ddec7a1aaa] Running
	I0717 01:42:57.953904   55685 system_pods.go:61] "etcd-auto-894370" [65f9b263-0e6b-4201-abe2-b504b5712588] Running
	I0717 01:42:57.953909   55685 system_pods.go:61] "kube-apiserver-auto-894370" [2d92bc2d-4d53-47da-8863-3ed8036b1185] Running
	I0717 01:42:57.953914   55685 system_pods.go:61] "kube-controller-manager-auto-894370" [f206f4a3-c3ed-4977-9c4c-ae47953feb20] Running
	I0717 01:42:57.953922   55685 system_pods.go:61] "kube-proxy-lq55v" [8b029947-4c40-4479-86bb-fd4b4ea01d08] Running
	I0717 01:42:57.953926   55685 system_pods.go:61] "kube-scheduler-auto-894370" [8743f3ef-bb58-44b2-8f1c-ff6dcf06e153] Running
	I0717 01:42:57.953931   55685 system_pods.go:61] "storage-provisioner" [b8aeb785-acb0-4c62-87e5-06d87056ff6d] Running
	I0717 01:42:57.953939   55685 system_pods.go:74] duration metric: took 169.376299ms to wait for pod list to return data ...
	I0717 01:42:57.953949   55685 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:42:58.149482   55685 default_sa.go:45] found service account: "default"
	I0717 01:42:58.149519   55685 default_sa.go:55] duration metric: took 195.561535ms for default service account to be created ...
	I0717 01:42:58.149530   55685 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:42:58.353251   55685 system_pods.go:86] 7 kube-system pods found
	I0717 01:42:58.353281   55685 system_pods.go:89] "coredns-7db6d8ff4d-4q8q7" [d8103677-9dcf-4aef-9581-e0ddec7a1aaa] Running
	I0717 01:42:58.353290   55685 system_pods.go:89] "etcd-auto-894370" [65f9b263-0e6b-4201-abe2-b504b5712588] Running
	I0717 01:42:58.353296   55685 system_pods.go:89] "kube-apiserver-auto-894370" [2d92bc2d-4d53-47da-8863-3ed8036b1185] Running
	I0717 01:42:58.353303   55685 system_pods.go:89] "kube-controller-manager-auto-894370" [f206f4a3-c3ed-4977-9c4c-ae47953feb20] Running
	I0717 01:42:58.353309   55685 system_pods.go:89] "kube-proxy-lq55v" [8b029947-4c40-4479-86bb-fd4b4ea01d08] Running
	I0717 01:42:58.353316   55685 system_pods.go:89] "kube-scheduler-auto-894370" [8743f3ef-bb58-44b2-8f1c-ff6dcf06e153] Running
	I0717 01:42:58.353321   55685 system_pods.go:89] "storage-provisioner" [b8aeb785-acb0-4c62-87e5-06d87056ff6d] Running
	I0717 01:42:58.353351   55685 system_pods.go:126] duration metric: took 203.8148ms to wait for k8s-apps to be running ...
	I0717 01:42:58.353364   55685 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:42:58.353448   55685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:42:58.391912   55685 system_svc.go:56] duration metric: took 38.53861ms WaitForService to wait for kubelet
	I0717 01:42:58.391944   55685 kubeadm.go:582] duration metric: took 43.291190101s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:42:58.391969   55685 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:42:58.549731   55685 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:42:58.549770   55685 node_conditions.go:123] node cpu capacity is 2
	I0717 01:42:58.549788   55685 node_conditions.go:105] duration metric: took 157.812536ms to run NodePressure ...
	I0717 01:42:58.549803   55685 start.go:241] waiting for startup goroutines ...
	I0717 01:42:58.549814   55685 start.go:246] waiting for cluster config update ...
	I0717 01:42:58.549827   55685 start.go:255] writing updated cluster config ...
	I0717 01:42:58.550194   55685 ssh_runner.go:195] Run: rm -f paused
	I0717 01:42:58.603781   55685 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 01:42:58.605769   55685 out.go:177] * Done! kubectl is now configured to use "auto-894370" cluster and "default" namespace by default
	I0717 01:42:57.522425   55923 node_ready.go:49] node "kindnet-894370" has status "Ready":"True"
	I0717 01:42:57.522450   55923 node_ready.go:38] duration metric: took 16.504181902s for node "kindnet-894370" to be "Ready" ...
	I0717 01:42:57.522459   55923 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:42:57.530462   55923 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-7kmmz" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:58.538931   55923 pod_ready.go:92] pod "coredns-7db6d8ff4d-7kmmz" in "kube-system" namespace has status "Ready":"True"
	I0717 01:42:58.538956   55923 pod_ready.go:81] duration metric: took 1.008467677s for pod "coredns-7db6d8ff4d-7kmmz" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:58.538967   55923 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-c8bzb" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:58.545369   55923 pod_ready.go:92] pod "coredns-7db6d8ff4d-c8bzb" in "kube-system" namespace has status "Ready":"True"
	I0717 01:42:58.545397   55923 pod_ready.go:81] duration metric: took 6.422536ms for pod "coredns-7db6d8ff4d-c8bzb" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:58.545411   55923 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kindnet-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:58.551414   55923 pod_ready.go:92] pod "etcd-kindnet-894370" in "kube-system" namespace has status "Ready":"True"
	I0717 01:42:58.551438   55923 pod_ready.go:81] duration metric: took 6.018202ms for pod "etcd-kindnet-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:58.551454   55923 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kindnet-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:58.556709   55923 pod_ready.go:92] pod "kube-apiserver-kindnet-894370" in "kube-system" namespace has status "Ready":"True"
	I0717 01:42:58.556736   55923 pod_ready.go:81] duration metric: took 5.270094ms for pod "kube-apiserver-kindnet-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:58.556749   55923 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kindnet-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:58.724495   55923 pod_ready.go:92] pod "kube-controller-manager-kindnet-894370" in "kube-system" namespace has status "Ready":"True"
	I0717 01:42:58.724534   55923 pod_ready.go:81] duration metric: took 167.776237ms for pod "kube-controller-manager-kindnet-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:58.724549   55923 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-xjmxc" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:59.122600   55923 pod_ready.go:92] pod "kube-proxy-xjmxc" in "kube-system" namespace has status "Ready":"True"
	I0717 01:42:59.122623   55923 pod_ready.go:81] duration metric: took 398.065163ms for pod "kube-proxy-xjmxc" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:59.122632   55923 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kindnet-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:59.525312   55923 pod_ready.go:92] pod "kube-scheduler-kindnet-894370" in "kube-system" namespace has status "Ready":"True"
	I0717 01:42:59.525340   55923 pod_ready.go:81] duration metric: took 402.699932ms for pod "kube-scheduler-kindnet-894370" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:59.525353   55923 pod_ready.go:38] duration metric: took 2.002883775s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:42:59.525372   55923 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:42:59.525426   55923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:42:59.543608   55923 api_server.go:72] duration metric: took 19.652683934s to wait for apiserver process to appear ...
	I0717 01:42:59.543633   55923 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:42:59.543650   55923 api_server.go:253] Checking apiserver healthz at https://192.168.61.22:8443/healthz ...
	I0717 01:42:59.550426   55923 api_server.go:279] https://192.168.61.22:8443/healthz returned 200:
	ok
	I0717 01:42:59.551994   55923 api_server.go:141] control plane version: v1.30.2
	I0717 01:42:59.552013   55923 api_server.go:131] duration metric: took 8.375011ms to wait for apiserver health ...
	I0717 01:42:59.552022   55923 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:42:59.729467   55923 system_pods.go:59] 9 kube-system pods found
	I0717 01:42:59.729509   55923 system_pods.go:61] "coredns-7db6d8ff4d-7kmmz" [dcf940c7-926d-4b26-9b7c-982e26ccf4e6] Running
	I0717 01:42:59.729517   55923 system_pods.go:61] "coredns-7db6d8ff4d-c8bzb" [eea52b3a-5e60-4927-b0d7-a54e08502f75] Running
	I0717 01:42:59.729522   55923 system_pods.go:61] "etcd-kindnet-894370" [ce3e78b4-4b89-4229-9657-27b901d63eba] Running
	I0717 01:42:59.729527   55923 system_pods.go:61] "kindnet-tjrjz" [5175b71b-f875-4cd6-b743-a3b9059ac1d5] Running
	I0717 01:42:59.729531   55923 system_pods.go:61] "kube-apiserver-kindnet-894370" [4e815e32-4f86-4eea-a750-730e02035564] Running
	I0717 01:42:59.729536   55923 system_pods.go:61] "kube-controller-manager-kindnet-894370" [5874fe5e-b2bf-42dd-a961-d667ede7baca] Running
	I0717 01:42:59.729541   55923 system_pods.go:61] "kube-proxy-xjmxc" [1858afa5-0485-47c2-8850-303e206420a8] Running
	I0717 01:42:59.729551   55923 system_pods.go:61] "kube-scheduler-kindnet-894370" [e391956d-a048-4ddb-ba94-caf2b7e4277b] Running
	I0717 01:42:59.729557   55923 system_pods.go:61] "storage-provisioner" [dfa8d38a-802f-4bf1-b769-929118d399ae] Running
	I0717 01:42:59.729566   55923 system_pods.go:74] duration metric: took 177.539013ms to wait for pod list to return data ...
	I0717 01:42:59.729578   55923 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:42:59.922461   55923 default_sa.go:45] found service account: "default"
	I0717 01:42:59.922483   55923 default_sa.go:55] duration metric: took 192.895981ms for default service account to be created ...
	I0717 01:42:59.922494   55923 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:43:00.126166   55923 system_pods.go:86] 9 kube-system pods found
	I0717 01:43:00.126202   55923 system_pods.go:89] "coredns-7db6d8ff4d-7kmmz" [dcf940c7-926d-4b26-9b7c-982e26ccf4e6] Running
	I0717 01:43:00.126210   55923 system_pods.go:89] "coredns-7db6d8ff4d-c8bzb" [eea52b3a-5e60-4927-b0d7-a54e08502f75] Running
	I0717 01:43:00.126216   55923 system_pods.go:89] "etcd-kindnet-894370" [ce3e78b4-4b89-4229-9657-27b901d63eba] Running
	I0717 01:43:00.126222   55923 system_pods.go:89] "kindnet-tjrjz" [5175b71b-f875-4cd6-b743-a3b9059ac1d5] Running
	I0717 01:43:00.126232   55923 system_pods.go:89] "kube-apiserver-kindnet-894370" [4e815e32-4f86-4eea-a750-730e02035564] Running
	I0717 01:43:00.126238   55923 system_pods.go:89] "kube-controller-manager-kindnet-894370" [5874fe5e-b2bf-42dd-a961-d667ede7baca] Running
	I0717 01:43:00.126244   55923 system_pods.go:89] "kube-proxy-xjmxc" [1858afa5-0485-47c2-8850-303e206420a8] Running
	I0717 01:43:00.126249   55923 system_pods.go:89] "kube-scheduler-kindnet-894370" [e391956d-a048-4ddb-ba94-caf2b7e4277b] Running
	I0717 01:43:00.126256   55923 system_pods.go:89] "storage-provisioner" [dfa8d38a-802f-4bf1-b769-929118d399ae] Running
	I0717 01:43:00.126263   55923 system_pods.go:126] duration metric: took 203.763898ms to wait for k8s-apps to be running ...
	I0717 01:43:00.126277   55923 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:43:00.126321   55923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:43:00.142767   55923 system_svc.go:56] duration metric: took 16.481266ms WaitForService to wait for kubelet
	I0717 01:43:00.142800   55923 kubeadm.go:582] duration metric: took 20.251879111s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:43:00.142824   55923 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:43:00.323782   55923 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:43:00.323807   55923 node_conditions.go:123] node cpu capacity is 2
	I0717 01:43:00.323828   55923 node_conditions.go:105] duration metric: took 180.99892ms to run NodePressure ...
	I0717 01:43:00.323839   55923 start.go:241] waiting for startup goroutines ...
	I0717 01:43:00.323847   55923 start.go:246] waiting for cluster config update ...
	I0717 01:43:00.323856   55923 start.go:255] writing updated cluster config ...
	I0717 01:43:00.324071   55923 ssh_runner.go:195] Run: rm -f paused
	I0717 01:43:00.372345   55923 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 01:43:00.373906   55923 out.go:177] * Done! kubectl is now configured to use "kindnet-894370" cluster and "default" namespace by default
	I0717 01:42:58.601664   56553 pod_ready.go:92] pod "coredns-7db6d8ff4d-gkx7k" in "kube-system" namespace has status "Ready":"True"
	I0717 01:42:58.601691   56553 pod_ready.go:81] duration metric: took 8.007361038s for pod "coredns-7db6d8ff4d-gkx7k" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:58.601704   56553 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:00.607834   56553 pod_ready.go:102] pod "etcd-pause-056024" in "kube-system" namespace has status "Ready":"False"
	I0717 01:43:01.108852   56553 pod_ready.go:92] pod "etcd-pause-056024" in "kube-system" namespace has status "Ready":"True"
	I0717 01:43:01.108881   56553 pod_ready.go:81] duration metric: took 2.507168773s for pod "etcd-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:01.108893   56553 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:01.621820   56553 pod_ready.go:92] pod "kube-apiserver-pause-056024" in "kube-system" namespace has status "Ready":"True"
	I0717 01:43:01.621849   56553 pod_ready.go:81] duration metric: took 512.946854ms for pod "kube-apiserver-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:01.621859   56553 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:01.629480   56553 pod_ready.go:92] pod "kube-controller-manager-pause-056024" in "kube-system" namespace has status "Ready":"True"
	I0717 01:43:01.629502   56553 pod_ready.go:81] duration metric: took 7.636033ms for pod "kube-controller-manager-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:01.629514   56553 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-w9cq7" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:01.638809   56553 pod_ready.go:92] pod "kube-proxy-w9cq7" in "kube-system" namespace has status "Ready":"True"
	I0717 01:43:01.638833   56553 pod_ready.go:81] duration metric: took 9.311979ms for pod "kube-proxy-w9cq7" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:01.638846   56553 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:42:59.699518   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:42:59.700010   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:42:59.700043   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:42:59.699930   56761 retry.go:31] will retry after 3.537615064s: waiting for machine to come up
	I0717 01:43:02.145127   56553 pod_ready.go:92] pod "kube-scheduler-pause-056024" in "kube-system" namespace has status "Ready":"True"
	I0717 01:43:02.145156   56553 pod_ready.go:81] duration metric: took 506.298418ms for pod "kube-scheduler-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:02.145164   56553 pod_ready.go:38] duration metric: took 11.558668479s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:43:02.145179   56553 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 01:43:02.160810   56553 ops.go:34] apiserver oom_adj: -16
	I0717 01:43:02.160837   56553 kubeadm.go:597] duration metric: took 19.202268942s to restartPrimaryControlPlane
	I0717 01:43:02.160848   56553 kubeadm.go:394] duration metric: took 19.291514343s to StartCluster
	I0717 01:43:02.160867   56553 settings.go:142] acquiring lock: {Name:mk284e98550e98148329d8d8958a44beebf5635c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:43:02.160947   56553 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:43:02.162890   56553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:43:02.163179   56553 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:43:02.163837   56553 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 01:43:02.164219   56553 config.go:182] Loaded profile config "pause-056024": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:43:02.165982   56553 out.go:177] * Enabled addons: 
	I0717 01:43:02.165982   56553 out.go:177] * Verifying Kubernetes components...
	I0717 01:43:02.167318   56553 addons.go:510] duration metric: took 3.481789ms for enable addons: enabled=[]
	I0717 01:43:02.167357   56553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:43:02.348726   56553 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:43:02.367476   56553 node_ready.go:35] waiting up to 6m0s for node "pause-056024" to be "Ready" ...
	I0717 01:43:02.370773   56553 node_ready.go:49] node "pause-056024" has status "Ready":"True"
	I0717 01:43:02.370791   56553 node_ready.go:38] duration metric: took 3.28127ms for node "pause-056024" to be "Ready" ...
	I0717 01:43:02.370799   56553 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:43:02.375827   56553 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gkx7k" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:02.705649   56553 pod_ready.go:92] pod "coredns-7db6d8ff4d-gkx7k" in "kube-system" namespace has status "Ready":"True"
	I0717 01:43:02.705672   56553 pod_ready.go:81] duration metric: took 329.824993ms for pod "coredns-7db6d8ff4d-gkx7k" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:02.705682   56553 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:03.106322   56553 pod_ready.go:92] pod "etcd-pause-056024" in "kube-system" namespace has status "Ready":"True"
	I0717 01:43:03.106354   56553 pod_ready.go:81] duration metric: took 400.663883ms for pod "etcd-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:03.106367   56553 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:03.505690   56553 pod_ready.go:92] pod "kube-apiserver-pause-056024" in "kube-system" namespace has status "Ready":"True"
	I0717 01:43:03.505718   56553 pod_ready.go:81] duration metric: took 399.342172ms for pod "kube-apiserver-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:03.505739   56553 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:03.906103   56553 pod_ready.go:92] pod "kube-controller-manager-pause-056024" in "kube-system" namespace has status "Ready":"True"
	I0717 01:43:03.906130   56553 pod_ready.go:81] duration metric: took 400.383398ms for pod "kube-controller-manager-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:03.906142   56553 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w9cq7" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:04.305471   56553 pod_ready.go:92] pod "kube-proxy-w9cq7" in "kube-system" namespace has status "Ready":"True"
	I0717 01:43:04.305498   56553 pod_ready.go:81] duration metric: took 399.349486ms for pod "kube-proxy-w9cq7" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:04.305510   56553 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:04.706208   56553 pod_ready.go:92] pod "kube-scheduler-pause-056024" in "kube-system" namespace has status "Ready":"True"
	I0717 01:43:04.706232   56553 pod_ready.go:81] duration metric: took 400.713751ms for pod "kube-scheduler-pause-056024" in "kube-system" namespace to be "Ready" ...
	I0717 01:43:04.706243   56553 pod_ready.go:38] duration metric: took 2.335435085s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:43:04.706259   56553 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:43:04.706312   56553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:43:04.733707   56553 api_server.go:72] duration metric: took 2.57049832s to wait for apiserver process to appear ...
	I0717 01:43:04.733732   56553 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:43:04.733748   56553 api_server.go:253] Checking apiserver healthz at https://192.168.39.97:8443/healthz ...
	I0717 01:43:04.738010   56553 api_server.go:279] https://192.168.39.97:8443/healthz returned 200:
	ok
	I0717 01:43:04.738872   56553 api_server.go:141] control plane version: v1.30.2
	I0717 01:43:04.738892   56553 api_server.go:131] duration metric: took 5.153768ms to wait for apiserver health ...
	I0717 01:43:04.738901   56553 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:43:04.907925   56553 system_pods.go:59] 6 kube-system pods found
	I0717 01:43:04.907952   56553 system_pods.go:61] "coredns-7db6d8ff4d-gkx7k" [f2471beb-346c-4784-a3c5-a5ecc6f8e8a6] Running
	I0717 01:43:04.907957   56553 system_pods.go:61] "etcd-pause-056024" [f2c958e6-5ec8-4981-bf04-604483fcde3f] Running
	I0717 01:43:04.907960   56553 system_pods.go:61] "kube-apiserver-pause-056024" [fa9af1c1-de22-4ee5-9829-94cc31bd33f3] Running
	I0717 01:43:04.907966   56553 system_pods.go:61] "kube-controller-manager-pause-056024" [f8aabc0f-39bf-4e32-a58d-4ba9c97108a4] Running
	I0717 01:43:04.907969   56553 system_pods.go:61] "kube-proxy-w9cq7" [f908608a-f77f-4653-86a3-1b535c9c6973] Running
	I0717 01:43:04.907972   56553 system_pods.go:61] "kube-scheduler-pause-056024" [386119a0-65b2-435a-bca3-0d3b198466d9] Running
	I0717 01:43:04.907977   56553 system_pods.go:74] duration metric: took 169.07147ms to wait for pod list to return data ...
	I0717 01:43:04.907985   56553 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:43:05.105641   56553 default_sa.go:45] found service account: "default"
	I0717 01:43:05.105668   56553 default_sa.go:55] duration metric: took 197.678061ms for default service account to be created ...
	I0717 01:43:05.105678   56553 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:43:05.307575   56553 system_pods.go:86] 6 kube-system pods found
	I0717 01:43:05.307606   56553 system_pods.go:89] "coredns-7db6d8ff4d-gkx7k" [f2471beb-346c-4784-a3c5-a5ecc6f8e8a6] Running
	I0717 01:43:05.307614   56553 system_pods.go:89] "etcd-pause-056024" [f2c958e6-5ec8-4981-bf04-604483fcde3f] Running
	I0717 01:43:05.307621   56553 system_pods.go:89] "kube-apiserver-pause-056024" [fa9af1c1-de22-4ee5-9829-94cc31bd33f3] Running
	I0717 01:43:05.307628   56553 system_pods.go:89] "kube-controller-manager-pause-056024" [f8aabc0f-39bf-4e32-a58d-4ba9c97108a4] Running
	I0717 01:43:05.307633   56553 system_pods.go:89] "kube-proxy-w9cq7" [f908608a-f77f-4653-86a3-1b535c9c6973] Running
	I0717 01:43:05.307638   56553 system_pods.go:89] "kube-scheduler-pause-056024" [386119a0-65b2-435a-bca3-0d3b198466d9] Running
	I0717 01:43:05.307647   56553 system_pods.go:126] duration metric: took 201.962051ms to wait for k8s-apps to be running ...
	I0717 01:43:05.307655   56553 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:43:05.307705   56553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:43:05.323078   56553 system_svc.go:56] duration metric: took 15.415444ms WaitForService to wait for kubelet
	I0717 01:43:05.323112   56553 kubeadm.go:582] duration metric: took 3.159906333s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:43:05.323134   56553 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:43:05.506823   56553 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:43:05.506852   56553 node_conditions.go:123] node cpu capacity is 2
	I0717 01:43:05.506863   56553 node_conditions.go:105] duration metric: took 183.723935ms to run NodePressure ...
	I0717 01:43:05.506876   56553 start.go:241] waiting for startup goroutines ...
	I0717 01:43:05.506885   56553 start.go:246] waiting for cluster config update ...
	I0717 01:43:05.506896   56553 start.go:255] writing updated cluster config ...
	I0717 01:43:05.507190   56553 ssh_runner.go:195] Run: rm -f paused
	I0717 01:43:05.555390   56553 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 01:43:05.557471   56553 out.go:177] * Done! kubectl is now configured to use "pause-056024" cluster and "default" namespace by default
	I0717 01:43:03.239458   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:03.239922   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | unable to find current IP address of domain kubernetes-upgrade-572332 in network mk-kubernetes-upgrade-572332
	I0717 01:43:03.239954   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | I0717 01:43:03.239876   56761 retry.go:31] will retry after 2.883256643s: waiting for machine to come up
	I0717 01:43:06.126841   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:06.127396   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Found IP for machine: 192.168.72.73
	I0717 01:43:06.127422   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Reserving static IP address...
	I0717 01:43:06.127440   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has current primary IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:06.127903   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Reserved static IP address: 192.168.72.73
	I0717 01:43:06.127961   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "kubernetes-upgrade-572332", mac: "52:54:00:e2:36:51", ip: "192.168.72.73"} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:42:58 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:43:06.127982   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Waiting for SSH to be available...
	I0717 01:43:06.128010   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | skip adding static IP to network mk-kubernetes-upgrade-572332 - found existing host DHCP lease matching {name: "kubernetes-upgrade-572332", mac: "52:54:00:e2:36:51", ip: "192.168.72.73"}
	I0717 01:43:06.128034   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | Getting to WaitForSSH function...
	I0717 01:43:06.130345   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:06.130740   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:42:58 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:43:06.130786   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:06.130947   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | Using SSH client type: external
	I0717 01:43:06.130986   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/kubernetes-upgrade-572332/id_rsa (-rw-------)
	I0717 01:43:06.131019   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.73 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/kubernetes-upgrade-572332/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:43:06.131035   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | About to run SSH command:
	I0717 01:43:06.131047   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | exit 0
	I0717 01:43:06.263942   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | SSH cmd err, output: <nil>: 
	I0717 01:43:06.264383   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetConfigRaw
	I0717 01:43:06.265031   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetIP
	I0717 01:43:06.267578   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:06.267985   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:42:58 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:43:06.268028   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:06.268265   56726 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kubernetes-upgrade-572332/config.json ...
	I0717 01:43:06.268513   56726 machine.go:94] provisionDockerMachine start ...
	I0717 01:43:06.268542   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .DriverName
	I0717 01:43:06.268777   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHHostname
	I0717 01:43:06.270880   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:06.271225   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:42:58 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:43:06.271251   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:06.271413   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHPort
	I0717 01:43:06.271608   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:43:06.271775   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:43:06.271938   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHUsername
	I0717 01:43:06.272114   56726 main.go:141] libmachine: Using SSH client type: native
	I0717 01:43:06.272351   56726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.73 22 <nil> <nil>}
	I0717 01:43:06.272367   56726 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:43:06.391375   56726 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:43:06.391406   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetMachineName
	I0717 01:43:06.391674   56726 buildroot.go:166] provisioning hostname "kubernetes-upgrade-572332"
	I0717 01:43:06.391703   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetMachineName
	I0717 01:43:06.391979   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHHostname
	I0717 01:43:06.395459   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:06.395951   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:42:58 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:43:06.395994   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:06.396200   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHPort
	I0717 01:43:06.396396   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:43:06.396606   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:43:06.396784   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHUsername
	I0717 01:43:06.396967   56726 main.go:141] libmachine: Using SSH client type: native
	I0717 01:43:06.397228   56726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.73 22 <nil> <nil>}
	I0717 01:43:06.397247   56726 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-572332 && echo "kubernetes-upgrade-572332" | sudo tee /etc/hostname
	I0717 01:43:06.538412   56726 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-572332
	
	I0717 01:43:06.538454   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHHostname
	I0717 01:43:06.540980   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:06.541356   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:42:58 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:43:06.541390   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:06.541525   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHPort
	I0717 01:43:06.541703   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:43:06.541856   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:43:06.541996   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHUsername
	I0717 01:43:06.542132   56726 main.go:141] libmachine: Using SSH client type: native
	I0717 01:43:06.542334   56726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.73 22 <nil> <nil>}
	I0717 01:43:06.542352   56726 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-572332' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-572332/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-572332' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:43:06.675879   56726 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:43:06.675915   56726 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:43:06.675948   56726 buildroot.go:174] setting up certificates
	I0717 01:43:06.675960   56726 provision.go:84] configureAuth start
	I0717 01:43:06.675981   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetMachineName
	I0717 01:43:06.676946   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetIP
	I0717 01:43:06.680369   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:06.680455   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:42:58 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:43:06.680470   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:06.680754   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHHostname
	I0717 01:43:06.683220   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:06.683567   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:42:58 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:43:06.683605   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:06.683680   56726 provision.go:143] copyHostCerts
	I0717 01:43:06.683745   56726 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:43:06.683758   56726 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:43:06.683815   56726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:43:06.683881   56726 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:43:06.683889   56726 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:43:06.683910   56726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:43:06.683995   56726 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:43:06.684005   56726 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:43:06.684026   56726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:43:06.684069   56726 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-572332 san=[127.0.0.1 192.168.72.73 kubernetes-upgrade-572332 localhost minikube]
	I0717 01:43:06.751807   56726 provision.go:177] copyRemoteCerts
	I0717 01:43:06.751879   56726 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:43:06.751927   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHHostname
	I0717 01:43:06.755524   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:06.755972   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:42:58 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:43:06.756011   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:06.756224   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHPort
	I0717 01:43:06.756452   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:43:06.756628   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHUsername
	I0717 01:43:06.756760   56726 sshutil.go:53] new ssh client: &{IP:192.168.72.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/kubernetes-upgrade-572332/id_rsa Username:docker}
	I0717 01:43:06.851648   56726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:43:06.883348   56726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 01:43:06.913994   56726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 01:43:06.940546   56726 provision.go:87] duration metric: took 264.564527ms to configureAuth
	I0717 01:43:06.940579   56726 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:43:06.940812   56726 config.go:182] Loaded profile config "kubernetes-upgrade-572332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:43:06.940902   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHHostname
	I0717 01:43:06.943764   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:06.944132   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:36:51", ip: ""} in network mk-kubernetes-upgrade-572332: {Iface:virbr2 ExpiryTime:2024-07-17 02:42:58 +0000 UTC Type:0 Mac:52:54:00:e2:36:51 Iaid: IPaddr:192.168.72.73 Prefix:24 Hostname:kubernetes-upgrade-572332 Clientid:01:52:54:00:e2:36:51}
	I0717 01:43:06.944161   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) DBG | domain kubernetes-upgrade-572332 has defined IP address 192.168.72.73 and MAC address 52:54:00:e2:36:51 in network mk-kubernetes-upgrade-572332
	I0717 01:43:06.944332   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHPort
	I0717 01:43:06.944534   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:43:06.944736   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHKeyPath
	I0717 01:43:06.944904   56726 main.go:141] libmachine: (kubernetes-upgrade-572332) Calling .GetSSHUsername
	I0717 01:43:06.945124   56726 main.go:141] libmachine: Using SSH client type: native
	I0717 01:43:06.945317   56726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.73 22 <nil> <nil>}
	I0717 01:43:06.945335   56726 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	
	
	==> CRI-O <==
	Jul 17 01:43:08 pause-056024 crio[2971]: time="2024-07-17 01:43:08.312018550Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180588311990994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=32830ab0-539b-4b37-b517-f24bf568bdf7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:08 pause-056024 crio[2971]: time="2024-07-17 01:43:08.312677421Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ed9df0e9-c417-4b98-9188-431dbd930475 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:08 pause-056024 crio[2971]: time="2024-07-17 01:43:08.312790850Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ed9df0e9-c417-4b98-9188-431dbd930475 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:08 pause-056024 crio[2971]: time="2024-07-17 01:43:08.313378666Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4f99740c21e53f87133bff2d52a0bd56c2a02626274b365bdaae0e12964a4e4,PodSandboxId:ad6e690168a8d0e7a28f39b6e9d0f6483d0e265ba094592206e447b3e5a0540d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180569898448501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gkx7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2471beb-346c-4784-a3c5-a5ecc6f8e8a6,},Annotations:map[string]string{io.kubernetes.container.hash: 368cc453,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76da7a10c63d2b3b3c03dbacafc0f9da88957e40b9d46e154cfebe79a45d8778,PodSandboxId:03e43a98177260e7689e310bd3db81fa7f9d28db9aedad14a8aafbd4729bf0e6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721180569866736257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9cq7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: f908608a-f77f-4653-86a3-1b535c9c6973,},Annotations:map[string]string{io.kubernetes.container.hash: 93a510eb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d49a1cb5b4ebc6a4f84beed352723268474608d2654cc6a59ef5325cda633f7,PodSandboxId:d202bf4689f3162765115a2531c5d943877ad7a0d48ae2a5f92d01e57371af1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721180565113247261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6243d24d04
72f8e244e25b457792cc43,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e870955bc1a988bf2a6a9f3a25805dfcfe58bc0abbeced196cfaf88a009195c,PodSandboxId:5a1e554720c05a07354ed18e5f5680e74fcdd66267bb05f5e14168e02631e194,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721180565077063086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 496b12d8541f498238ec070fdd540
8cf,},Annotations:map[string]string{io.kubernetes.container.hash: 315538b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f8fd0610adffbaef05d966acf6cd7a87404f8b707eacd64e50b760417dd998,PodSandboxId:40a6cc729d8bbfcbb28535064584be1c17945b9b734f2321bd9892f5838f7f27,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721180565045618313,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c83373b93ee4c933a9ca2b2b2e77367b,},Annotations:map[string]string{io.kubernete
s.container.hash: 777f8d0d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:796bf2c3a47315825765e5622511b58b5ffa48c0f1c59003f1d67537e3fe66b9,PodSandboxId:c62cef6afd1e5c65fddc44ee1f9fa1f2de4af1bb56a60b9db5e8dc9dc0b739a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721180565058599014,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bac7706a3d4d40242d264e09577770c8,},Annotations:map[string]string{io.
kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a13e07f93ced4226e8b9a913fee7b6c0005c42e9d5eea92e97408b95f377984,PodSandboxId:7ab5efb667a491f7aa8ded2cb2dbe76052536a0e23b72fb6b61fab371d1065cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721180560461415952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gkx7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2471beb-346c-4784-a3c5-a5ecc6f8e8a6,},Annotations:map[string]string{io.kubernetes.container.hash: 368cc
453,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27a9da40fbea12eaae90daba03f07ebda991b7ddd0520039a163a0ebf7d79ec9,PodSandboxId:31a5f1177affc0509ff51333a1dd6cfbc65ed9d9d3443eddd07922527abeef2c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721180559262865079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-w9cq7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f908608a-f77f-4653-86a3-1b535c9c6973,},Annotations:map[string]string{io.kubernetes.container.hash: 93a510eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a70a0091dd4e264f12c365b9e2ba025c18219a4d8c82bdec90fad3fc5028ac42,PodSandboxId:2a701dfd17e7641a4b5b7bd95e07e38bf6d31f407159c4665459c3ca104bbe9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721180559253998870,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-paus
e-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6243d24d0472f8e244e25b457792cc43,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbeb675206edf1dfe0c9e96d84984ed02c367ae7e20d2271fcb09a7529235140,PodSandboxId:c0506fa9493537d49342fd9bc2f31bdcd78ea0b8d65e382e841d4e9f4466f2fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721180559209983237,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-056024,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: c83373b93ee4c933a9ca2b2b2e77367b,},Annotations:map[string]string{io.kubernetes.container.hash: 777f8d0d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c21cec71b2cca131a814dd86ac76987274ebfefffe239dfe467aacd88359c17,PodSandboxId:e97f6874a2bb5477f6b7af4136ab489ede4834bf078507ac6e777d5e6f73c9a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721180559158317606,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-056024,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: bac7706a3d4d40242d264e09577770c8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b87f6da726d7902490f3a7b07b32ae2c543449beae034936faf9c3f344593c50,PodSandboxId:fdb935b244396e518b422332eece88c3a90198cb9dec6292365035d86bf00213,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721180559078822574,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 496b12d8541f498238ec070fdd5408cf,},Annotations:map[string]string{io.kubernetes.container.hash: 315538b2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ed9df0e9-c417-4b98-9188-431dbd930475 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:08 pause-056024 crio[2971]: time="2024-07-17 01:43:08.368248697Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a24502cb-7050-42b7-9514-64fed76fe0ee name=/runtime.v1.RuntimeService/Version
	Jul 17 01:43:08 pause-056024 crio[2971]: time="2024-07-17 01:43:08.368373682Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a24502cb-7050-42b7-9514-64fed76fe0ee name=/runtime.v1.RuntimeService/Version
	Jul 17 01:43:08 pause-056024 crio[2971]: time="2024-07-17 01:43:08.376922622Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a7553c75-da8d-433b-86a9-aa69c3adfb48 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:08 pause-056024 crio[2971]: time="2024-07-17 01:43:08.377771951Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180588377734633,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a7553c75-da8d-433b-86a9-aa69c3adfb48 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:08 pause-056024 crio[2971]: time="2024-07-17 01:43:08.379475117Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=185e023d-685e-4c3f-b537-3f9bfc2dff1d name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:08 pause-056024 crio[2971]: time="2024-07-17 01:43:08.379691068Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=185e023d-685e-4c3f-b537-3f9bfc2dff1d name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:08 pause-056024 crio[2971]: time="2024-07-17 01:43:08.380079499Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4f99740c21e53f87133bff2d52a0bd56c2a02626274b365bdaae0e12964a4e4,PodSandboxId:ad6e690168a8d0e7a28f39b6e9d0f6483d0e265ba094592206e447b3e5a0540d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180569898448501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gkx7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2471beb-346c-4784-a3c5-a5ecc6f8e8a6,},Annotations:map[string]string{io.kubernetes.container.hash: 368cc453,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76da7a10c63d2b3b3c03dbacafc0f9da88957e40b9d46e154cfebe79a45d8778,PodSandboxId:03e43a98177260e7689e310bd3db81fa7f9d28db9aedad14a8aafbd4729bf0e6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721180569866736257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9cq7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: f908608a-f77f-4653-86a3-1b535c9c6973,},Annotations:map[string]string{io.kubernetes.container.hash: 93a510eb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d49a1cb5b4ebc6a4f84beed352723268474608d2654cc6a59ef5325cda633f7,PodSandboxId:d202bf4689f3162765115a2531c5d943877ad7a0d48ae2a5f92d01e57371af1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721180565113247261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6243d24d04
72f8e244e25b457792cc43,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e870955bc1a988bf2a6a9f3a25805dfcfe58bc0abbeced196cfaf88a009195c,PodSandboxId:5a1e554720c05a07354ed18e5f5680e74fcdd66267bb05f5e14168e02631e194,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721180565077063086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 496b12d8541f498238ec070fdd540
8cf,},Annotations:map[string]string{io.kubernetes.container.hash: 315538b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f8fd0610adffbaef05d966acf6cd7a87404f8b707eacd64e50b760417dd998,PodSandboxId:40a6cc729d8bbfcbb28535064584be1c17945b9b734f2321bd9892f5838f7f27,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721180565045618313,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c83373b93ee4c933a9ca2b2b2e77367b,},Annotations:map[string]string{io.kubernete
s.container.hash: 777f8d0d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:796bf2c3a47315825765e5622511b58b5ffa48c0f1c59003f1d67537e3fe66b9,PodSandboxId:c62cef6afd1e5c65fddc44ee1f9fa1f2de4af1bb56a60b9db5e8dc9dc0b739a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721180565058599014,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bac7706a3d4d40242d264e09577770c8,},Annotations:map[string]string{io.
kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a13e07f93ced4226e8b9a913fee7b6c0005c42e9d5eea92e97408b95f377984,PodSandboxId:7ab5efb667a491f7aa8ded2cb2dbe76052536a0e23b72fb6b61fab371d1065cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721180560461415952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gkx7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2471beb-346c-4784-a3c5-a5ecc6f8e8a6,},Annotations:map[string]string{io.kubernetes.container.hash: 368cc
453,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27a9da40fbea12eaae90daba03f07ebda991b7ddd0520039a163a0ebf7d79ec9,PodSandboxId:31a5f1177affc0509ff51333a1dd6cfbc65ed9d9d3443eddd07922527abeef2c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721180559262865079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-w9cq7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f908608a-f77f-4653-86a3-1b535c9c6973,},Annotations:map[string]string{io.kubernetes.container.hash: 93a510eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a70a0091dd4e264f12c365b9e2ba025c18219a4d8c82bdec90fad3fc5028ac42,PodSandboxId:2a701dfd17e7641a4b5b7bd95e07e38bf6d31f407159c4665459c3ca104bbe9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721180559253998870,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-paus
e-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6243d24d0472f8e244e25b457792cc43,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbeb675206edf1dfe0c9e96d84984ed02c367ae7e20d2271fcb09a7529235140,PodSandboxId:c0506fa9493537d49342fd9bc2f31bdcd78ea0b8d65e382e841d4e9f4466f2fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721180559209983237,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-056024,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: c83373b93ee4c933a9ca2b2b2e77367b,},Annotations:map[string]string{io.kubernetes.container.hash: 777f8d0d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c21cec71b2cca131a814dd86ac76987274ebfefffe239dfe467aacd88359c17,PodSandboxId:e97f6874a2bb5477f6b7af4136ab489ede4834bf078507ac6e777d5e6f73c9a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721180559158317606,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-056024,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: bac7706a3d4d40242d264e09577770c8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b87f6da726d7902490f3a7b07b32ae2c543449beae034936faf9c3f344593c50,PodSandboxId:fdb935b244396e518b422332eece88c3a90198cb9dec6292365035d86bf00213,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721180559078822574,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 496b12d8541f498238ec070fdd5408cf,},Annotations:map[string]string{io.kubernetes.container.hash: 315538b2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=185e023d-685e-4c3f-b537-3f9bfc2dff1d name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:08 pause-056024 crio[2971]: time="2024-07-17 01:43:08.426689430Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=15a607d7-4dac-4c7d-884d-2d529c70383e name=/runtime.v1.RuntimeService/Version
	Jul 17 01:43:08 pause-056024 crio[2971]: time="2024-07-17 01:43:08.426947537Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=15a607d7-4dac-4c7d-884d-2d529c70383e name=/runtime.v1.RuntimeService/Version
	Jul 17 01:43:08 pause-056024 crio[2971]: time="2024-07-17 01:43:08.428013728Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5e70a4aa-eec6-41a9-9c1a-be05d614f15f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:08 pause-056024 crio[2971]: time="2024-07-17 01:43:08.428490544Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180588428466253,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5e70a4aa-eec6-41a9-9c1a-be05d614f15f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:08 pause-056024 crio[2971]: time="2024-07-17 01:43:08.428957237Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=49258a90-2ccc-4b2b-8f49-4d13ca24e33e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:08 pause-056024 crio[2971]: time="2024-07-17 01:43:08.429029758Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=49258a90-2ccc-4b2b-8f49-4d13ca24e33e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:08 pause-056024 crio[2971]: time="2024-07-17 01:43:08.429343593Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4f99740c21e53f87133bff2d52a0bd56c2a02626274b365bdaae0e12964a4e4,PodSandboxId:ad6e690168a8d0e7a28f39b6e9d0f6483d0e265ba094592206e447b3e5a0540d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180569898448501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gkx7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2471beb-346c-4784-a3c5-a5ecc6f8e8a6,},Annotations:map[string]string{io.kubernetes.container.hash: 368cc453,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76da7a10c63d2b3b3c03dbacafc0f9da88957e40b9d46e154cfebe79a45d8778,PodSandboxId:03e43a98177260e7689e310bd3db81fa7f9d28db9aedad14a8aafbd4729bf0e6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721180569866736257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9cq7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: f908608a-f77f-4653-86a3-1b535c9c6973,},Annotations:map[string]string{io.kubernetes.container.hash: 93a510eb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d49a1cb5b4ebc6a4f84beed352723268474608d2654cc6a59ef5325cda633f7,PodSandboxId:d202bf4689f3162765115a2531c5d943877ad7a0d48ae2a5f92d01e57371af1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721180565113247261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6243d24d04
72f8e244e25b457792cc43,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e870955bc1a988bf2a6a9f3a25805dfcfe58bc0abbeced196cfaf88a009195c,PodSandboxId:5a1e554720c05a07354ed18e5f5680e74fcdd66267bb05f5e14168e02631e194,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721180565077063086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 496b12d8541f498238ec070fdd540
8cf,},Annotations:map[string]string{io.kubernetes.container.hash: 315538b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f8fd0610adffbaef05d966acf6cd7a87404f8b707eacd64e50b760417dd998,PodSandboxId:40a6cc729d8bbfcbb28535064584be1c17945b9b734f2321bd9892f5838f7f27,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721180565045618313,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c83373b93ee4c933a9ca2b2b2e77367b,},Annotations:map[string]string{io.kubernete
s.container.hash: 777f8d0d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:796bf2c3a47315825765e5622511b58b5ffa48c0f1c59003f1d67537e3fe66b9,PodSandboxId:c62cef6afd1e5c65fddc44ee1f9fa1f2de4af1bb56a60b9db5e8dc9dc0b739a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721180565058599014,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bac7706a3d4d40242d264e09577770c8,},Annotations:map[string]string{io.
kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a13e07f93ced4226e8b9a913fee7b6c0005c42e9d5eea92e97408b95f377984,PodSandboxId:7ab5efb667a491f7aa8ded2cb2dbe76052536a0e23b72fb6b61fab371d1065cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721180560461415952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gkx7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2471beb-346c-4784-a3c5-a5ecc6f8e8a6,},Annotations:map[string]string{io.kubernetes.container.hash: 368cc
453,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27a9da40fbea12eaae90daba03f07ebda991b7ddd0520039a163a0ebf7d79ec9,PodSandboxId:31a5f1177affc0509ff51333a1dd6cfbc65ed9d9d3443eddd07922527abeef2c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721180559262865079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-w9cq7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f908608a-f77f-4653-86a3-1b535c9c6973,},Annotations:map[string]string{io.kubernetes.container.hash: 93a510eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a70a0091dd4e264f12c365b9e2ba025c18219a4d8c82bdec90fad3fc5028ac42,PodSandboxId:2a701dfd17e7641a4b5b7bd95e07e38bf6d31f407159c4665459c3ca104bbe9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721180559253998870,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-paus
e-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6243d24d0472f8e244e25b457792cc43,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbeb675206edf1dfe0c9e96d84984ed02c367ae7e20d2271fcb09a7529235140,PodSandboxId:c0506fa9493537d49342fd9bc2f31bdcd78ea0b8d65e382e841d4e9f4466f2fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721180559209983237,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-056024,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: c83373b93ee4c933a9ca2b2b2e77367b,},Annotations:map[string]string{io.kubernetes.container.hash: 777f8d0d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c21cec71b2cca131a814dd86ac76987274ebfefffe239dfe467aacd88359c17,PodSandboxId:e97f6874a2bb5477f6b7af4136ab489ede4834bf078507ac6e777d5e6f73c9a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721180559158317606,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-056024,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: bac7706a3d4d40242d264e09577770c8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b87f6da726d7902490f3a7b07b32ae2c543449beae034936faf9c3f344593c50,PodSandboxId:fdb935b244396e518b422332eece88c3a90198cb9dec6292365035d86bf00213,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721180559078822574,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 496b12d8541f498238ec070fdd5408cf,},Annotations:map[string]string{io.kubernetes.container.hash: 315538b2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=49258a90-2ccc-4b2b-8f49-4d13ca24e33e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:08 pause-056024 crio[2971]: time="2024-07-17 01:43:08.475637376Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=36df269b-6f01-43b9-afe5-51c115958064 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:43:08 pause-056024 crio[2971]: time="2024-07-17 01:43:08.475751177Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=36df269b-6f01-43b9-afe5-51c115958064 name=/runtime.v1.RuntimeService/Version
	Jul 17 01:43:08 pause-056024 crio[2971]: time="2024-07-17 01:43:08.477471874Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9771b050-c4e9-47a6-914d-520db8aed44d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:08 pause-056024 crio[2971]: time="2024-07-17 01:43:08.477992487Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721180588477961562,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9771b050-c4e9-47a6-914d-520db8aed44d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 01:43:08 pause-056024 crio[2971]: time="2024-07-17 01:43:08.479541302Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ed4e56a-552d-4c84-bccb-315b1ce92e47 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:08 pause-056024 crio[2971]: time="2024-07-17 01:43:08.479773453Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ed4e56a-552d-4c84-bccb-315b1ce92e47 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 01:43:08 pause-056024 crio[2971]: time="2024-07-17 01:43:08.480130324Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4f99740c21e53f87133bff2d52a0bd56c2a02626274b365bdaae0e12964a4e4,PodSandboxId:ad6e690168a8d0e7a28f39b6e9d0f6483d0e265ba094592206e447b3e5a0540d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721180569898448501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gkx7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2471beb-346c-4784-a3c5-a5ecc6f8e8a6,},Annotations:map[string]string{io.kubernetes.container.hash: 368cc453,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76da7a10c63d2b3b3c03dbacafc0f9da88957e40b9d46e154cfebe79a45d8778,PodSandboxId:03e43a98177260e7689e310bd3db81fa7f9d28db9aedad14a8aafbd4729bf0e6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721180569866736257,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-w9cq7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: f908608a-f77f-4653-86a3-1b535c9c6973,},Annotations:map[string]string{io.kubernetes.container.hash: 93a510eb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d49a1cb5b4ebc6a4f84beed352723268474608d2654cc6a59ef5325cda633f7,PodSandboxId:d202bf4689f3162765115a2531c5d943877ad7a0d48ae2a5f92d01e57371af1f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721180565113247261,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6243d24d04
72f8e244e25b457792cc43,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e870955bc1a988bf2a6a9f3a25805dfcfe58bc0abbeced196cfaf88a009195c,PodSandboxId:5a1e554720c05a07354ed18e5f5680e74fcdd66267bb05f5e14168e02631e194,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721180565077063086,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 496b12d8541f498238ec070fdd540
8cf,},Annotations:map[string]string{io.kubernetes.container.hash: 315538b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f8fd0610adffbaef05d966acf6cd7a87404f8b707eacd64e50b760417dd998,PodSandboxId:40a6cc729d8bbfcbb28535064584be1c17945b9b734f2321bd9892f5838f7f27,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721180565045618313,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c83373b93ee4c933a9ca2b2b2e77367b,},Annotations:map[string]string{io.kubernete
s.container.hash: 777f8d0d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:796bf2c3a47315825765e5622511b58b5ffa48c0f1c59003f1d67537e3fe66b9,PodSandboxId:c62cef6afd1e5c65fddc44ee1f9fa1f2de4af1bb56a60b9db5e8dc9dc0b739a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721180565058599014,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bac7706a3d4d40242d264e09577770c8,},Annotations:map[string]string{io.
kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a13e07f93ced4226e8b9a913fee7b6c0005c42e9d5eea92e97408b95f377984,PodSandboxId:7ab5efb667a491f7aa8ded2cb2dbe76052536a0e23b72fb6b61fab371d1065cb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721180560461415952,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gkx7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2471beb-346c-4784-a3c5-a5ecc6f8e8a6,},Annotations:map[string]string{io.kubernetes.container.hash: 368cc
453,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27a9da40fbea12eaae90daba03f07ebda991b7ddd0520039a163a0ebf7d79ec9,PodSandboxId:31a5f1177affc0509ff51333a1dd6cfbc65ed9d9d3443eddd07922527abeef2c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721180559262865079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-w9cq7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f908608a-f77f-4653-86a3-1b535c9c6973,},Annotations:map[string]string{io.kubernetes.container.hash: 93a510eb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a70a0091dd4e264f12c365b9e2ba025c18219a4d8c82bdec90fad3fc5028ac42,PodSandboxId:2a701dfd17e7641a4b5b7bd95e07e38bf6d31f407159c4665459c3ca104bbe9f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721180559253998870,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-paus
e-056024,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6243d24d0472f8e244e25b457792cc43,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbeb675206edf1dfe0c9e96d84984ed02c367ae7e20d2271fcb09a7529235140,PodSandboxId:c0506fa9493537d49342fd9bc2f31bdcd78ea0b8d65e382e841d4e9f4466f2fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721180559209983237,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-056024,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: c83373b93ee4c933a9ca2b2b2e77367b,},Annotations:map[string]string{io.kubernetes.container.hash: 777f8d0d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c21cec71b2cca131a814dd86ac76987274ebfefffe239dfe467aacd88359c17,PodSandboxId:e97f6874a2bb5477f6b7af4136ab489ede4834bf078507ac6e777d5e6f73c9a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721180559158317606,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-056024,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: bac7706a3d4d40242d264e09577770c8,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b87f6da726d7902490f3a7b07b32ae2c543449beae034936faf9c3f344593c50,PodSandboxId:fdb935b244396e518b422332eece88c3a90198cb9dec6292365035d86bf00213,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721180559078822574,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-056024,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 496b12d8541f498238ec070fdd5408cf,},Annotations:map[string]string{io.kubernetes.container.hash: 315538b2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ed4e56a-552d-4c84-bccb-315b1ce92e47 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e4f99740c21e5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 seconds ago      Running             coredns                   2                   ad6e690168a8d       coredns-7db6d8ff4d-gkx7k
	76da7a10c63d2       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   18 seconds ago      Running             kube-proxy                2                   03e43a9817726       kube-proxy-w9cq7
	4d49a1cb5b4eb       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   23 seconds ago      Running             kube-scheduler            2                   d202bf4689f31       kube-scheduler-pause-056024
	8e870955bc1a9       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   23 seconds ago      Running             kube-apiserver            2                   5a1e554720c05       kube-apiserver-pause-056024
	796bf2c3a4731       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   23 seconds ago      Running             kube-controller-manager   2                   c62cef6afd1e5       kube-controller-manager-pause-056024
	24f8fd0610adf       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   23 seconds ago      Running             etcd                      2                   40a6cc729d8bb       etcd-pause-056024
	6a13e07f93ced       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   28 seconds ago      Exited              coredns                   1                   7ab5efb667a49       coredns-7db6d8ff4d-gkx7k
	27a9da40fbea1       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   29 seconds ago      Exited              kube-proxy                1                   31a5f1177affc       kube-proxy-w9cq7
	a70a0091dd4e2       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   29 seconds ago      Exited              kube-scheduler            1                   2a701dfd17e76       kube-scheduler-pause-056024
	cbeb675206edf       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   29 seconds ago      Exited              etcd                      1                   c0506fa949353       etcd-pause-056024
	2c21cec71b2cc       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   29 seconds ago      Exited              kube-controller-manager   1                   e97f6874a2bb5       kube-controller-manager-pause-056024
	b87f6da726d79       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   29 seconds ago      Exited              kube-apiserver            1                   fdb935b244396       kube-apiserver-pause-056024
	
	
	==> coredns [6a13e07f93ced4226e8b9a913fee7b6c0005c42e9d5eea92e97408b95f377984] <==
	
	
	==> coredns [e4f99740c21e53f87133bff2d52a0bd56c2a02626274b365bdaae0e12964a4e4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49518 - 57538 "HINFO IN 4376202363426571550.341898290947266577. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009747901s
	
	
	==> describe nodes <==
	Name:               pause-056024
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-056024
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=pause-056024
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T01_41_37_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:41:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-056024
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 01:42:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 01:42:48 +0000   Wed, 17 Jul 2024 01:41:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 01:42:48 +0000   Wed, 17 Jul 2024 01:41:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 01:42:48 +0000   Wed, 17 Jul 2024 01:41:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 01:42:48 +0000   Wed, 17 Jul 2024 01:41:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    pause-056024
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 6573976077fc4482a978cc8d60479bde
	  System UUID:                65739760-77fc-4482-a978-cc8d60479bde
	  Boot ID:                    a8c6bc98-f631-4c6c-8d3d-8514a725b1b7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-gkx7k                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     78s
	  kube-system                 etcd-pause-056024                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         92s
	  kube-system                 kube-apiserver-pause-056024             250m (12%)    0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-controller-manager-pause-056024    200m (10%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-w9cq7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-pause-056024             100m (5%)     0 (0%)      0 (0%)           0 (0%)         92s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 76s                kube-proxy       
	  Normal  Starting                 18s                kube-proxy       
	  Normal  Starting                 98s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  98s (x8 over 98s)  kubelet          Node pause-056024 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s (x8 over 98s)  kubelet          Node pause-056024 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s (x7 over 98s)  kubelet          Node pause-056024 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  98s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    92s                kubelet          Node pause-056024 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  92s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  92s                kubelet          Node pause-056024 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     92s                kubelet          Node pause-056024 status is now: NodeHasSufficientPID
	  Normal  Starting                 92s                kubelet          Starting kubelet.
	  Normal  NodeReady                91s                kubelet          Node pause-056024 status is now: NodeReady
	  Normal  RegisteredNode           80s                node-controller  Node pause-056024 event: Registered Node pause-056024 in Controller
	  Normal  Starting                 24s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-056024 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-056024 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-056024 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                 node-controller  Node pause-056024 event: Registered Node pause-056024 in Controller
	
	
	==> dmesg <==
	[  +9.176616] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.124725] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.183401] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.123778] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.265696] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.341546] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +0.057643] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.394110] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.613823] kauditd_printk_skb: 50 callbacks suppressed
	[  +5.939539] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.086139] kauditd_printk_skb: 37 callbacks suppressed
	[ +13.839790] systemd-fstab-generator[1501]: Ignoring "noauto" option for root device
	[  +0.163494] kauditd_printk_skb: 21 callbacks suppressed
	[Jul17 01:42] kauditd_printk_skb: 89 callbacks suppressed
	[ +37.143079] systemd-fstab-generator[2746]: Ignoring "noauto" option for root device
	[  +0.293113] systemd-fstab-generator[2813]: Ignoring "noauto" option for root device
	[  +0.257647] systemd-fstab-generator[2828]: Ignoring "noauto" option for root device
	[  +0.228572] systemd-fstab-generator[2861]: Ignoring "noauto" option for root device
	[  +0.481548] systemd-fstab-generator[2953]: Ignoring "noauto" option for root device
	[  +1.060007] systemd-fstab-generator[3223]: Ignoring "noauto" option for root device
	[  +2.414185] systemd-fstab-generator[3661]: Ignoring "noauto" option for root device
	[  +0.101234] kauditd_printk_skb: 244 callbacks suppressed
	[  +5.550985] kauditd_printk_skb: 38 callbacks suppressed
	[Jul17 01:43] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.236372] systemd-fstab-generator[4101]: Ignoring "noauto" option for root device
	
	
	==> etcd [24f8fd0610adffbaef05d966acf6cd7a87404f8b707eacd64e50b760417dd998] <==
	{"level":"info","ts":"2024-07-17T01:42:45.540786Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T01:42:45.540816Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T01:42:45.541067Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 switched to configuration voters=(17735085251460689206)"}
	{"level":"info","ts":"2024-07-17T01:42:45.541151Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6e56e32a1e97f390","local-member-id":"f61fae125a956d36","added-peer-id":"f61fae125a956d36","added-peer-peer-urls":["https://192.168.39.97:2380"]}
	{"level":"info","ts":"2024-07-17T01:42:45.545388Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e56e32a1e97f390","local-member-id":"f61fae125a956d36","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:42:45.545447Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:42:45.564148Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T01:42:45.566839Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f61fae125a956d36","initial-advertise-peer-urls":["https://192.168.39.97:2380"],"listen-peer-urls":["https://192.168.39.97:2380"],"advertise-client-urls":["https://192.168.39.97:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.97:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T01:42:45.566323Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2024-07-17T01:42:45.569235Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2024-07-17T01:42:45.569269Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T01:42:47.379871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T01:42:47.379951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T01:42:47.379987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 received MsgPreVoteResp from f61fae125a956d36 at term 2"}
	{"level":"info","ts":"2024-07-17T01:42:47.380002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became candidate at term 3"}
	{"level":"info","ts":"2024-07-17T01:42:47.38001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 received MsgVoteResp from f61fae125a956d36 at term 3"}
	{"level":"info","ts":"2024-07-17T01:42:47.380022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became leader at term 3"}
	{"level":"info","ts":"2024-07-17T01:42:47.380032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f61fae125a956d36 elected leader f61fae125a956d36 at term 3"}
	{"level":"info","ts":"2024-07-17T01:42:47.385452Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f61fae125a956d36","local-member-attributes":"{Name:pause-056024 ClientURLs:[https://192.168.39.97:2379]}","request-path":"/0/members/f61fae125a956d36/attributes","cluster-id":"6e56e32a1e97f390","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T01:42:47.38546Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:42:47.385884Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T01:42:47.385938Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T01:42:47.385975Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:42:47.389021Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T01:42:47.393516Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.97:2379"}
	
	
	==> etcd [cbeb675206edf1dfe0c9e96d84984ed02c367ae7e20d2271fcb09a7529235140] <==
	{"level":"info","ts":"2024-07-17T01:42:39.915271Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"52.211959ms"}
	{"level":"info","ts":"2024-07-17T01:42:39.973836Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-17T01:42:40.045379Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"6e56e32a1e97f390","local-member-id":"f61fae125a956d36","commit-index":465}
	{"level":"info","ts":"2024-07-17T01:42:40.051564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-17T01:42:40.052895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became follower at term 2"}
	{"level":"info","ts":"2024-07-17T01:42:40.053578Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft f61fae125a956d36 [peers: [], term: 2, commit: 465, applied: 0, lastindex: 465, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-17T01:42:40.065327Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-17T01:42:40.112868Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":442}
	{"level":"info","ts":"2024-07-17T01:42:40.1291Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-07-17T01:42:40.152081Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"f61fae125a956d36","timeout":"7s"}
	{"level":"info","ts":"2024-07-17T01:42:40.152764Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"f61fae125a956d36"}
	{"level":"info","ts":"2024-07-17T01:42:40.152867Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"f61fae125a956d36","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-17T01:42:40.153377Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-17T01:42:40.15369Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T01:42:40.169237Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T01:42:40.169266Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T01:42:40.153881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 switched to configuration voters=(17735085251460689206)"}
	{"level":"info","ts":"2024-07-17T01:42:40.169541Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6e56e32a1e97f390","local-member-id":"f61fae125a956d36","added-peer-id":"f61fae125a956d36","added-peer-peer-urls":["https://192.168.39.97:2380"]}
	{"level":"info","ts":"2024-07-17T01:42:40.169652Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e56e32a1e97f390","local-member-id":"f61fae125a956d36","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:42:40.169679Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:42:40.228585Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T01:42:40.228953Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f61fae125a956d36","initial-advertise-peer-urls":["https://192.168.39.97:2380"],"listen-peer-urls":["https://192.168.39.97:2380"],"advertise-client-urls":["https://192.168.39.97:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.97:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T01:42:40.22901Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T01:42:40.229091Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2024-07-17T01:42:40.229122Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.97:2380"}
	
	
	==> kernel <==
	 01:43:08 up 2 min,  0 users,  load average: 1.02, 0.35, 0.12
	Linux pause-056024 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8e870955bc1a988bf2a6a9f3a25805dfcfe58bc0abbeced196cfaf88a009195c] <==
	I0717 01:42:48.792248       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0717 01:42:48.843622       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 01:42:48.846256       1 shared_informer.go:320] Caches are synced for configmaps
	I0717 01:42:48.847083       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0717 01:42:48.847132       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0717 01:42:48.850110       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 01:42:48.862582       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 01:42:48.892847       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0717 01:42:48.898153       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 01:42:48.898245       1 policy_source.go:224] refreshing policies
	I0717 01:42:48.898816       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0717 01:42:48.901045       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 01:42:48.901083       1 aggregator.go:165] initial CRD sync complete...
	I0717 01:42:48.901097       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 01:42:48.901102       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 01:42:48.901108       1 cache.go:39] Caches are synced for autoregister controller
	I0717 01:42:48.949364       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 01:42:49.747822       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 01:42:50.409822       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 01:42:50.427555       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 01:42:50.467602       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 01:42:50.509082       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 01:42:50.517443       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 01:43:01.916834       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 01:43:01.972069       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [b87f6da726d7902490f3a7b07b32ae2c543449beae034936faf9c3f344593c50] <==
	I0717 01:42:39.586135       1 options.go:221] external host was not specified, using 192.168.39.97
	I0717 01:42:39.588017       1 server.go:148] Version: v1.30.2
	I0717 01:42:39.588078       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [2c21cec71b2cca131a814dd86ac76987274ebfefffe239dfe467aacd88359c17] <==
	
	
	==> kube-controller-manager [796bf2c3a47315825765e5622511b58b5ffa48c0f1c59003f1d67537e3fe66b9] <==
	I0717 01:43:01.805569       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0717 01:43:01.806623       1 shared_informer.go:320] Caches are synced for stateful set
	I0717 01:43:01.813123       1 shared_informer.go:320] Caches are synced for attach detach
	I0717 01:43:01.817373       1 shared_informer.go:320] Caches are synced for HPA
	I0717 01:43:01.829492       1 shared_informer.go:320] Caches are synced for daemon sets
	I0717 01:43:01.829648       1 shared_informer.go:320] Caches are synced for endpoint
	I0717 01:43:01.831530       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0717 01:43:01.832057       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="158.675µs"
	I0717 01:43:01.832384       1 shared_informer.go:320] Caches are synced for job
	I0717 01:43:01.834897       1 shared_informer.go:320] Caches are synced for persistent volume
	I0717 01:43:01.836382       1 shared_informer.go:320] Caches are synced for PVC protection
	I0717 01:43:01.838632       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0717 01:43:01.843602       1 shared_informer.go:320] Caches are synced for ephemeral
	I0717 01:43:01.846943       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0717 01:43:01.849380       1 shared_informer.go:320] Caches are synced for GC
	I0717 01:43:01.853667       1 shared_informer.go:320] Caches are synced for disruption
	I0717 01:43:01.863389       1 shared_informer.go:320] Caches are synced for taint
	I0717 01:43:01.863864       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0717 01:43:01.863961       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-056024"
	I0717 01:43:01.864014       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0717 01:43:01.866908       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 01:43:01.871261       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 01:43:02.280529       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 01:43:02.280579       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0717 01:43:02.315118       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [27a9da40fbea12eaae90daba03f07ebda991b7ddd0520039a163a0ebf7d79ec9] <==
	
	
	==> kube-proxy [76da7a10c63d2b3b3c03dbacafc0f9da88957e40b9d46e154cfebe79a45d8778] <==
	I0717 01:42:50.139578       1 server_linux.go:69] "Using iptables proxy"
	I0717 01:42:50.166679       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.97"]
	I0717 01:42:50.210946       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 01:42:50.211113       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 01:42:50.211202       1 server_linux.go:165] "Using iptables Proxier"
	I0717 01:42:50.214489       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 01:42:50.214663       1 server.go:872] "Version info" version="v1.30.2"
	I0717 01:42:50.214699       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:42:50.216230       1 config.go:192] "Starting service config controller"
	I0717 01:42:50.216265       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 01:42:50.216337       1 config.go:101] "Starting endpoint slice config controller"
	I0717 01:42:50.216342       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 01:42:50.216803       1 config.go:319] "Starting node config controller"
	I0717 01:42:50.216835       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 01:42:50.317456       1 shared_informer.go:320] Caches are synced for service config
	I0717 01:42:50.317504       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 01:42:50.317754       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4d49a1cb5b4ebc6a4f84beed352723268474608d2654cc6a59ef5325cda633f7] <==
	I0717 01:42:46.272752       1 serving.go:380] Generated self-signed cert in-memory
	W0717 01:42:48.801589       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 01:42:48.801784       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 01:42:48.801894       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 01:42:48.801938       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 01:42:48.880981       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0717 01:42:48.881061       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:42:48.888024       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0717 01:42:48.890371       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 01:42:48.890460       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 01:42:48.890514       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 01:42:48.991251       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [a70a0091dd4e264f12c365b9e2ba025c18219a4d8c82bdec90fad3fc5028ac42] <==
	
	
	==> kubelet <==
	Jul 17 01:42:44 pause-056024 kubelet[3668]: I0717 01:42:44.774100    3668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bac7706a3d4d40242d264e09577770c8-flexvolume-dir\") pod \"kube-controller-manager-pause-056024\" (UID: \"bac7706a3d4d40242d264e09577770c8\") " pod="kube-system/kube-controller-manager-pause-056024"
	Jul 17 01:42:44 pause-056024 kubelet[3668]: I0717 01:42:44.774125    3668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bac7706a3d4d40242d264e09577770c8-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-056024\" (UID: \"bac7706a3d4d40242d264e09577770c8\") " pod="kube-system/kube-controller-manager-pause-056024"
	Jul 17 01:42:44 pause-056024 kubelet[3668]: I0717 01:42:44.887098    3668 kubelet_node_status.go:73] "Attempting to register node" node="pause-056024"
	Jul 17 01:42:44 pause-056024 kubelet[3668]: E0717 01:42:44.888426    3668 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.97:8443: connect: connection refused" node="pause-056024"
	Jul 17 01:42:45 pause-056024 kubelet[3668]: I0717 01:42:45.022327    3668 scope.go:117] "RemoveContainer" containerID="cbeb675206edf1dfe0c9e96d84984ed02c367ae7e20d2271fcb09a7529235140"
	Jul 17 01:42:45 pause-056024 kubelet[3668]: I0717 01:42:45.025502    3668 scope.go:117] "RemoveContainer" containerID="b87f6da726d7902490f3a7b07b32ae2c543449beae034936faf9c3f344593c50"
	Jul 17 01:42:45 pause-056024 kubelet[3668]: I0717 01:42:45.027944    3668 scope.go:117] "RemoveContainer" containerID="a70a0091dd4e264f12c365b9e2ba025c18219a4d8c82bdec90fad3fc5028ac42"
	Jul 17 01:42:45 pause-056024 kubelet[3668]: I0717 01:42:45.029365    3668 scope.go:117] "RemoveContainer" containerID="2c21cec71b2cca131a814dd86ac76987274ebfefffe239dfe467aacd88359c17"
	Jul 17 01:42:45 pause-056024 kubelet[3668]: E0717 01:42:45.171761    3668 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-056024?timeout=10s\": dial tcp 192.168.39.97:8443: connect: connection refused" interval="800ms"
	Jul 17 01:42:45 pause-056024 kubelet[3668]: I0717 01:42:45.295635    3668 kubelet_node_status.go:73] "Attempting to register node" node="pause-056024"
	Jul 17 01:42:45 pause-056024 kubelet[3668]: E0717 01:42:45.297390    3668 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.97:8443: connect: connection refused" node="pause-056024"
	Jul 17 01:42:46 pause-056024 kubelet[3668]: I0717 01:42:46.098745    3668 kubelet_node_status.go:73] "Attempting to register node" node="pause-056024"
	Jul 17 01:42:48 pause-056024 kubelet[3668]: I0717 01:42:48.980636    3668 kubelet_node_status.go:112] "Node was previously registered" node="pause-056024"
	Jul 17 01:42:48 pause-056024 kubelet[3668]: I0717 01:42:48.981029    3668 kubelet_node_status.go:76] "Successfully registered node" node="pause-056024"
	Jul 17 01:42:48 pause-056024 kubelet[3668]: I0717 01:42:48.982771    3668 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 17 01:42:48 pause-056024 kubelet[3668]: I0717 01:42:48.983930    3668 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 17 01:42:49 pause-056024 kubelet[3668]: I0717 01:42:49.549669    3668 apiserver.go:52] "Watching apiserver"
	Jul 17 01:42:49 pause-056024 kubelet[3668]: I0717 01:42:49.553218    3668 topology_manager.go:215] "Topology Admit Handler" podUID="f2471beb-346c-4784-a3c5-a5ecc6f8e8a6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gkx7k"
	Jul 17 01:42:49 pause-056024 kubelet[3668]: I0717 01:42:49.553592    3668 topology_manager.go:215] "Topology Admit Handler" podUID="f908608a-f77f-4653-86a3-1b535c9c6973" podNamespace="kube-system" podName="kube-proxy-w9cq7"
	Jul 17 01:42:49 pause-056024 kubelet[3668]: I0717 01:42:49.564393    3668 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 17 01:42:49 pause-056024 kubelet[3668]: I0717 01:42:49.610010    3668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f908608a-f77f-4653-86a3-1b535c9c6973-xtables-lock\") pod \"kube-proxy-w9cq7\" (UID: \"f908608a-f77f-4653-86a3-1b535c9c6973\") " pod="kube-system/kube-proxy-w9cq7"
	Jul 17 01:42:49 pause-056024 kubelet[3668]: I0717 01:42:49.610292    3668 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f908608a-f77f-4653-86a3-1b535c9c6973-lib-modules\") pod \"kube-proxy-w9cq7\" (UID: \"f908608a-f77f-4653-86a3-1b535c9c6973\") " pod="kube-system/kube-proxy-w9cq7"
	Jul 17 01:42:49 pause-056024 kubelet[3668]: I0717 01:42:49.854405    3668 scope.go:117] "RemoveContainer" containerID="27a9da40fbea12eaae90daba03f07ebda991b7ddd0520039a163a0ebf7d79ec9"
	Jul 17 01:42:49 pause-056024 kubelet[3668]: I0717 01:42:49.854706    3668 scope.go:117] "RemoveContainer" containerID="6a13e07f93ced4226e8b9a913fee7b6c0005c42e9d5eea92e97408b95f377984"
	Jul 17 01:42:58 pause-056024 kubelet[3668]: I0717 01:42:58.312267    3668 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-056024 -n pause-056024
helpers_test.go:261: (dbg) Run:  kubectl --context pause-056024 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (37.94s)
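A minimal sketch for re-running only this failed case locally with the standard Go test runner, assuming the usual minikube integration-test layout; the package path and timeout below are assumptions, and the suite may also expect additional flags (for example the path to a pre-built minikube binary), while -run, -timeout, and -v are standard go test flags:

	go test ./test/integration -run 'TestPause/serial/SecondStartNoReconfiguration' -timeout 30m -v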

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (288.82s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-901761 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-901761 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m48.544886695s)

                                                
                                                
-- stdout --
	* [old-k8s-version-901761] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19264
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-901761" primary control-plane node in "old-k8s-version-901761" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 01:45:31.987884   64668 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:45:31.988048   64668 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:45:31.988059   64668 out.go:304] Setting ErrFile to fd 2...
	I0717 01:45:31.988066   64668 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:45:31.988323   64668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 01:45:31.989023   64668 out.go:298] Setting JSON to false
	I0717 01:45:31.990472   64668 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5274,"bootTime":1721175458,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:45:31.990542   64668 start.go:139] virtualization: kvm guest
	I0717 01:45:31.992826   64668 out.go:177] * [old-k8s-version-901761] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:45:31.994813   64668 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 01:45:31.994844   64668 notify.go:220] Checking for updates...
	I0717 01:45:31.998573   64668 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:45:32.003607   64668 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:45:32.004925   64668 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 01:45:32.006349   64668 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:45:32.007807   64668 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:45:32.009586   64668 config.go:182] Loaded profile config "bridge-894370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:45:32.009673   64668 config.go:182] Loaded profile config "enable-default-cni-894370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:45:32.009750   64668 config.go:182] Loaded profile config "flannel-894370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:45:32.009850   64668 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:45:32.046781   64668 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 01:45:32.048067   64668 start.go:297] selected driver: kvm2
	I0717 01:45:32.048093   64668 start.go:901] validating driver "kvm2" against <nil>
	I0717 01:45:32.048108   64668 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:45:32.048901   64668 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:45:32.049013   64668 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19264-3908/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:45:32.064313   64668 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:45:32.064373   64668 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 01:45:32.064639   64668 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:45:32.064724   64668 cni.go:84] Creating CNI manager for ""
	I0717 01:45:32.064741   64668 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:45:32.064752   64668 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 01:45:32.064841   64668 start.go:340] cluster config:
	{Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:45:32.064934   64668 iso.go:125] acquiring lock: {Name:mk1e382ea32906eee794c9b90164432d2562b456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:45:32.066654   64668 out.go:177] * Starting "old-k8s-version-901761" primary control-plane node in "old-k8s-version-901761" cluster
	I0717 01:45:32.068044   64668 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 01:45:32.068083   64668 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 01:45:32.068106   64668 cache.go:56] Caching tarball of preloaded images
	I0717 01:45:32.068210   64668 preload.go:172] Found /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 01:45:32.068222   64668 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0717 01:45:32.068323   64668 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/config.json ...
	I0717 01:45:32.068347   64668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/config.json: {Name:mk64c2656f04d0669fd9bdba21b2d382ff9496f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:45:32.068491   64668 start.go:360] acquireMachinesLock for old-k8s-version-901761: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:45:45.455582   64668 start.go:364] duration metric: took 13.387060949s to acquireMachinesLock for "old-k8s-version-901761"
	I0717 01:45:45.455659   64668 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:45:45.455770   64668 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 01:45:45.457796   64668 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 01:45:45.458003   64668 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:45:45.458047   64668 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:45:45.474825   64668 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38807
	I0717 01:45:45.475272   64668 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:45:45.475842   64668 main.go:141] libmachine: Using API Version  1
	I0717 01:45:45.475868   64668 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:45:45.476332   64668 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:45:45.476553   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetMachineName
	I0717 01:45:45.476745   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:45:45.476925   64668 start.go:159] libmachine.API.Create for "old-k8s-version-901761" (driver="kvm2")
	I0717 01:45:45.476969   64668 client.go:168] LocalClient.Create starting
	I0717 01:45:45.477013   64668 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem
	I0717 01:45:45.477056   64668 main.go:141] libmachine: Decoding PEM data...
	I0717 01:45:45.477078   64668 main.go:141] libmachine: Parsing certificate...
	I0717 01:45:45.477140   64668 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem
	I0717 01:45:45.477165   64668 main.go:141] libmachine: Decoding PEM data...
	I0717 01:45:45.477181   64668 main.go:141] libmachine: Parsing certificate...
	I0717 01:45:45.477204   64668 main.go:141] libmachine: Running pre-create checks...
	I0717 01:45:45.477216   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .PreCreateCheck
	I0717 01:45:45.477628   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetConfigRaw
	I0717 01:45:45.478101   64668 main.go:141] libmachine: Creating machine...
	I0717 01:45:45.478119   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .Create
	I0717 01:45:45.478266   64668 main.go:141] libmachine: (old-k8s-version-901761) Creating KVM machine...
	I0717 01:45:45.479399   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | found existing default KVM network
	I0717 01:45:45.480655   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:45:45.480505   65963 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:d2:d4:9d} reservation:<nil>}
	I0717 01:45:45.481768   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:45:45.481683   65963 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002f65b0}
	I0717 01:45:45.481803   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | created network xml: 
	I0717 01:45:45.481817   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | <network>
	I0717 01:45:45.481828   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG |   <name>mk-old-k8s-version-901761</name>
	I0717 01:45:45.481849   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG |   <dns enable='no'/>
	I0717 01:45:45.481858   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG |   
	I0717 01:45:45.481868   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0717 01:45:45.481877   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG |     <dhcp>
	I0717 01:45:45.481895   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0717 01:45:45.481909   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG |     </dhcp>
	I0717 01:45:45.481915   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG |   </ip>
	I0717 01:45:45.481921   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG |   
	I0717 01:45:45.481928   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | </network>
	I0717 01:45:45.481937   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | 
	I0717 01:45:45.488186   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | trying to create private KVM network mk-old-k8s-version-901761 192.168.50.0/24...
	I0717 01:45:45.563881   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | private KVM network mk-old-k8s-version-901761 192.168.50.0/24 created
	I0717 01:45:45.563917   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:45:45.563853   65963 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 01:45:45.563934   64668 main.go:141] libmachine: (old-k8s-version-901761) Setting up store path in /home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761 ...
	I0717 01:45:45.563946   64668 main.go:141] libmachine: (old-k8s-version-901761) Building disk image from file:///home/jenkins/minikube-integration/19264-3908/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 01:45:45.564039   64668 main.go:141] libmachine: (old-k8s-version-901761) Downloading /home/jenkins/minikube-integration/19264-3908/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19264-3908/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 01:45:45.837592   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:45:45.837494   65963 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa...
	I0717 01:45:45.913727   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:45:45.913596   65963 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/old-k8s-version-901761.rawdisk...
	I0717 01:45:45.913762   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | Writing magic tar header
	I0717 01:45:45.913779   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | Writing SSH key tar header
	I0717 01:45:45.913794   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:45:45.913749   65963 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761 ...
	I0717 01:45:45.913894   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761
	I0717 01:45:45.913926   64668 main.go:141] libmachine: (old-k8s-version-901761) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761 (perms=drwx------)
	I0717 01:45:45.913937   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube/machines
	I0717 01:45:45.913952   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 01:45:45.913966   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908
	I0717 01:45:45.913979   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 01:45:45.913990   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | Checking permissions on dir: /home/jenkins
	I0717 01:45:45.914026   64668 main.go:141] libmachine: (old-k8s-version-901761) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube/machines (perms=drwxr-xr-x)
	I0717 01:45:45.914046   64668 main.go:141] libmachine: (old-k8s-version-901761) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube (perms=drwxr-xr-x)
	I0717 01:45:45.914060   64668 main.go:141] libmachine: (old-k8s-version-901761) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908 (perms=drwxrwxr-x)
	I0717 01:45:45.914100   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | Checking permissions on dir: /home
	I0717 01:45:45.914125   64668 main.go:141] libmachine: (old-k8s-version-901761) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 01:45:45.914138   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | Skipping /home - not owner
	I0717 01:45:45.914156   64668 main.go:141] libmachine: (old-k8s-version-901761) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 01:45:45.914166   64668 main.go:141] libmachine: (old-k8s-version-901761) Creating domain...
	I0717 01:45:45.915180   64668 main.go:141] libmachine: (old-k8s-version-901761) define libvirt domain using xml: 
	I0717 01:45:45.915198   64668 main.go:141] libmachine: (old-k8s-version-901761) <domain type='kvm'>
	I0717 01:45:45.915209   64668 main.go:141] libmachine: (old-k8s-version-901761)   <name>old-k8s-version-901761</name>
	I0717 01:45:45.915219   64668 main.go:141] libmachine: (old-k8s-version-901761)   <memory unit='MiB'>2200</memory>
	I0717 01:45:45.915231   64668 main.go:141] libmachine: (old-k8s-version-901761)   <vcpu>2</vcpu>
	I0717 01:45:45.915240   64668 main.go:141] libmachine: (old-k8s-version-901761)   <features>
	I0717 01:45:45.915261   64668 main.go:141] libmachine: (old-k8s-version-901761)     <acpi/>
	I0717 01:45:45.915271   64668 main.go:141] libmachine: (old-k8s-version-901761)     <apic/>
	I0717 01:45:45.915306   64668 main.go:141] libmachine: (old-k8s-version-901761)     <pae/>
	I0717 01:45:45.915329   64668 main.go:141] libmachine: (old-k8s-version-901761)     
	I0717 01:45:45.915343   64668 main.go:141] libmachine: (old-k8s-version-901761)   </features>
	I0717 01:45:45.915360   64668 main.go:141] libmachine: (old-k8s-version-901761)   <cpu mode='host-passthrough'>
	I0717 01:45:45.915371   64668 main.go:141] libmachine: (old-k8s-version-901761)   
	I0717 01:45:45.915381   64668 main.go:141] libmachine: (old-k8s-version-901761)   </cpu>
	I0717 01:45:45.915403   64668 main.go:141] libmachine: (old-k8s-version-901761)   <os>
	I0717 01:45:45.915429   64668 main.go:141] libmachine: (old-k8s-version-901761)     <type>hvm</type>
	I0717 01:45:45.915440   64668 main.go:141] libmachine: (old-k8s-version-901761)     <boot dev='cdrom'/>
	I0717 01:45:45.915451   64668 main.go:141] libmachine: (old-k8s-version-901761)     <boot dev='hd'/>
	I0717 01:45:45.915464   64668 main.go:141] libmachine: (old-k8s-version-901761)     <bootmenu enable='no'/>
	I0717 01:45:45.915473   64668 main.go:141] libmachine: (old-k8s-version-901761)   </os>
	I0717 01:45:45.915482   64668 main.go:141] libmachine: (old-k8s-version-901761)   <devices>
	I0717 01:45:45.915494   64668 main.go:141] libmachine: (old-k8s-version-901761)     <disk type='file' device='cdrom'>
	I0717 01:45:45.915511   64668 main.go:141] libmachine: (old-k8s-version-901761)       <source file='/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/boot2docker.iso'/>
	I0717 01:45:45.915523   64668 main.go:141] libmachine: (old-k8s-version-901761)       <target dev='hdc' bus='scsi'/>
	I0717 01:45:45.915535   64668 main.go:141] libmachine: (old-k8s-version-901761)       <readonly/>
	I0717 01:45:45.915543   64668 main.go:141] libmachine: (old-k8s-version-901761)     </disk>
	I0717 01:45:45.915556   64668 main.go:141] libmachine: (old-k8s-version-901761)     <disk type='file' device='disk'>
	I0717 01:45:45.915569   64668 main.go:141] libmachine: (old-k8s-version-901761)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 01:45:45.915592   64668 main.go:141] libmachine: (old-k8s-version-901761)       <source file='/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/old-k8s-version-901761.rawdisk'/>
	I0717 01:45:45.915602   64668 main.go:141] libmachine: (old-k8s-version-901761)       <target dev='hda' bus='virtio'/>
	I0717 01:45:45.915613   64668 main.go:141] libmachine: (old-k8s-version-901761)     </disk>
	I0717 01:45:45.915623   64668 main.go:141] libmachine: (old-k8s-version-901761)     <interface type='network'>
	I0717 01:45:45.915636   64668 main.go:141] libmachine: (old-k8s-version-901761)       <source network='mk-old-k8s-version-901761'/>
	I0717 01:45:45.915656   64668 main.go:141] libmachine: (old-k8s-version-901761)       <model type='virtio'/>
	I0717 01:45:45.915667   64668 main.go:141] libmachine: (old-k8s-version-901761)     </interface>
	I0717 01:45:45.915679   64668 main.go:141] libmachine: (old-k8s-version-901761)     <interface type='network'>
	I0717 01:45:45.915689   64668 main.go:141] libmachine: (old-k8s-version-901761)       <source network='default'/>
	I0717 01:45:45.915699   64668 main.go:141] libmachine: (old-k8s-version-901761)       <model type='virtio'/>
	I0717 01:45:45.915707   64668 main.go:141] libmachine: (old-k8s-version-901761)     </interface>
	I0717 01:45:45.915726   64668 main.go:141] libmachine: (old-k8s-version-901761)     <serial type='pty'>
	I0717 01:45:45.915734   64668 main.go:141] libmachine: (old-k8s-version-901761)       <target port='0'/>
	I0717 01:45:45.915743   64668 main.go:141] libmachine: (old-k8s-version-901761)     </serial>
	I0717 01:45:45.915753   64668 main.go:141] libmachine: (old-k8s-version-901761)     <console type='pty'>
	I0717 01:45:45.915762   64668 main.go:141] libmachine: (old-k8s-version-901761)       <target type='serial' port='0'/>
	I0717 01:45:45.915769   64668 main.go:141] libmachine: (old-k8s-version-901761)     </console>
	I0717 01:45:45.915778   64668 main.go:141] libmachine: (old-k8s-version-901761)     <rng model='virtio'>
	I0717 01:45:45.915789   64668 main.go:141] libmachine: (old-k8s-version-901761)       <backend model='random'>/dev/random</backend>
	I0717 01:45:45.915799   64668 main.go:141] libmachine: (old-k8s-version-901761)     </rng>
	I0717 01:45:45.915809   64668 main.go:141] libmachine: (old-k8s-version-901761)     
	I0717 01:45:45.915840   64668 main.go:141] libmachine: (old-k8s-version-901761)     
	I0717 01:45:45.915863   64668 main.go:141] libmachine: (old-k8s-version-901761)   </devices>
	I0717 01:45:45.915875   64668 main.go:141] libmachine: (old-k8s-version-901761) </domain>
	I0717 01:45:45.915883   64668 main.go:141] libmachine: (old-k8s-version-901761) 
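
The block above is the libvirt domain XML the kvm2 driver defines just before booting the VM. As a rough illustration of that define-and-start step (this is not the driver's actual code; the connection URI is the one logged, and the XML file path is a stand-in), a minimal Go sketch using the libvirt.org/go/libvirt bindings could look like:

package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Connect to the system libvirt daemon, the same URI the driver uses (qemu:///system).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// Read a domain definition like the one dumped in the log above (path is hypothetical).
	xml, err := os.ReadFile("old-k8s-version-901761.xml")
	if err != nil {
		log.Fatalf("read xml: %v", err)
	}

	// Define the persistent domain, then start it ("Creating domain..." in the log).
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatalf("define: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start: %v", err)
	}
	log.Println("domain defined and started")
}
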
	I0717 01:45:45.919887   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:d2:71:8b in network default
	I0717 01:45:45.920393   64668 main.go:141] libmachine: (old-k8s-version-901761) Ensuring networks are active...
	I0717 01:45:45.920413   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:45:45.921058   64668 main.go:141] libmachine: (old-k8s-version-901761) Ensuring network default is active
	I0717 01:45:45.921407   64668 main.go:141] libmachine: (old-k8s-version-901761) Ensuring network mk-old-k8s-version-901761 is active
	I0717 01:45:45.921959   64668 main.go:141] libmachine: (old-k8s-version-901761) Getting domain xml...
	I0717 01:45:45.922734   64668 main.go:141] libmachine: (old-k8s-version-901761) Creating domain...
	I0717 01:45:47.338510   64668 main.go:141] libmachine: (old-k8s-version-901761) Waiting to get IP...
	I0717 01:45:47.339378   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:45:47.339871   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:45:47.339898   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:45:47.339849   65963 retry.go:31] will retry after 207.469711ms: waiting for machine to come up
	I0717 01:45:47.549491   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:45:47.550120   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:45:47.550148   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:45:47.550078   65963 retry.go:31] will retry after 299.858548ms: waiting for machine to come up
	I0717 01:45:47.851590   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:45:47.852098   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:45:47.852127   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:45:47.852053   65963 retry.go:31] will retry after 391.578226ms: waiting for machine to come up
	I0717 01:45:48.245844   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:45:48.246307   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:45:48.246381   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:45:48.246273   65963 retry.go:31] will retry after 400.282251ms: waiting for machine to come up
	I0717 01:45:48.647866   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:45:48.648443   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:45:48.648468   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:45:48.648358   65963 retry.go:31] will retry after 543.270331ms: waiting for machine to come up
	I0717 01:45:49.193103   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:45:49.193623   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:45:49.193646   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:45:49.193602   65963 retry.go:31] will retry after 744.536888ms: waiting for machine to come up
	I0717 01:45:49.939461   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:45:49.940012   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:45:49.940043   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:45:49.939964   65963 retry.go:31] will retry after 1.060995242s: waiting for machine to come up
	I0717 01:45:51.002677   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:45:51.003117   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:45:51.003148   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:45:51.003080   65963 retry.go:31] will retry after 1.040904425s: waiting for machine to come up
	I0717 01:45:52.045117   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:45:52.045607   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:45:52.045634   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:45:52.045558   65963 retry.go:31] will retry after 1.521915125s: waiting for machine to come up
	I0717 01:45:53.569497   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:45:53.570001   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:45:53.570023   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:45:53.569949   65963 retry.go:31] will retry after 1.947087749s: waiting for machine to come up
	I0717 01:45:55.518356   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:45:55.518855   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:45:55.518882   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:45:55.518804   65963 retry.go:31] will retry after 2.837612719s: waiting for machine to come up
	I0717 01:45:58.357591   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:45:58.358061   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:45:58.358088   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:45:58.358013   65963 retry.go:31] will retry after 2.961334628s: waiting for machine to come up
	I0717 01:46:01.320657   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:01.321211   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:46:01.321241   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:46:01.321156   65963 retry.go:31] will retry after 3.028286391s: waiting for machine to come up
	I0717 01:46:04.351613   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:04.352158   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:46:04.352192   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:46:04.352111   65963 retry.go:31] will retry after 4.818649417s: waiting for machine to come up
	I0717 01:46:09.172638   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:09.173143   64668 main.go:141] libmachine: (old-k8s-version-901761) Found IP for machine: 192.168.50.44
	I0717 01:46:09.173170   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has current primary IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:09.173179   64668 main.go:141] libmachine: (old-k8s-version-901761) Reserving static IP address...
	I0717 01:46:09.174155   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-901761", mac: "52:54:00:8f:84:01", ip: "192.168.50.44"} in network mk-old-k8s-version-901761
	I0717 01:46:09.250882   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | Getting to WaitForSSH function...
	I0717 01:46:09.250930   64668 main.go:141] libmachine: (old-k8s-version-901761) Reserved static IP address: 192.168.50.44
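
The "will retry after ..." lines above come from a poll-with-backoff loop that keeps asking libvirt's DHCP leases for the new domain's address until one appears. A minimal sketch of that pattern (not minikube's retry helper; the intervals and the fake lookup are assumptions for illustration) might be:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls lookup with a growing delay until it succeeds or the timeout passes.
func waitFor(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay += delay / 2 // grow the backoff, roughly like the increasing intervals in the log
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitFor(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.50.44", nil
	}, time.Minute)
	fmt.Println(ip, err)
}
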
	I0717 01:46:09.250944   64668 main.go:141] libmachine: (old-k8s-version-901761) Waiting for SSH to be available...
	I0717 01:46:09.253453   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:09.253827   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761
	I0717 01:46:09.253863   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find defined IP address of network mk-old-k8s-version-901761 interface with MAC address 52:54:00:8f:84:01
	I0717 01:46:09.254009   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | Using SSH client type: external
	I0717 01:46:09.254035   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa (-rw-------)
	I0717 01:46:09.254077   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:46:09.254099   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | About to run SSH command:
	I0717 01:46:09.254112   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | exit 0
	I0717 01:46:09.258126   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | SSH cmd err, output: exit status 255: 
	I0717 01:46:09.258151   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0717 01:46:09.258177   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | command : exit 0
	I0717 01:46:09.258199   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | err     : exit status 255
	I0717 01:46:09.258212   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | output  : 
	I0717 01:46:12.258730   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | Getting to WaitForSSH function...
	I0717 01:46:12.261512   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:12.261890   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:46:00 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:46:12.261920   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:12.262041   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | Using SSH client type: external
	I0717 01:46:12.262069   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa (-rw-------)
	I0717 01:46:12.262114   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.44 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:46:12.262128   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | About to run SSH command:
	I0717 01:46:12.262158   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | exit 0
	I0717 01:46:12.387067   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | SSH cmd err, output: <nil>: 
	I0717 01:46:12.387470   64668 main.go:141] libmachine: (old-k8s-version-901761) KVM machine creation complete!
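
The WaitForSSH exchange above simply runs `exit 0` through an external ssh client and retries until it returns success; the first attempt fails with exit status 255 while the guest is still booting, the second succeeds. A small sketch of the same readiness check (the key path is a placeholder, and the retry interval is assumed, not taken from the driver):

package main

import (
	"log"
	"os/exec"
	"time"
)

// sshReady runs `exit 0` on the guest with options similar to those logged above.
func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+host,
		"exit 0")
	return cmd.Run() == nil // non-nil means a connection failure or non-zero exit status
}

func main() {
	host := "192.168.50.44"                  // from the DHCP lease in the log
	key := "/path/to/machines/<name>/id_rsa" // placeholder, not the real path
	for i := 0; i < 20; i++ {
		if sshReady(host, key) {
			log.Println("SSH is available")
			return
		}
		log.Println("SSH not ready yet, retrying in 3s")
		time.Sleep(3 * time.Second)
	}
	log.Fatal("gave up waiting for SSH")
}
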
	I0717 01:46:12.387774   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetConfigRaw
	I0717 01:46:12.388400   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:46:12.388604   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:46:12.388773   64668 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 01:46:12.388787   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetState
	I0717 01:46:12.390157   64668 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 01:46:12.390174   64668 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 01:46:12.390182   64668 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 01:46:12.390191   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:46:12.392468   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:12.392807   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:46:00 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:46:12.392833   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:12.393023   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:46:12.393211   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:46:12.393368   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:46:12.393504   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:46:12.393700   64668 main.go:141] libmachine: Using SSH client type: native
	I0717 01:46:12.393943   64668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:46:12.393965   64668 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 01:46:12.502127   64668 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:46:12.502152   64668 main.go:141] libmachine: Detecting the provisioner...
	I0717 01:46:12.502159   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:46:12.505207   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:12.505545   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:46:00 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:46:12.505572   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:12.505731   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:46:12.505956   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:46:12.506165   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:46:12.506303   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:46:12.506437   64668 main.go:141] libmachine: Using SSH client type: native
	I0717 01:46:12.506655   64668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:46:12.506670   64668 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 01:46:12.617004   64668 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 01:46:12.617109   64668 main.go:141] libmachine: found compatible host: buildroot
	I0717 01:46:12.617125   64668 main.go:141] libmachine: Provisioning with buildroot...
	I0717 01:46:12.617141   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetMachineName
	I0717 01:46:12.617407   64668 buildroot.go:166] provisioning hostname "old-k8s-version-901761"
	I0717 01:46:12.617433   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetMachineName
	I0717 01:46:12.617600   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:46:12.620152   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:12.620528   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:46:00 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:46:12.620560   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:12.620659   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:46:12.620829   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:46:12.620989   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:46:12.621109   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:46:12.621244   64668 main.go:141] libmachine: Using SSH client type: native
	I0717 01:46:12.621455   64668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:46:12.621467   64668 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-901761 && echo "old-k8s-version-901761" | sudo tee /etc/hostname
	I0717 01:46:12.751151   64668 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-901761
	
	I0717 01:46:12.751196   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:46:12.754165   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:12.754593   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:46:00 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:46:12.754623   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:12.754811   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:46:12.755019   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:46:12.755190   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:46:12.755351   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:46:12.755511   64668 main.go:141] libmachine: Using SSH client type: native
	I0717 01:46:12.755738   64668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:46:12.755765   64668 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-901761' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-901761/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-901761' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:46:12.885021   64668 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:46:12.885053   64668 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:46:12.885107   64668 buildroot.go:174] setting up certificates
	I0717 01:46:12.885134   64668 provision.go:84] configureAuth start
	I0717 01:46:12.885154   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetMachineName
	I0717 01:46:12.885457   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:46:12.888427   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:12.888826   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:46:00 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:46:12.888851   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:12.889155   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:46:12.891686   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:12.892099   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:46:00 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:46:12.892150   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:12.892240   64668 provision.go:143] copyHostCerts
	I0717 01:46:12.892309   64668 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:46:12.892334   64668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:46:12.892430   64668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:46:12.892551   64668 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:46:12.892563   64668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:46:12.892596   64668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:46:12.892685   64668 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:46:12.892695   64668 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:46:12.892731   64668 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:46:12.892814   64668 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-901761 san=[127.0.0.1 192.168.50.44 localhost minikube old-k8s-version-901761]
	I0717 01:46:13.275065   64668 provision.go:177] copyRemoteCerts
	I0717 01:46:13.275135   64668 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:46:13.275164   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:46:13.277827   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:13.278159   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:46:00 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:46:13.278203   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:13.278400   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:46:13.278626   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:46:13.278790   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:46:13.278965   64668 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:46:13.366933   64668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:46:13.394408   64668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0717 01:46:13.423188   64668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 01:46:13.449015   64668 provision.go:87] duration metric: took 563.863228ms to configureAuth
	I0717 01:46:13.449043   64668 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:46:13.449232   64668 config.go:182] Loaded profile config "old-k8s-version-901761": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 01:46:13.449317   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:46:13.452270   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:13.452580   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:46:00 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:46:13.452610   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:13.452754   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:46:13.452971   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:46:13.453137   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:46:13.453323   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:46:13.453485   64668 main.go:141] libmachine: Using SSH client type: native
	I0717 01:46:13.453639   64668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:46:13.453660   64668 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:46:13.725251   64668 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:46:13.725280   64668 main.go:141] libmachine: Checking connection to Docker...
	I0717 01:46:13.725291   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetURL
	I0717 01:46:13.726481   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | Using libvirt version 6000000
	I0717 01:46:13.728850   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:13.729200   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:46:00 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:46:13.729225   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:13.729414   64668 main.go:141] libmachine: Docker is up and running!
	I0717 01:46:13.729436   64668 main.go:141] libmachine: Reticulating splines...
	I0717 01:46:13.729442   64668 client.go:171] duration metric: took 28.252463476s to LocalClient.Create
	I0717 01:46:13.729467   64668 start.go:167] duration metric: took 28.25254241s to libmachine.API.Create "old-k8s-version-901761"
	I0717 01:46:13.729481   64668 start.go:293] postStartSetup for "old-k8s-version-901761" (driver="kvm2")
	I0717 01:46:13.729497   64668 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:46:13.729517   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:46:13.729794   64668 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:46:13.729828   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:46:13.731967   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:13.732262   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:46:00 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:46:13.732307   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:13.732364   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:46:13.732522   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:46:13.732667   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:46:13.732800   64668 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:46:13.817299   64668 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:46:13.821567   64668 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:46:13.821591   64668 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:46:13.821680   64668 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:46:13.821795   64668 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:46:13.821919   64668 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:46:13.831113   64668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:46:13.857758   64668 start.go:296] duration metric: took 128.261473ms for postStartSetup
	I0717 01:46:13.857824   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetConfigRaw
	I0717 01:46:13.862149   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:46:13.864968   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:13.865301   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:46:00 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:46:13.865325   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:13.865593   64668 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/config.json ...
	I0717 01:46:13.880841   64668 start.go:128] duration metric: took 28.425055364s to createHost
	I0717 01:46:13.880881   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:46:13.883478   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:13.883848   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:46:00 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:46:13.883885   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:13.884076   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:46:13.884288   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:46:13.884447   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:46:13.884602   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:46:13.884792   64668 main.go:141] libmachine: Using SSH client type: native
	I0717 01:46:13.884994   64668 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:46:13.885008   64668 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 01:46:13.995342   64668 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721180773.973823223
	
	I0717 01:46:13.995364   64668 fix.go:216] guest clock: 1721180773.973823223
	I0717 01:46:13.995375   64668 fix.go:229] Guest: 2024-07-17 01:46:13.973823223 +0000 UTC Remote: 2024-07-17 01:46:13.880863653 +0000 UTC m=+41.928723400 (delta=92.95957ms)
	I0717 01:46:13.995425   64668 fix.go:200] guest clock delta is within tolerance: 92.95957ms
	I0717 01:46:13.995434   64668 start.go:83] releasing machines lock for "old-k8s-version-901761", held for 28.539808532s
	I0717 01:46:13.995460   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:46:13.995677   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:46:13.998477   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:13.998843   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:46:00 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:46:13.998882   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:13.999045   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:46:13.999519   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:46:13.999662   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:46:13.999736   64668 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:46:13.999781   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:46:13.999840   64668 ssh_runner.go:195] Run: cat /version.json
	I0717 01:46:13.999863   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:46:14.002192   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:14.002416   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:14.002582   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:46:00 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:46:14.002612   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:14.002787   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:46:00 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:46:14.002819   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:14.002788   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:46:14.002995   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:46:14.003020   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:46:14.003112   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:46:14.003163   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:46:14.003280   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:46:14.003349   64668 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:46:14.003656   64668 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:46:14.107043   64668 ssh_runner.go:195] Run: systemctl --version
	I0717 01:46:14.115638   64668 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:46:14.371419   64668 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:46:14.380372   64668 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:46:14.380467   64668 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:46:14.405918   64668 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:46:14.405994   64668 start.go:495] detecting cgroup driver to use...
	I0717 01:46:14.406075   64668 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:46:14.429216   64668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:46:14.448147   64668 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:46:14.448214   64668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:46:14.465123   64668 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:46:14.481448   64668 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:46:14.635607   64668 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:46:14.805584   64668 docker.go:233] disabling docker service ...
	I0717 01:46:14.805638   64668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:46:14.820854   64668 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:46:14.834458   64668 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:46:14.971617   64668 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:46:15.093422   64668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:46:15.108586   64668 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:46:15.129240   64668 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 01:46:15.129303   64668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:46:15.139963   64668 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:46:15.140046   64668 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:46:15.150211   64668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:46:15.161768   64668 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:46:15.172411   64668 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:46:15.183310   64668 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:46:15.193417   64668 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:46:15.193471   64668 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:46:15.207778   64668 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:46:15.218245   64668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:46:15.346696   64668 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:46:15.512102   64668 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:46:15.512157   64668 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:46:15.518160   64668 start.go:563] Will wait 60s for crictl version
	I0717 01:46:15.518218   64668 ssh_runner.go:195] Run: which crictl
	I0717 01:46:15.524389   64668 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:46:15.583072   64668 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:46:15.583149   64668 ssh_runner.go:195] Run: crio --version
	I0717 01:46:15.622842   64668 ssh_runner.go:195] Run: crio --version
	I0717 01:46:15.666404   64668 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0717 01:46:15.667602   64668 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:46:15.670939   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:15.671450   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:46:00 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:46:15.671481   64668 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:46:15.671827   64668 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 01:46:15.677353   64668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:46:15.694711   64668 kubeadm.go:883] updating cluster {Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:46:15.694850   64668 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 01:46:15.694958   64668 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:46:15.739715   64668 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 01:46:15.739799   64668 ssh_runner.go:195] Run: which lz4
	I0717 01:46:15.744960   64668 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 01:46:15.750485   64668 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:46:15.750516   64668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 01:46:17.652671   64668 crio.go:462] duration metric: took 1.907747596s to copy over tarball
	I0717 01:46:17.652831   64668 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:46:20.411649   64668 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.758772933s)
	I0717 01:46:20.411678   64668 crio.go:469] duration metric: took 2.75891114s to extract the tarball
	I0717 01:46:20.411686   64668 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 01:46:20.456428   64668 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:46:20.512331   64668 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 01:46:20.512353   64668 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 01:46:20.512415   64668 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:46:20.512445   64668 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:46:20.512466   64668 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:46:20.512425   64668 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:46:20.512491   64668 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 01:46:20.512422   64668 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:46:20.512474   64668 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 01:46:20.512473   64668 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:46:20.514012   64668 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 01:46:20.514035   64668 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 01:46:20.514023   64668 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:46:20.514063   64668 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:46:20.514014   64668 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:46:20.514015   64668 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:46:20.514104   64668 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:46:20.514161   64668 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:46:20.671524   64668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:46:20.680806   64668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:46:20.698516   64668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 01:46:20.699369   64668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:46:20.700420   64668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 01:46:20.702410   64668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 01:46:20.730701   64668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:46:20.779553   64668 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 01:46:20.779660   64668 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:46:20.779725   64668 ssh_runner.go:195] Run: which crictl
	I0717 01:46:20.822185   64668 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 01:46:20.822237   64668 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:46:20.822291   64668 ssh_runner.go:195] Run: which crictl
	I0717 01:46:20.867670   64668 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 01:46:20.867717   64668 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 01:46:20.867743   64668 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 01:46:20.867776   64668 ssh_runner.go:195] Run: which crictl
	I0717 01:46:20.867774   64668 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:46:20.867912   64668 ssh_runner.go:195] Run: which crictl
	I0717 01:46:20.867801   64668 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 01:46:20.867983   64668 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:46:20.867862   64668 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 01:46:20.868009   64668 ssh_runner.go:195] Run: which crictl
	I0717 01:46:20.868026   64668 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 01:46:20.868069   64668 ssh_runner.go:195] Run: which crictl
	I0717 01:46:20.883302   64668 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:46:20.883338   64668 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 01:46:20.883358   64668 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:46:20.883371   64668 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:46:20.883398   64668 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 01:46:20.883401   64668 ssh_runner.go:195] Run: which crictl
	I0717 01:46:20.883986   64668 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 01:46:20.884061   64668 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 01:46:20.884861   64668 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:46:21.025372   64668 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:46:21.025400   64668 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 01:46:21.025444   64668 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 01:46:21.025487   64668 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 01:46:21.025554   64668 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 01:46:21.025604   64668 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 01:46:21.025648   64668 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 01:46:21.063305   64668 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 01:46:21.806335   64668 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:46:21.949840   64668 cache_images.go:92] duration metric: took 1.437470927s to LoadCachedImages
	W0717 01:46:21.949965   64668 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0717 01:46:21.949983   64668 kubeadm.go:934] updating node { 192.168.50.44 8443 v1.20.0 crio true true} ...
	I0717 01:46:21.950099   64668 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-901761 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.44
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:46:21.950184   64668 ssh_runner.go:195] Run: crio config
	I0717 01:46:22.011370   64668 cni.go:84] Creating CNI manager for ""
	I0717 01:46:22.011399   64668 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:46:22.011414   64668 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:46:22.011450   64668 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.44 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-901761 NodeName:old-k8s-version-901761 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.44"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.44 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 01:46:22.011615   64668 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.44
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-901761"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.44
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.44"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:46:22.011685   64668 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 01:46:22.023637   64668 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:46:22.023703   64668 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:46:22.035427   64668 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0717 01:46:22.054641   64668 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:46:22.074540   64668 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0717 01:46:22.096500   64668 ssh_runner.go:195] Run: grep 192.168.50.44	control-plane.minikube.internal$ /etc/hosts
	I0717 01:46:22.101558   64668 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.44	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:46:22.114508   64668 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:46:22.247773   64668 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:46:22.269026   64668 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761 for IP: 192.168.50.44
	I0717 01:46:22.269052   64668 certs.go:194] generating shared ca certs ...
	I0717 01:46:22.269067   64668 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:46:22.269203   64668 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:46:22.269241   64668 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:46:22.269249   64668 certs.go:256] generating profile certs ...
	I0717 01:46:22.269295   64668 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/client.key
	I0717 01:46:22.269308   64668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/client.crt with IP's: []
	I0717 01:46:22.634015   64668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/client.crt ...
	I0717 01:46:22.634056   64668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/client.crt: {Name:mk22d52e2eee694141b8e52a4c5696978ee5e5cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:46:22.634261   64668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/client.key ...
	I0717 01:46:22.634279   64668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/client.key: {Name:mk3362e7f7eac75d8cdde5aa87fe8c540e0a473d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:46:22.634364   64668 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.key.f41162e5
	I0717 01:46:22.634381   64668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.crt.f41162e5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.44]
	I0717 01:46:22.744463   64668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.crt.f41162e5 ...
	I0717 01:46:22.744495   64668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.crt.f41162e5: {Name:mkeac2c7580c10156b32edbc5217cc1477c3bdf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:46:22.746646   64668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.key.f41162e5 ...
	I0717 01:46:22.746675   64668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.key.f41162e5: {Name:mkb29240c1afe356d078a465af7526b92e02c5b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:46:22.746801   64668 certs.go:381] copying /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.crt.f41162e5 -> /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.crt
	I0717 01:46:22.746905   64668 certs.go:385] copying /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.key.f41162e5 -> /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.key
	I0717 01:46:22.746976   64668 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/proxy-client.key
	I0717 01:46:22.746995   64668 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/proxy-client.crt with IP's: []
	I0717 01:46:22.929984   64668 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/proxy-client.crt ...
	I0717 01:46:22.930011   64668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/proxy-client.crt: {Name:mkf59a9cf8df71e1db0ddd7bd150c8ae5df3748d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:46:22.930196   64668 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/proxy-client.key ...
	I0717 01:46:22.930212   64668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/proxy-client.key: {Name:mkf9d8ed48db6a4d09aae71eae2edf26e71e94f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:46:22.930422   64668 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:46:22.930457   64668 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:46:22.930467   64668 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:46:22.930485   64668 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:46:22.930504   64668 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:46:22.930532   64668 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:46:22.930580   64668 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:46:22.931098   64668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:46:22.962619   64668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:46:22.991839   64668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:46:23.018942   64668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:46:23.046485   64668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 01:46:23.072994   64668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:46:23.099309   64668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:46:23.127979   64668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 01:46:23.194954   64668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:46:23.231769   64668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:46:23.261566   64668 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:46:23.291466   64668 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:46:23.314020   64668 ssh_runner.go:195] Run: openssl version
	I0717 01:46:23.320058   64668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:46:23.330969   64668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:46:23.335702   64668 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:46:23.335770   64668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:46:23.342046   64668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:46:23.352860   64668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:46:23.363797   64668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:46:23.368598   64668 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:46:23.368653   64668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:46:23.374411   64668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:46:23.385551   64668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:46:23.396209   64668 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:46:23.400978   64668 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:46:23.401041   64668 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:46:23.407314   64668 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:46:23.418227   64668 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:46:23.423509   64668 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 01:46:23.423555   64668 kubeadm.go:392] StartCluster: {Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:46:23.423642   64668 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:46:23.423692   64668 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:46:23.462616   64668 cri.go:89] found id: ""
	I0717 01:46:23.462695   64668 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:46:23.473094   64668 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:46:23.483479   64668 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:46:23.496634   64668 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:46:23.496658   64668 kubeadm.go:157] found existing configuration files:
	
	I0717 01:46:23.496713   64668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:46:23.507349   64668 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:46:23.507417   64668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:46:23.523682   64668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:46:23.534743   64668 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:46:23.534816   64668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:46:23.544815   64668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:46:23.554877   64668 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:46:23.554952   64668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:46:23.567714   64668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:46:23.580020   64668 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:46:23.580098   64668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:46:23.591331   64668 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 01:46:23.708042   64668 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 01:46:23.708126   64668 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 01:46:23.856499   64668 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 01:46:23.856692   64668 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 01:46:23.856831   64668 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 01:46:24.111187   64668 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 01:46:24.113883   64668 out.go:204]   - Generating certificates and keys ...
	I0717 01:46:24.114019   64668 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 01:46:24.114129   64668 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 01:46:24.418936   64668 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 01:46:24.573278   64668 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 01:46:24.733305   64668 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 01:46:24.982668   64668 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 01:46:25.094937   64668 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 01:46:25.095446   64668 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-901761] and IPs [192.168.50.44 127.0.0.1 ::1]
	I0717 01:46:25.294917   64668 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 01:46:25.295337   64668 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-901761] and IPs [192.168.50.44 127.0.0.1 ::1]
	I0717 01:46:25.426768   64668 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 01:46:25.755365   64668 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 01:46:25.917283   64668 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 01:46:25.917582   64668 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 01:46:26.298878   64668 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 01:46:26.652916   64668 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 01:46:26.785974   64668 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 01:46:27.275401   64668 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 01:46:27.299490   64668 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 01:46:27.299616   64668 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 01:46:27.299662   64668 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 01:46:27.482655   64668 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 01:46:27.484437   64668 out.go:204]   - Booting up control plane ...
	I0717 01:46:27.484540   64668 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 01:46:27.502686   64668 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 01:46:27.504660   64668 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 01:46:27.506063   64668 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 01:46:27.511744   64668 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 01:47:07.508464   64668 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 01:47:07.508707   64668 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:47:07.508895   64668 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:47:12.509285   64668 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:47:12.509542   64668 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:47:22.509225   64668 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:47:22.509508   64668 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:47:42.509381   64668 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:47:42.509558   64668 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:48:22.512006   64668 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:48:22.512430   64668 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:48:22.512441   64668 kubeadm.go:310] 
	I0717 01:48:22.512554   64668 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 01:48:22.512661   64668 kubeadm.go:310] 		timed out waiting for the condition
	I0717 01:48:22.512676   64668 kubeadm.go:310] 
	I0717 01:48:22.512757   64668 kubeadm.go:310] 	This error is likely caused by:
	I0717 01:48:22.512838   64668 kubeadm.go:310] 		- The kubelet is not running
	I0717 01:48:22.513084   64668 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 01:48:22.513103   64668 kubeadm.go:310] 
	I0717 01:48:22.513349   64668 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 01:48:22.513433   64668 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 01:48:22.513511   64668 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 01:48:22.513524   64668 kubeadm.go:310] 
	I0717 01:48:22.513771   64668 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 01:48:22.513991   64668 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 01:48:22.514037   64668 kubeadm.go:310] 
	I0717 01:48:22.514361   64668 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 01:48:22.514532   64668 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 01:48:22.514753   64668 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 01:48:22.515007   64668 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 01:48:22.515027   64668 kubeadm.go:310] 
	I0717 01:48:22.515460   64668 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 01:48:22.515613   64668 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 01:48:22.515713   64668 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0717 01:48:22.515866   64668 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-901761] and IPs [192.168.50.44 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-901761] and IPs [192.168.50.44 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-901761] and IPs [192.168.50.44 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-901761] and IPs [192.168.50.44 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0717 01:48:22.515908   64668 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 01:48:23.264997   64668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:48:23.279167   64668 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:48:23.289273   64668 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:48:23.289292   64668 kubeadm.go:157] found existing configuration files:
	
	I0717 01:48:23.289338   64668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:48:23.298794   64668 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:48:23.298856   64668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:48:23.309084   64668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:48:23.318366   64668 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:48:23.318416   64668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:48:23.328684   64668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:48:23.337772   64668 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:48:23.337820   64668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:48:23.357069   64668 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:48:23.367399   64668 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:48:23.367468   64668 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:48:23.376850   64668 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 01:48:23.450173   64668 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 01:48:23.450296   64668 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 01:48:23.594766   64668 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 01:48:23.594882   64668 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 01:48:23.595002   64668 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 01:48:23.789844   64668 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 01:48:23.791777   64668 out.go:204]   - Generating certificates and keys ...
	I0717 01:48:23.791871   64668 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 01:48:23.791926   64668 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 01:48:23.792024   64668 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 01:48:23.792116   64668 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 01:48:23.792222   64668 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 01:48:23.792284   64668 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 01:48:23.792370   64668 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 01:48:23.792443   64668 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 01:48:23.792815   64668 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 01:48:23.793730   64668 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 01:48:23.793900   64668 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 01:48:23.793979   64668 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 01:48:24.088771   64668 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 01:48:24.280820   64668 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 01:48:24.553826   64668 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 01:48:24.708537   64668 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 01:48:24.722925   64668 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 01:48:24.725141   64668 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 01:48:24.725335   64668 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 01:48:24.864767   64668 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 01:48:24.866562   64668 out.go:204]   - Booting up control plane ...
	I0717 01:48:24.866689   64668 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 01:48:24.875144   64668 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 01:48:24.875985   64668 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 01:48:24.876813   64668 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 01:48:24.878932   64668 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 01:49:04.882131   64668 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 01:49:04.882493   64668 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:49:04.882749   64668 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:49:09.883393   64668 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:49:09.883579   64668 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:49:19.883417   64668 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:49:19.883676   64668 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:49:39.882270   64668 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:49:39.882480   64668 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:50:19.881693   64668 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 01:50:19.881854   64668 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 01:50:19.881863   64668 kubeadm.go:310] 
	I0717 01:50:19.881903   64668 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 01:50:19.881933   64668 kubeadm.go:310] 		timed out waiting for the condition
	I0717 01:50:19.881960   64668 kubeadm.go:310] 
	I0717 01:50:19.882046   64668 kubeadm.go:310] 	This error is likely caused by:
	I0717 01:50:19.882091   64668 kubeadm.go:310] 		- The kubelet is not running
	I0717 01:50:19.882188   64668 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 01:50:19.882212   64668 kubeadm.go:310] 
	I0717 01:50:19.882353   64668 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 01:50:19.882394   64668 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 01:50:19.882430   64668 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 01:50:19.882439   64668 kubeadm.go:310] 
	I0717 01:50:19.882580   64668 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 01:50:19.882673   64668 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 01:50:19.882681   64668 kubeadm.go:310] 
	I0717 01:50:19.882827   64668 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 01:50:19.882918   64668 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 01:50:19.882976   64668 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 01:50:19.883035   64668 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 01:50:19.883041   64668 kubeadm.go:310] 
	I0717 01:50:19.884015   64668 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 01:50:19.884098   64668 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 01:50:19.884195   64668 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 01:50:19.884253   64668 kubeadm.go:394] duration metric: took 3m56.460697669s to StartCluster
	I0717 01:50:19.884289   64668 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:50:19.884335   64668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:50:19.924737   64668 cri.go:89] found id: ""
	I0717 01:50:19.924764   64668 logs.go:276] 0 containers: []
	W0717 01:50:19.924772   64668 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:50:19.924778   64668 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:50:19.924827   64668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:50:19.973161   64668 cri.go:89] found id: ""
	I0717 01:50:19.973188   64668 logs.go:276] 0 containers: []
	W0717 01:50:19.973198   64668 logs.go:278] No container was found matching "etcd"
	I0717 01:50:19.973205   64668 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:50:19.973262   64668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:50:20.018848   64668 cri.go:89] found id: ""
	I0717 01:50:20.018878   64668 logs.go:276] 0 containers: []
	W0717 01:50:20.018889   64668 logs.go:278] No container was found matching "coredns"
	I0717 01:50:20.018896   64668 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:50:20.018955   64668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:50:20.053365   64668 cri.go:89] found id: ""
	I0717 01:50:20.053387   64668 logs.go:276] 0 containers: []
	W0717 01:50:20.053396   64668 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:50:20.053401   64668 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:50:20.053447   64668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:50:20.086729   64668 cri.go:89] found id: ""
	I0717 01:50:20.086752   64668 logs.go:276] 0 containers: []
	W0717 01:50:20.086760   64668 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:50:20.086764   64668 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:50:20.086810   64668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:50:20.121028   64668 cri.go:89] found id: ""
	I0717 01:50:20.121055   64668 logs.go:276] 0 containers: []
	W0717 01:50:20.121062   64668 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:50:20.121068   64668 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:50:20.121117   64668 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:50:20.155203   64668 cri.go:89] found id: ""
	I0717 01:50:20.155223   64668 logs.go:276] 0 containers: []
	W0717 01:50:20.155230   64668 logs.go:278] No container was found matching "kindnet"
	I0717 01:50:20.155239   64668 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:50:20.155253   64668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:50:20.253015   64668 logs.go:123] Gathering logs for container status ...
	I0717 01:50:20.253050   64668 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:50:20.295010   64668 logs.go:123] Gathering logs for kubelet ...
	I0717 01:50:20.295042   64668 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:50:20.346041   64668 logs.go:123] Gathering logs for dmesg ...
	I0717 01:50:20.346071   64668 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:50:20.359652   64668 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:50:20.359677   64668 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:50:20.482145   64668 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0717 01:50:20.482191   64668 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 01:50:20.482234   64668 out.go:239] * 
	W0717 01:50:20.482289   64668 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 01:50:20.482322   64668 out.go:239] * 
	W0717 01:50:20.483212   64668 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 01:50:20.486125   64668 out.go:177] 
	W0717 01:50:20.487385   64668 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 01:50:20.487425   64668 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 01:50:20.487449   64668 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 01:50:20.489120   64668 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-901761 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-901761 -n old-k8s-version-901761
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-901761 -n old-k8s-version-901761: exit status 6 (223.79239ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:50:20.754020   71001 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-901761" does not appear in /home/jenkins/minikube-integration/19264-3908/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-901761" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (288.82s)
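The failure above is kubeadm giving up after the kubelet never answers its health check on 127.0.0.1:10248. A minimal triage sketch for this kind of run, assuming the commands are executed against the same profile (old-k8s-version-901761) via minikube ssh; the systemctl/journalctl/crictl invocations are the ones the kubeadm output itself recommends, and the final retry adds the cgroup-driver override the log suggests:

	# Check whether the kubelet is running and why it may have exited
	minikube -p old-k8s-version-901761 ssh -- sudo systemctl status kubelet
	minikube -p old-k8s-version-901761 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 100
	# List any control-plane containers CRI-O managed to start (kube-apiserver, etcd, ...)
	minikube -p old-k8s-version-901761 ssh -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry the start with the kubelet cgroup driver pinned to systemd, per the suggestion in the log
	minikube start -p old-k8s-version-901761 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

If the journal shows the kubelet exiting on cgroup configuration, the last command mirrors the remediation minikube prints alongside issue 4172; otherwise the crictl listing usually points at the first control-plane container that failed to come up.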

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-940222 --alsologtostderr -v=3
E0717 01:47:58.379484   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
E0717 01:47:59.045661   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/auto-894370/client.crt: no such file or directory
E0717 01:47:59.050953   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/auto-894370/client.crt: no such file or directory
E0717 01:47:59.061223   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/auto-894370/client.crt: no such file or directory
E0717 01:47:59.081572   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/auto-894370/client.crt: no such file or directory
E0717 01:47:59.121951   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/auto-894370/client.crt: no such file or directory
E0717 01:47:59.202273   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/auto-894370/client.crt: no such file or directory
E0717 01:47:59.362633   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/auto-894370/client.crt: no such file or directory
E0717 01:47:59.683333   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/auto-894370/client.crt: no such file or directory
E0717 01:48:00.324350   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/auto-894370/client.crt: no such file or directory
E0717 01:48:00.388718   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kindnet-894370/client.crt: no such file or directory
E0717 01:48:00.393996   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kindnet-894370/client.crt: no such file or directory
E0717 01:48:00.404226   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kindnet-894370/client.crt: no such file or directory
E0717 01:48:00.424490   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kindnet-894370/client.crt: no such file or directory
E0717 01:48:00.464903   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kindnet-894370/client.crt: no such file or directory
E0717 01:48:00.545223   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kindnet-894370/client.crt: no such file or directory
E0717 01:48:00.705694   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kindnet-894370/client.crt: no such file or directory
E0717 01:48:01.026390   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kindnet-894370/client.crt: no such file or directory
E0717 01:48:01.604656   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/auto-894370/client.crt: no such file or directory
E0717 01:48:01.667069   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kindnet-894370/client.crt: no such file or directory
E0717 01:48:02.947737   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kindnet-894370/client.crt: no such file or directory
E0717 01:48:04.164992   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/auto-894370/client.crt: no such file or directory
E0717 01:48:05.508027   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kindnet-894370/client.crt: no such file or directory
E0717 01:48:09.285837   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/auto-894370/client.crt: no such file or directory
E0717 01:48:10.629218   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kindnet-894370/client.crt: no such file or directory
E0717 01:48:19.527013   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/auto-894370/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-940222 --alsologtostderr -v=3: exit status 82 (2m0.458922675s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-940222"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 01:47:51.365324   70074 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:47:51.365573   70074 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:47:51.365582   70074 out.go:304] Setting ErrFile to fd 2...
	I0717 01:47:51.365587   70074 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:47:51.365802   70074 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 01:47:51.366069   70074 out.go:298] Setting JSON to false
	I0717 01:47:51.366172   70074 mustload.go:65] Loading cluster: embed-certs-940222
	I0717 01:47:51.366501   70074 config.go:182] Loaded profile config "embed-certs-940222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:47:51.366603   70074 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/config.json ...
	I0717 01:47:51.366791   70074 mustload.go:65] Loading cluster: embed-certs-940222
	I0717 01:47:51.366919   70074 config.go:182] Loaded profile config "embed-certs-940222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:47:51.366953   70074 stop.go:39] StopHost: embed-certs-940222
	I0717 01:47:51.367412   70074 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:47:51.367453   70074 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:47:51.383507   70074 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32839
	I0717 01:47:51.384011   70074 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:47:51.384714   70074 main.go:141] libmachine: Using API Version  1
	I0717 01:47:51.384752   70074 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:47:51.385105   70074 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:47:51.387734   70074 out.go:177] * Stopping node "embed-certs-940222"  ...
	I0717 01:47:51.389224   70074 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0717 01:47:51.389245   70074 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:47:51.389498   70074 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0717 01:47:51.389523   70074 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:47:51.392841   70074 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:47:51.393390   70074 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:46:54 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:47:51.393420   70074 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:47:51.393565   70074 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:47:51.393749   70074 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:47:51.393933   70074 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:47:51.394072   70074 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:47:51.492928   70074 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0717 01:47:51.560031   70074 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0717 01:47:51.581344   70074 main.go:141] libmachine: Stopping "embed-certs-940222"...
	I0717 01:47:51.581404   70074 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:47:51.583399   70074 main.go:141] libmachine: (embed-certs-940222) Calling .Stop
	I0717 01:47:51.587602   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 0/120
	I0717 01:47:52.589460   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 1/120
	I0717 01:47:53.591742   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 2/120
	I0717 01:47:54.593212   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 3/120
	I0717 01:47:55.594741   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 4/120
	I0717 01:47:56.596841   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 5/120
	I0717 01:47:57.598226   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 6/120
	I0717 01:47:58.599561   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 7/120
	I0717 01:47:59.601002   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 8/120
	I0717 01:48:00.602344   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 9/120
	I0717 01:48:01.604530   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 10/120
	I0717 01:48:02.605833   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 11/120
	I0717 01:48:03.607660   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 12/120
	I0717 01:48:04.609027   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 13/120
	I0717 01:48:05.610372   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 14/120
	I0717 01:48:06.612412   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 15/120
	I0717 01:48:07.614338   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 16/120
	I0717 01:48:08.615938   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 17/120
	I0717 01:48:09.617457   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 18/120
	I0717 01:48:10.618904   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 19/120
	I0717 01:48:11.620880   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 20/120
	I0717 01:48:12.622381   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 21/120
	I0717 01:48:13.623969   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 22/120
	I0717 01:48:14.625931   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 23/120
	I0717 01:48:15.627365   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 24/120
	I0717 01:48:16.629476   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 25/120
	I0717 01:48:17.630599   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 26/120
	I0717 01:48:18.632152   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 27/120
	I0717 01:48:19.634469   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 28/120
	I0717 01:48:20.635628   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 29/120
	I0717 01:48:21.637675   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 30/120
	I0717 01:48:22.639141   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 31/120
	I0717 01:48:23.641141   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 32/120
	I0717 01:48:24.643311   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 33/120
	I0717 01:48:25.644912   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 34/120
	I0717 01:48:26.646542   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 35/120
	I0717 01:48:27.648130   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 36/120
	I0717 01:48:28.649458   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 37/120
	I0717 01:48:29.651068   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 38/120
	I0717 01:48:30.652867   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 39/120
	I0717 01:48:31.654200   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 40/120
	I0717 01:48:32.655579   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 41/120
	I0717 01:48:33.657194   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 42/120
	I0717 01:48:34.658391   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 43/120
	I0717 01:48:35.659681   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 44/120
	I0717 01:48:36.661469   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 45/120
	I0717 01:48:37.662921   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 46/120
	I0717 01:48:38.664302   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 47/120
	I0717 01:48:39.665807   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 48/120
	I0717 01:48:40.667295   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 49/120
	I0717 01:48:41.669454   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 50/120
	I0717 01:48:42.670943   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 51/120
	I0717 01:48:43.672397   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 52/120
	I0717 01:48:44.673949   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 53/120
	I0717 01:48:45.675299   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 54/120
	I0717 01:48:46.677275   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 55/120
	I0717 01:48:47.679022   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 56/120
	I0717 01:48:48.681122   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 57/120
	I0717 01:48:49.682442   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 58/120
	I0717 01:48:50.683848   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 59/120
	I0717 01:48:51.686064   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 60/120
	I0717 01:48:52.687414   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 61/120
	I0717 01:48:53.688785   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 62/120
	I0717 01:48:54.690069   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 63/120
	I0717 01:48:55.691438   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 64/120
	I0717 01:48:56.693475   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 65/120
	I0717 01:48:57.694822   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 66/120
	I0717 01:48:58.696090   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 67/120
	I0717 01:48:59.697428   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 68/120
	I0717 01:49:00.698718   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 69/120
	I0717 01:49:01.700926   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 70/120
	I0717 01:49:02.702252   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 71/120
	I0717 01:49:03.703618   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 72/120
	I0717 01:49:04.704942   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 73/120
	I0717 01:49:05.706320   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 74/120
	I0717 01:49:06.708759   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 75/120
	I0717 01:49:07.710026   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 76/120
	I0717 01:49:08.711413   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 77/120
	I0717 01:49:09.712700   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 78/120
	I0717 01:49:10.714064   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 79/120
	I0717 01:49:11.716442   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 80/120
	I0717 01:49:12.717708   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 81/120
	I0717 01:49:13.719220   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 82/120
	I0717 01:49:14.720455   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 83/120
	I0717 01:49:15.722405   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 84/120
	I0717 01:49:16.724393   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 85/120
	I0717 01:49:17.725961   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 86/120
	I0717 01:49:18.727240   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 87/120
	I0717 01:49:19.728664   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 88/120
	I0717 01:49:20.730024   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 89/120
	I0717 01:49:21.732458   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 90/120
	I0717 01:49:22.733833   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 91/120
	I0717 01:49:23.735329   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 92/120
	I0717 01:49:24.736619   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 93/120
	I0717 01:49:25.737965   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 94/120
	I0717 01:49:26.739882   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 95/120
	I0717 01:49:27.741227   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 96/120
	I0717 01:49:28.742481   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 97/120
	I0717 01:49:29.743701   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 98/120
	I0717 01:49:30.744945   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 99/120
	I0717 01:49:31.747120   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 100/120
	I0717 01:49:32.748380   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 101/120
	I0717 01:49:33.749614   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 102/120
	I0717 01:49:34.750967   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 103/120
	I0717 01:49:35.752374   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 104/120
	I0717 01:49:36.754309   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 105/120
	I0717 01:49:37.755826   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 106/120
	I0717 01:49:38.756951   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 107/120
	I0717 01:49:39.758226   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 108/120
	I0717 01:49:40.759435   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 109/120
	I0717 01:49:41.761536   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 110/120
	I0717 01:49:42.762873   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 111/120
	I0717 01:49:43.765015   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 112/120
	I0717 01:49:44.766347   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 113/120
	I0717 01:49:45.767772   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 114/120
	I0717 01:49:46.769623   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 115/120
	I0717 01:49:47.770900   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 116/120
	I0717 01:49:48.772227   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 117/120
	I0717 01:49:49.773354   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 118/120
	I0717 01:49:50.774668   70074 main.go:141] libmachine: (embed-certs-940222) Waiting for machine to stop 119/120
	I0717 01:49:51.775020   70074 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0717 01:49:51.775087   70074 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 01:49:51.776935   70074 out.go:177] 
	W0717 01:49:51.778152   70074 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0717 01:49:51.778165   70074 out.go:239] * 
	* 
	W0717 01:49:51.780892   70074 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 01:49:51.782094   70074 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-940222 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-940222 -n embed-certs-940222
E0717 01:49:59.502978   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/client.crt: no such file or directory
E0717 01:50:03.264779   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/custom-flannel-894370/client.crt: no such file or directory
E0717 01:50:03.270029   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/custom-flannel-894370/client.crt: no such file or directory
E0717 01:50:03.280269   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/custom-flannel-894370/client.crt: no such file or directory
E0717 01:50:03.300577   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/custom-flannel-894370/client.crt: no such file or directory
E0717 01:50:03.340869   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/custom-flannel-894370/client.crt: no such file or directory
E0717 01:50:03.421298   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/custom-flannel-894370/client.crt: no such file or directory
E0717 01:50:03.581599   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/custom-flannel-894370/client.crt: no such file or directory
E0717 01:50:03.902179   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/custom-flannel-894370/client.crt: no such file or directory
E0717 01:50:04.542399   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/custom-flannel-894370/client.crt: no such file or directory
E0717 01:50:05.822590   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/custom-flannel-894370/client.crt: no such file or directory
E0717 01:50:08.383104   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/custom-flannel-894370/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-940222 -n embed-certs-940222: exit status 3 (18.61153589s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:50:10.394842   70805 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.225:22: connect: no route to host
	E0717 01:50:10.394861   70805 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.225:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-940222" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.07s)
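The failure above is a stop timeout: the kvm2 driver polls the guest's state roughly once per second for 120 attempts ("Waiting for machine to stop N/120"), and when the domain still reports "Running" after the last attempt the stop is surfaced as GUEST_STOP_TIMEOUT and the command exits with status 82, which the test then records as a failed first stop. A minimal Go sketch of that poll-and-timeout pattern, assuming a hypothetical getState helper in place of the driver's GetState call (an illustration of the behaviour seen in the log, not the actual minikube/libmachine source):

// Simplified sketch of the poll-and-timeout behaviour visible in the log:
// the guest state is checked about once per second, up to 120 attempts,
// and a state that is still "Running" at the end becomes the stop error
// that minikube reports as GUEST_STOP_TIMEOUT (exit status 82).
package main

import (
	"errors"
	"fmt"
	"time"
)

// getState is a hypothetical stand-in for the kvm2 driver's GetState call.
func getState() string { return "Running" }

func waitForStop(attempts int) error {
	for i := 0; i < attempts; i++ {
		if getState() == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	if err := waitForStop(120); err != nil {
		// In minikube this is wrapped and surfaced as GUEST_STOP_TIMEOUT.
		fmt.Println("stop err:", err)
	}
}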

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-738184 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-738184 --alsologtostderr -v=3: exit status 82 (2m0.506030667s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-738184"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 01:48:34.025144   70407 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:48:34.025382   70407 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:48:34.025390   70407 out.go:304] Setting ErrFile to fd 2...
	I0717 01:48:34.025394   70407 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:48:34.025592   70407 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 01:48:34.025819   70407 out.go:298] Setting JSON to false
	I0717 01:48:34.025888   70407 mustload.go:65] Loading cluster: default-k8s-diff-port-738184
	I0717 01:48:34.026196   70407 config.go:182] Loaded profile config "default-k8s-diff-port-738184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:48:34.026260   70407 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/config.json ...
	I0717 01:48:34.026426   70407 mustload.go:65] Loading cluster: default-k8s-diff-port-738184
	I0717 01:48:34.026529   70407 config.go:182] Loaded profile config "default-k8s-diff-port-738184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:48:34.026582   70407 stop.go:39] StopHost: default-k8s-diff-port-738184
	I0717 01:48:34.026984   70407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:48:34.027025   70407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:48:34.041701   70407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38701
	I0717 01:48:34.042202   70407 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:48:34.042811   70407 main.go:141] libmachine: Using API Version  1
	I0717 01:48:34.042839   70407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:48:34.043207   70407 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:48:34.045785   70407 out.go:177] * Stopping node "default-k8s-diff-port-738184"  ...
	I0717 01:48:34.047300   70407 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0717 01:48:34.047336   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:48:34.047641   70407 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0717 01:48:34.047671   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:48:34.050963   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:48:34.051462   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:47:40 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:48:34.051504   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:48:34.051601   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:48:34.051782   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:48:34.051934   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:48:34.052083   70407 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:48:34.174089   70407 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0717 01:48:34.234129   70407 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0717 01:48:34.294189   70407 main.go:141] libmachine: Stopping "default-k8s-diff-port-738184"...
	I0717 01:48:34.294231   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:48:34.295878   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Stop
	I0717 01:48:34.299083   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 0/120
	I0717 01:48:35.300450   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 1/120
	I0717 01:48:36.301758   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 2/120
	I0717 01:48:37.303106   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 3/120
	I0717 01:48:38.304624   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 4/120
	I0717 01:48:39.306442   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 5/120
	I0717 01:48:40.307807   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 6/120
	I0717 01:48:41.309072   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 7/120
	I0717 01:48:42.310627   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 8/120
	I0717 01:48:43.311949   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 9/120
	I0717 01:48:44.313900   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 10/120
	I0717 01:48:45.315324   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 11/120
	I0717 01:48:46.316682   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 12/120
	I0717 01:48:47.318105   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 13/120
	I0717 01:48:48.319397   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 14/120
	I0717 01:48:49.321601   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 15/120
	I0717 01:48:50.323046   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 16/120
	I0717 01:48:51.324490   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 17/120
	I0717 01:48:52.326082   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 18/120
	I0717 01:48:53.327490   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 19/120
	I0717 01:48:54.329023   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 20/120
	I0717 01:48:55.330355   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 21/120
	I0717 01:48:56.331714   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 22/120
	I0717 01:48:57.333020   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 23/120
	I0717 01:48:58.334379   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 24/120
	I0717 01:48:59.336376   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 25/120
	I0717 01:49:00.337778   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 26/120
	I0717 01:49:01.339258   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 27/120
	I0717 01:49:02.340612   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 28/120
	I0717 01:49:03.342046   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 29/120
	I0717 01:49:04.344061   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 30/120
	I0717 01:49:05.345398   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 31/120
	I0717 01:49:06.346743   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 32/120
	I0717 01:49:07.347991   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 33/120
	I0717 01:49:08.349362   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 34/120
	I0717 01:49:09.351350   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 35/120
	I0717 01:49:10.352735   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 36/120
	I0717 01:49:11.354031   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 37/120
	I0717 01:49:12.355760   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 38/120
	I0717 01:49:13.357131   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 39/120
	I0717 01:49:14.359269   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 40/120
	I0717 01:49:15.360541   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 41/120
	I0717 01:49:16.361924   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 42/120
	I0717 01:49:17.363240   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 43/120
	I0717 01:49:18.364655   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 44/120
	I0717 01:49:19.366699   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 45/120
	I0717 01:49:20.368149   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 46/120
	I0717 01:49:21.369502   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 47/120
	I0717 01:49:22.370953   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 48/120
	I0717 01:49:23.372423   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 49/120
	I0717 01:49:24.374537   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 50/120
	I0717 01:49:25.375923   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 51/120
	I0717 01:49:26.377147   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 52/120
	I0717 01:49:27.378461   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 53/120
	I0717 01:49:28.379625   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 54/120
	I0717 01:49:29.381375   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 55/120
	I0717 01:49:30.383119   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 56/120
	I0717 01:49:31.384792   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 57/120
	I0717 01:49:32.386128   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 58/120
	I0717 01:49:33.387409   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 59/120
	I0717 01:49:34.389536   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 60/120
	I0717 01:49:35.390786   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 61/120
	I0717 01:49:36.393068   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 62/120
	I0717 01:49:37.394363   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 63/120
	I0717 01:49:38.395909   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 64/120
	I0717 01:49:39.397597   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 65/120
	I0717 01:49:40.399452   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 66/120
	I0717 01:49:41.400684   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 67/120
	I0717 01:49:42.402580   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 68/120
	I0717 01:49:43.403815   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 69/120
	I0717 01:49:44.405985   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 70/120
	I0717 01:49:45.407230   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 71/120
	I0717 01:49:46.408550   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 72/120
	I0717 01:49:47.409831   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 73/120
	I0717 01:49:48.411246   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 74/120
	I0717 01:49:49.413155   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 75/120
	I0717 01:49:50.414638   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 76/120
	I0717 01:49:51.415975   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 77/120
	I0717 01:49:52.417333   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 78/120
	I0717 01:49:53.418610   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 79/120
	I0717 01:49:54.420696   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 80/120
	I0717 01:49:55.422014   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 81/120
	I0717 01:49:56.423197   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 82/120
	I0717 01:49:57.424415   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 83/120
	I0717 01:49:58.425685   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 84/120
	I0717 01:49:59.427605   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 85/120
	I0717 01:50:00.428934   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 86/120
	I0717 01:50:01.430295   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 87/120
	I0717 01:50:02.431975   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 88/120
	I0717 01:50:03.433122   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 89/120
	I0717 01:50:04.435095   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 90/120
	I0717 01:50:05.436842   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 91/120
	I0717 01:50:06.437973   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 92/120
	I0717 01:50:07.439564   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 93/120
	I0717 01:50:08.440798   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 94/120
	I0717 01:50:09.442678   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 95/120
	I0717 01:50:10.443856   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 96/120
	I0717 01:50:11.445198   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 97/120
	I0717 01:50:12.446488   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 98/120
	I0717 01:50:13.447737   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 99/120
	I0717 01:50:14.449766   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 100/120
	I0717 01:50:15.451268   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 101/120
	I0717 01:50:16.452539   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 102/120
	I0717 01:50:17.453832   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 103/120
	I0717 01:50:18.455108   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 104/120
	I0717 01:50:19.456834   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 105/120
	I0717 01:50:20.458616   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 106/120
	I0717 01:50:21.459814   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 107/120
	I0717 01:50:22.461094   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 108/120
	I0717 01:50:23.462465   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 109/120
	I0717 01:50:24.464728   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 110/120
	I0717 01:50:25.466094   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 111/120
	I0717 01:50:26.467529   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 112/120
	I0717 01:50:27.468997   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 113/120
	I0717 01:50:28.470405   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 114/120
	I0717 01:50:29.472361   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 115/120
	I0717 01:50:30.473662   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 116/120
	I0717 01:50:31.475185   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 117/120
	I0717 01:50:32.476573   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 118/120
	I0717 01:50:33.477881   70407 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for machine to stop 119/120
	I0717 01:50:34.479089   70407 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0717 01:50:34.479149   70407 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 01:50:34.481333   70407 out.go:177] 
	W0717 01:50:34.482861   70407 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0717 01:50:34.482881   70407 out.go:239] * 
	* 
	W0717 01:50:34.486643   70407 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 01:50:34.488069   70407 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-738184 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-738184 -n default-k8s-diff-port-738184
E0717 01:50:35.381715   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/enable-default-cni-894370/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-738184 -n default-k8s-diff-port-738184: exit status 3 (18.660531442s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:50:53.150810   71219 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host
	E0717 01:50:53.150827   71219 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-738184" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.17s)
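The post-mortem status check above reports the host as "Error" because, with the guest left in this half-stopped state, the node's SSH endpoint (192.168.39.170:22) is unreachable and the dial fails with "no route to host", so the status command falls back to an error state and the helper skips log retrieval. A minimal sketch of that fallback, assuming a hypothetical hostState helper rather than minikube's actual status.go logic:

// Simplified sketch of why the post-mortem check reports state "Error":
// the TCP dial to the guest's SSH port fails, so no session can be opened
// and the host status is reported as an error rather than Running/Stopped.
package main

import (
	"fmt"
	"net"
	"time"
)

func hostState(addr string) string {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		fmt.Println("status error:", err) // e.g. "connect: no route to host"
		return "Error"
	}
	conn.Close()
	return "Running"
}

func main() {
	fmt.Println(hostState("192.168.39.170:22"))
}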

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-391501 --alsologtostderr -v=3
E0717 01:48:40.008180   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/auto-894370/client.crt: no such file or directory
E0717 01:48:41.350434   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kindnet-894370/client.crt: no such file or directory
E0717 01:49:20.968793   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/auto-894370/client.crt: no such file or directory
E0717 01:49:22.310597   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kindnet-894370/client.crt: no such file or directory
E0717 01:49:39.021563   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/client.crt: no such file or directory
E0717 01:49:39.026792   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/client.crt: no such file or directory
E0717 01:49:39.037033   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/client.crt: no such file or directory
E0717 01:49:39.057372   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/client.crt: no such file or directory
E0717 01:49:39.098423   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/client.crt: no such file or directory
E0717 01:49:39.178755   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/client.crt: no such file or directory
E0717 01:49:39.339130   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/client.crt: no such file or directory
E0717 01:49:39.660127   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/client.crt: no such file or directory
E0717 01:49:40.300887   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/client.crt: no such file or directory
E0717 01:49:41.581077   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/client.crt: no such file or directory
E0717 01:49:44.141744   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/client.crt: no such file or directory
E0717 01:49:49.262314   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-391501 --alsologtostderr -v=3: exit status 82 (2m0.475506296s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-391501"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 01:48:38.757340   70492 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:48:38.757470   70492 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:48:38.757482   70492 out.go:304] Setting ErrFile to fd 2...
	I0717 01:48:38.757488   70492 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:48:38.757658   70492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 01:48:38.757869   70492 out.go:298] Setting JSON to false
	I0717 01:48:38.757938   70492 mustload.go:65] Loading cluster: no-preload-391501
	I0717 01:48:38.758233   70492 config.go:182] Loaded profile config "no-preload-391501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:48:38.758293   70492 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/config.json ...
	I0717 01:48:38.758461   70492 mustload.go:65] Loading cluster: no-preload-391501
	I0717 01:48:38.758583   70492 config.go:182] Loaded profile config "no-preload-391501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:48:38.758620   70492 stop.go:39] StopHost: no-preload-391501
	I0717 01:48:38.758988   70492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:48:38.759041   70492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:48:38.773184   70492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44733
	I0717 01:48:38.773561   70492 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:48:38.774076   70492 main.go:141] libmachine: Using API Version  1
	I0717 01:48:38.774101   70492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:48:38.774396   70492 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:48:38.777592   70492 out.go:177] * Stopping node "no-preload-391501"  ...
	I0717 01:48:38.778797   70492 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0717 01:48:38.778822   70492 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:48:38.779021   70492 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0717 01:48:38.779044   70492 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:48:38.781669   70492 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:48:38.782123   70492 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:46:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:48:38.782162   70492 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:48:38.782297   70492 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:48:38.782461   70492 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:48:38.782631   70492 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:48:38.782734   70492 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 01:48:38.882355   70492 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0717 01:48:38.941578   70492 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0717 01:48:38.999869   70492 main.go:141] libmachine: Stopping "no-preload-391501"...
	I0717 01:48:38.999926   70492 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 01:48:39.001347   70492 main.go:141] libmachine: (no-preload-391501) Calling .Stop
	I0717 01:48:39.004459   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 0/120
	I0717 01:48:40.005917   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 1/120
	I0717 01:48:41.007434   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 2/120
	I0717 01:48:42.008766   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 3/120
	I0717 01:48:43.010266   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 4/120
	I0717 01:48:44.011534   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 5/120
	I0717 01:48:45.013099   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 6/120
	I0717 01:48:46.014469   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 7/120
	I0717 01:48:47.015801   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 8/120
	I0717 01:48:48.017077   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 9/120
	I0717 01:48:49.018499   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 10/120
	I0717 01:48:50.019877   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 11/120
	I0717 01:48:51.021414   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 12/120
	I0717 01:48:52.022833   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 13/120
	I0717 01:48:53.024247   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 14/120
	I0717 01:48:54.026208   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 15/120
	I0717 01:48:55.027595   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 16/120
	I0717 01:48:56.029066   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 17/120
	I0717 01:48:57.030384   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 18/120
	I0717 01:48:58.031721   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 19/120
	I0717 01:48:59.033944   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 20/120
	I0717 01:49:00.035348   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 21/120
	I0717 01:49:01.036656   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 22/120
	I0717 01:49:02.037983   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 23/120
	I0717 01:49:03.039452   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 24/120
	I0717 01:49:04.041383   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 25/120
	I0717 01:49:05.042883   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 26/120
	I0717 01:49:06.044240   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 27/120
	I0717 01:49:07.045582   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 28/120
	I0717 01:49:08.046982   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 29/120
	I0717 01:49:09.049104   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 30/120
	I0717 01:49:10.050432   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 31/120
	I0717 01:49:11.051805   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 32/120
	I0717 01:49:12.053326   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 33/120
	I0717 01:49:13.054678   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 34/120
	I0717 01:49:14.056363   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 35/120
	I0717 01:49:15.057674   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 36/120
	I0717 01:49:16.058885   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 37/120
	I0717 01:49:17.060397   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 38/120
	I0717 01:49:18.061604   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 39/120
	I0717 01:49:19.063867   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 40/120
	I0717 01:49:20.065223   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 41/120
	I0717 01:49:21.066622   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 42/120
	I0717 01:49:22.068163   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 43/120
	I0717 01:49:23.069528   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 44/120
	I0717 01:49:24.071553   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 45/120
	I0717 01:49:25.072966   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 46/120
	I0717 01:49:26.074230   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 47/120
	I0717 01:49:27.075514   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 48/120
	I0717 01:49:28.076733   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 49/120
	I0717 01:49:29.078994   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 50/120
	I0717 01:49:30.080281   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 51/120
	I0717 01:49:31.081601   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 52/120
	I0717 01:49:32.083046   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 53/120
	I0717 01:49:33.084388   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 54/120
	I0717 01:49:34.086410   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 55/120
	I0717 01:49:35.087705   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 56/120
	I0717 01:49:36.089365   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 57/120
	I0717 01:49:37.090682   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 58/120
	I0717 01:49:38.091976   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 59/120
	I0717 01:49:39.094185   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 60/120
	I0717 01:49:40.095448   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 61/120
	I0717 01:49:41.096795   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 62/120
	I0717 01:49:42.098133   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 63/120
	I0717 01:49:43.099495   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 64/120
	I0717 01:49:44.101402   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 65/120
	I0717 01:49:45.102735   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 66/120
	I0717 01:49:46.104128   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 67/120
	I0717 01:49:47.105531   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 68/120
	I0717 01:49:48.106834   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 69/120
	I0717 01:49:49.108848   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 70/120
	I0717 01:49:50.110216   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 71/120
	I0717 01:49:51.111760   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 72/120
	I0717 01:49:52.113012   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 73/120
	I0717 01:49:53.114332   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 74/120
	I0717 01:49:54.116387   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 75/120
	I0717 01:49:55.117811   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 76/120
	I0717 01:49:56.119210   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 77/120
	I0717 01:49:57.120565   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 78/120
	I0717 01:49:58.121762   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 79/120
	I0717 01:49:59.123238   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 80/120
	I0717 01:50:00.124569   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 81/120
	I0717 01:50:01.125789   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 82/120
	I0717 01:50:02.127150   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 83/120
	I0717 01:50:03.128321   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 84/120
	I0717 01:50:04.130106   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 85/120
	I0717 01:50:05.131544   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 86/120
	I0717 01:50:06.132729   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 87/120
	I0717 01:50:07.133966   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 88/120
	I0717 01:50:08.135251   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 89/120
	I0717 01:50:09.137274   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 90/120
	I0717 01:50:10.138639   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 91/120
	I0717 01:50:11.140314   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 92/120
	I0717 01:50:12.141629   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 93/120
	I0717 01:50:13.142868   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 94/120
	I0717 01:50:14.144815   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 95/120
	I0717 01:50:15.146151   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 96/120
	I0717 01:50:16.147484   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 97/120
	I0717 01:50:17.148800   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 98/120
	I0717 01:50:18.150012   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 99/120
	I0717 01:50:19.152121   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 100/120
	I0717 01:50:20.153608   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 101/120
	I0717 01:50:21.154670   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 102/120
	I0717 01:50:22.155981   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 103/120
	I0717 01:50:23.158463   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 104/120
	I0717 01:50:24.160599   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 105/120
	I0717 01:50:25.161968   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 106/120
	I0717 01:50:26.163677   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 107/120
	I0717 01:50:27.165042   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 108/120
	I0717 01:50:28.166355   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 109/120
	I0717 01:50:29.168557   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 110/120
	I0717 01:50:30.169963   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 111/120
	I0717 01:50:31.171363   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 112/120
	I0717 01:50:32.173303   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 113/120
	I0717 01:50:33.174691   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 114/120
	I0717 01:50:34.176797   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 115/120
	I0717 01:50:35.178197   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 116/120
	I0717 01:50:36.179644   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 117/120
	I0717 01:50:37.181655   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 118/120
	I0717 01:50:38.183143   70492 main.go:141] libmachine: (no-preload-391501) Waiting for machine to stop 119/120
	I0717 01:50:39.184439   70492 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0717 01:50:39.184488   70492 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 01:50:39.186503   70492 out.go:177] 
	W0717 01:50:39.187835   70492 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0717 01:50:39.187848   70492 out.go:239] * 
	* 
	W0717 01:50:39.190454   70492 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 01:50:39.191748   70492 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-391501 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-391501 -n no-preload-391501
E0717 01:50:42.890027   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/auto-894370/client.crt: no such file or directory
E0717 01:50:44.225167   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/custom-flannel-894370/client.crt: no such file or directory
E0717 01:50:44.231331   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kindnet-894370/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-391501 -n no-preload-391501: exit status 3 (18.56137627s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:50:57.755005   71266 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.174:22: connect: no route to host
	E0717 01:50:57.755025   71266 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.174:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-391501" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.04s)
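The stop failure above has a clear shape in the log: the kvm2 machine is asked to shut down, its state is polled once per second for 120 attempts, and the command then gives up with a temporary error while the VM still reports "Running", which surfaces as GUEST_STOP_TIMEOUT. The Go sketch below reproduces only that retry shape; stopWithTimeout, requestStop, and getState are hypothetical stand-ins, not minikube's libmachine driver code.

// Minimal sketch (assumed names, not minikube's actual driver code) of the
// retry pattern above: poll the VM state once per second for up to 120
// attempts, then give up with a temporary error if it is still "Running".
package main

import (
	"errors"
	"fmt"
	"time"
)

// requestStop and getState are hypothetical stand-ins for the driver calls
// that ask the VM to shut down and read its current state.
func requestStop() error { return nil }
func getState() string   { return "Running" }

func stopWithTimeout(attempts int) error {
	if err := requestStop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		if getState() == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	if err := stopWithTimeout(120); err != nil {
		fmt.Println("stop err:", err) // the GUEST_STOP_TIMEOUT path seen above
	}
}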

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-940222 -n embed-certs-940222
E0717 01:50:13.504307   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/custom-flannel-894370/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-940222 -n embed-certs-940222: exit status 3 (3.168115771s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:50:13.562927   70889 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.225:22: connect: no route to host
	E0717 01:50:13.562952   70889 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.225:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-940222 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0717 01:50:14.901124   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/enable-default-cni-894370/client.crt: no such file or directory
E0717 01:50:14.906427   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/enable-default-cni-894370/client.crt: no such file or directory
E0717 01:50:14.916739   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/enable-default-cni-894370/client.crt: no such file or directory
E0717 01:50:14.937065   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/enable-default-cni-894370/client.crt: no such file or directory
E0717 01:50:14.977355   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/enable-default-cni-894370/client.crt: no such file or directory
E0717 01:50:15.057693   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/enable-default-cni-894370/client.crt: no such file or directory
E0717 01:50:15.218458   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/enable-default-cni-894370/client.crt: no such file or directory
E0717 01:50:15.538562   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/enable-default-cni-894370/client.crt: no such file or directory
E0717 01:50:16.178710   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/enable-default-cni-894370/client.crt: no such file or directory
E0717 01:50:17.180139   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
E0717 01:50:17.459093   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/enable-default-cni-894370/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-940222 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152242764s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.225:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-940222 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-940222 -n embed-certs-940222
E0717 01:50:19.983488   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/client.crt: no such file or directory
E0717 01:50:20.020120   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/enable-default-cni-894370/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-940222 -n embed-certs-940222: exit status 3 (3.063299186s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:50:22.778978   70970 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.225:22: connect: no route to host
	E0717 01:50:22.779015   70970 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.225:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-940222" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
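The EnableAddonAfterStop failures in this report all start the same way: `status --format={{.Host}}` exits with status 3 and prints "Error" because SSH to the VM is unreachable ("no route to host"), while the test requires the literal "Stopped" after a stop. Below is a minimal sketch of that check, assuming a hypothetical hostStatus helper rather than the actual helpers_test.go code.

// Minimal sketch (hypothetical helper, not the actual test code) of the
// post-stop assertion that fails above: run `minikube status --format={{.Host}}`
// for the profile and require the host field to be exactly "Stopped".
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostStatus(binary, profile string) (string, error) {
	// A non-zero exit (e.g. exit status 3) is normal when the host is not
	// running, so the trimmed output is returned even when err != nil.
	out, err := exec.Command(binary, "status", "--format={{.Host}}", "-p", profile).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	got, _ := hostStatus("out/minikube-linux-amd64", "embed-certs-940222")
	if got != "Stopped" {
		fmt.Printf("expected post-stop host status %q but got %q\n", "Stopped", got)
	}
}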

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-901761 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-901761 create -f testdata/busybox.yaml: exit status 1 (40.686999ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-901761" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-901761 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-901761 -n old-k8s-version-901761
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-901761 -n old-k8s-version-901761: exit status 6 (213.043068ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:50:21.009144   71041 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-901761" does not appear in /home/jenkins/minikube-integration/19264-3908/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-901761" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-901761 -n old-k8s-version-901761
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-901761 -n old-k8s-version-901761: exit status 6 (212.011878ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:50:21.221681   71071 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-901761" does not appear in /home/jenkins/minikube-integration/19264-3908/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-901761" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)
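The DeployApp failure above never reaches the cluster: the "old-k8s-version-901761" context is missing from the kubeconfig (the status output warns that kubectl points at a stale minikube-vm), so `kubectl --context ... create` fails immediately. A minimal sketch of a pre-flight context check follows; contextExists is a hypothetical helper built on `kubectl config get-contexts -o name`, which prints one context name per line.

// Minimal sketch of a pre-flight check for the failure above: before running
// `kubectl --context <name> create -f ...`, confirm the named context exists
// in the active kubeconfig at all.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := contextExists("old-k8s-version-901761")
	fmt.Println(ok, err) // in the run above this would report false
}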

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (84.61s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-901761 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-901761 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m24.354978061s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-901761 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-901761 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-901761 describe deploy/metrics-server -n kube-system: exit status 1 (41.577387ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-901761" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-901761 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-901761 -n old-k8s-version-901761
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-901761 -n old-k8s-version-901761: exit status 6 (216.439574ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:51:45.833184   71799 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-901761" does not appear in /home/jenkins/minikube-integration/19264-3908/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-901761" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (84.61s)
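The assertion at the end of this block expects the metrics-server deployment to reference the overridden image fake.domain/registry.k8s.io/echoserver:1.4, verified by describing the deployment through the profile's context. The sketch below shows that style of check; deploymentMentionsImage is a hypothetical name, not the test's actual helper.

// Minimal sketch (hypothetical helper) of the image check above: describe the
// metrics-server deployment via the profile's kubectl context and require the
// output to mention the overridden addon image.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func deploymentMentionsImage(context, image string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", context,
		"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
	if err != nil {
		return false, fmt.Errorf("describe failed: %v: %s", err, out)
	}
	return strings.Contains(string(out), image), nil
}

func main() {
	ok, err := deploymentMentionsImage("old-k8s-version-901761",
		"fake.domain/registry.k8s.io/echoserver:1.4")
	fmt.Println(ok, err) // in the run above the describe itself fails: context missing
}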

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-738184 -n default-k8s-diff-port-738184
E0717 01:50:55.861898   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/enable-default-cni-894370/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-738184 -n default-k8s-diff-port-738184: exit status 3 (3.164021302s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:50:56.314901   71359 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host
	E0717 01:50:56.314924   71359 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-738184 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-738184 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152877806s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-738184 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-738184 -n default-k8s-diff-port-738184
E0717 01:51:03.458868   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/flannel-894370/client.crt: no such file or directory
E0717 01:51:03.464122   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/flannel-894370/client.crt: no such file or directory
E0717 01:51:03.474368   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/flannel-894370/client.crt: no such file or directory
E0717 01:51:03.494609   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/flannel-894370/client.crt: no such file or directory
E0717 01:51:03.534918   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/flannel-894370/client.crt: no such file or directory
E0717 01:51:03.615277   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/flannel-894370/client.crt: no such file or directory
E0717 01:51:03.775752   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/flannel-894370/client.crt: no such file or directory
E0717 01:51:04.096367   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/flannel-894370/client.crt: no such file or directory
E0717 01:51:04.737309   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/flannel-894370/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-738184 -n default-k8s-diff-port-738184: exit status 3 (3.062635775s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:51:05.530874   71492 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host
	E0717 01:51:05.530899   71492 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-738184" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-391501 -n no-preload-391501
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-391501 -n no-preload-391501: exit status 3 (3.16772147s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:51:00.922894   71428 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.174:22: connect: no route to host
	E0717 01:51:00.922923   71428 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.174:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-391501 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0717 01:51:00.944375   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-391501 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153367179s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.174:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-391501 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-391501 -n no-preload-391501
E0717 01:51:08.578192   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/flannel-894370/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-391501 -n no-preload-391501: exit status 3 (3.062564486s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 01:51:10.138968   71573 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.174:22: connect: no route to host
	E0717 01:51:10.138988   71573 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.174:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-391501" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
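All three EnableAddonAfterStop failures (embed-certs, default-k8s-diff-port, no-preload) trace back to the same condition: every status and addon call dies on `dial tcp <node-ip>:22: connect: no route to host`, i.e. the VM's SSH endpoint is unreachable after the timed-out stop. The sketch below only illustrates probing that endpoint from the test host, it is not part of the test itself; 192.168.61.174 is the no-preload node IP taken from the log above.

// Minimal sketch: probe the node's SSH port that the failing status/addon
// calls above could not reach.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	_, err := net.DialTimeout("tcp", "192.168.61.174:22", 3*time.Second)
	fmt.Println(err) // in the runs above: connect: no route to host
}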

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (742.71s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-901761 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0717 01:51:58.313017   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/bridge-894370/client.crt: no such file or directory
E0717 01:51:58.318253   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/bridge-894370/client.crt: no such file or directory
E0717 01:51:58.328460   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/bridge-894370/client.crt: no such file or directory
E0717 01:51:58.348721   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/bridge-894370/client.crt: no such file or directory
E0717 01:51:58.389052   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/bridge-894370/client.crt: no such file or directory
E0717 01:51:58.469375   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/bridge-894370/client.crt: no such file or directory
E0717 01:51:58.629810   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/bridge-894370/client.crt: no such file or directory
E0717 01:51:58.950391   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/bridge-894370/client.crt: no such file or directory
E0717 01:51:59.590608   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/bridge-894370/client.crt: no such file or directory
E0717 01:52:00.871583   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/bridge-894370/client.crt: no such file or directory
E0717 01:52:03.432042   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/bridge-894370/client.crt: no such file or directory
E0717 01:52:08.553026   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/bridge-894370/client.crt: no such file or directory
E0717 01:52:18.793229   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/bridge-894370/client.crt: no such file or directory
E0717 01:52:22.865273   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/client.crt: no such file or directory
E0717 01:52:25.381884   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/flannel-894370/client.crt: no such file or directory
E0717 01:52:39.273443   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/bridge-894370/client.crt: no such file or directory
E0717 01:52:47.107053   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/custom-flannel-894370/client.crt: no such file or directory
E0717 01:52:58.379951   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
E0717 01:52:58.743338   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/enable-default-cni-894370/client.crt: no such file or directory
E0717 01:52:59.045038   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/auto-894370/client.crt: no such file or directory
E0717 01:53:00.389637   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kindnet-894370/client.crt: no such file or directory
E0717 01:53:20.233988   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/bridge-894370/client.crt: no such file or directory
E0717 01:53:26.730970   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/auto-894370/client.crt: no such file or directory
E0717 01:53:28.071836   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kindnet-894370/client.crt: no such file or directory
E0717 01:53:47.302683   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/flannel-894370/client.crt: no such file or directory
E0717 01:54:39.021391   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/client.crt: no such file or directory
E0717 01:54:42.154734   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/bridge-894370/client.crt: no such file or directory
E0717 01:55:03.265276   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/custom-flannel-894370/client.crt: no such file or directory
E0717 01:55:06.706120   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/client.crt: no such file or directory
E0717 01:55:14.901192   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/enable-default-cni-894370/client.crt: no such file or directory
E0717 01:55:17.179704   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
E0717 01:55:30.948246   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/custom-flannel-894370/client.crt: no such file or directory
E0717 01:55:42.583514   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/enable-default-cni-894370/client.crt: no such file or directory
E0717 01:56:03.458893   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/flannel-894370/client.crt: no such file or directory
E0717 01:56:31.142985   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/flannel-894370/client.crt: no such file or directory
E0717 01:56:58.312874   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/bridge-894370/client.crt: no such file or directory
E0717 01:57:25.995821   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/bridge-894370/client.crt: no such file or directory
E0717 01:57:58.379245   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
E0717 01:57:59.045369   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/auto-894370/client.crt: no such file or directory
E0717 01:58:00.388645   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kindnet-894370/client.crt: no such file or directory
E0717 01:59:39.020909   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-901761 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m19.186936294s)

                                                
                                                
-- stdout --
	* [old-k8s-version-901761] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19264
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-901761" primary control-plane node in "old-k8s-version-901761" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-901761" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 01:51:47.395737   71929 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:51:47.396000   71929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:51:47.396010   71929 out.go:304] Setting ErrFile to fd 2...
	I0717 01:51:47.396016   71929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:51:47.396184   71929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 01:51:47.396684   71929 out.go:298] Setting JSON to false
	I0717 01:51:47.397549   71929 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5649,"bootTime":1721175458,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:51:47.397606   71929 start.go:139] virtualization: kvm guest
	I0717 01:51:47.399758   71929 out.go:177] * [old-k8s-version-901761] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:51:47.400960   71929 notify.go:220] Checking for updates...
	I0717 01:51:47.400966   71929 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 01:51:47.402266   71929 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:51:47.403356   71929 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:51:47.404532   71929 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 01:51:47.405524   71929 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:51:47.406572   71929 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:51:47.407935   71929 config.go:182] Loaded profile config "old-k8s-version-901761": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 01:51:47.408358   71929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:51:47.408427   71929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:51:47.422931   71929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46821
	I0717 01:51:47.423315   71929 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:51:47.423809   71929 main.go:141] libmachine: Using API Version  1
	I0717 01:51:47.423831   71929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:51:47.424123   71929 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:51:47.424259   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:51:47.426227   71929 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0717 01:51:47.427500   71929 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:51:47.427770   71929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:51:47.427801   71929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:51:47.442080   71929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36301
	I0717 01:51:47.442438   71929 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:51:47.442901   71929 main.go:141] libmachine: Using API Version  1
	I0717 01:51:47.442924   71929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:51:47.443208   71929 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:51:47.443382   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:51:47.476327   71929 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 01:51:47.477607   71929 start.go:297] selected driver: kvm2
	I0717 01:51:47.477620   71929 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:51:47.477762   71929 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:51:47.478432   71929 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:51:47.478541   71929 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19264-3908/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:51:47.493611   71929 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:51:47.493967   71929 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:51:47.494039   71929 cni.go:84] Creating CNI manager for ""
	I0717 01:51:47.494056   71929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:51:47.494147   71929 start.go:340] cluster config:
	{Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:51:47.494271   71929 iso.go:125] acquiring lock: {Name:mk1e382ea32906eee794c9b90164432d2562b456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:51:47.496056   71929 out.go:177] * Starting "old-k8s-version-901761" primary control-plane node in "old-k8s-version-901761" cluster
	I0717 01:51:47.497229   71929 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 01:51:47.497266   71929 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 01:51:47.497279   71929 cache.go:56] Caching tarball of preloaded images
	I0717 01:51:47.497368   71929 preload.go:172] Found /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 01:51:47.497379   71929 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0717 01:51:47.497484   71929 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/config.json ...
	I0717 01:51:47.497671   71929 start.go:360] acquireMachinesLock for old-k8s-version-901761: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:55:40.239538   71929 start.go:364] duration metric: took 3m52.741834986s to acquireMachinesLock for "old-k8s-version-901761"
	I0717 01:55:40.239610   71929 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:55:40.239618   71929 fix.go:54] fixHost starting: 
	I0717 01:55:40.240021   71929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:40.240054   71929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:40.257464   71929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39993
	I0717 01:55:40.257866   71929 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:40.258287   71929 main.go:141] libmachine: Using API Version  1
	I0717 01:55:40.258311   71929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:40.258672   71929 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:40.258871   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:40.259041   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetState
	I0717 01:55:40.260529   71929 fix.go:112] recreateIfNeeded on old-k8s-version-901761: state=Stopped err=<nil>
	I0717 01:55:40.260568   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	W0717 01:55:40.260721   71929 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:55:40.262590   71929 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-901761" ...
	I0717 01:55:40.263857   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .Start
	I0717 01:55:40.264019   71929 main.go:141] libmachine: (old-k8s-version-901761) Ensuring networks are active...
	I0717 01:55:40.264709   71929 main.go:141] libmachine: (old-k8s-version-901761) Ensuring network default is active
	I0717 01:55:40.265165   71929 main.go:141] libmachine: (old-k8s-version-901761) Ensuring network mk-old-k8s-version-901761 is active
	I0717 01:55:40.265581   71929 main.go:141] libmachine: (old-k8s-version-901761) Getting domain xml...
	I0717 01:55:40.266340   71929 main.go:141] libmachine: (old-k8s-version-901761) Creating domain...
	I0717 01:55:41.562582   71929 main.go:141] libmachine: (old-k8s-version-901761) Waiting to get IP...
	I0717 01:55:41.563329   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:41.563802   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:41.563890   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:41.563781   72905 retry.go:31] will retry after 216.264296ms: waiting for machine to come up
	I0717 01:55:41.781168   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:41.781662   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:41.781690   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:41.781629   72905 retry.go:31] will retry after 275.269814ms: waiting for machine to come up
	I0717 01:55:42.058127   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:42.058525   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:42.058564   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:42.058498   72905 retry.go:31] will retry after 348.024497ms: waiting for machine to come up
	I0717 01:55:42.407813   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:42.408311   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:42.408346   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:42.408218   72905 retry.go:31] will retry after 388.717436ms: waiting for machine to come up
	I0717 01:55:42.798810   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:42.799378   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:42.799411   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:42.799323   72905 retry.go:31] will retry after 661.391346ms: waiting for machine to come up
	I0717 01:55:43.462189   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:43.462654   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:43.462686   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:43.462603   72905 retry.go:31] will retry after 636.142497ms: waiting for machine to come up
	I0717 01:55:44.100416   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:44.100852   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:44.100874   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:44.100808   72905 retry.go:31] will retry after 781.652918ms: waiting for machine to come up
	I0717 01:55:44.883650   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:44.884137   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:44.884170   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:44.884088   72905 retry.go:31] will retry after 1.238608293s: waiting for machine to come up
	I0717 01:55:46.124419   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:46.124911   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:46.124942   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:46.124854   72905 retry.go:31] will retry after 1.169011508s: waiting for machine to come up
	I0717 01:55:47.295202   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:47.295679   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:47.295715   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:47.295632   72905 retry.go:31] will retry after 1.723987128s: waiting for machine to come up
	I0717 01:55:49.020883   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:49.021363   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:49.021396   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:49.021279   72905 retry.go:31] will retry after 2.098481296s: waiting for machine to come up
	I0717 01:55:51.121693   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:51.122253   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:51.122282   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:51.122192   72905 retry.go:31] will retry after 2.624839429s: waiting for machine to come up
	I0717 01:55:53.748796   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:53.749348   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:53.749390   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:53.749298   72905 retry.go:31] will retry after 3.47930356s: waiting for machine to come up
	I0717 01:55:57.231901   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.232407   71929 main.go:141] libmachine: (old-k8s-version-901761) Found IP for machine: 192.168.50.44
	I0717 01:55:57.232437   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has current primary IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.232449   71929 main.go:141] libmachine: (old-k8s-version-901761) Reserving static IP address...
	I0717 01:55:57.232880   71929 main.go:141] libmachine: (old-k8s-version-901761) Reserved static IP address: 192.168.50.44
	I0717 01:55:57.232928   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "old-k8s-version-901761", mac: "52:54:00:8f:84:01", ip: "192.168.50.44"} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.232937   71929 main.go:141] libmachine: (old-k8s-version-901761) Waiting for SSH to be available...
	I0717 01:55:57.232952   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | skip adding static IP to network mk-old-k8s-version-901761 - found existing host DHCP lease matching {name: "old-k8s-version-901761", mac: "52:54:00:8f:84:01", ip: "192.168.50.44"}
	I0717 01:55:57.232971   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | Getting to WaitForSSH function...
	I0717 01:55:57.235007   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.235208   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.235242   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.235421   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | Using SSH client type: external
	I0717 01:55:57.235461   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa (-rw-------)
	I0717 01:55:57.235502   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.44 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:55:57.235516   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | About to run SSH command:
	I0717 01:55:57.235530   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | exit 0
	I0717 01:55:57.362619   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | SSH cmd err, output: <nil>: 
	I0717 01:55:57.363106   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetConfigRaw
	I0717 01:55:57.363760   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:55:57.366213   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.366636   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.366666   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.366958   71929 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/config.json ...
	I0717 01:55:57.367165   71929 machine.go:94] provisionDockerMachine start ...
	I0717 01:55:57.367188   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:57.367392   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.370017   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.370354   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.370371   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.370577   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:57.370765   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.370935   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.371084   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:57.371325   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:57.371506   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:57.371518   71929 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:55:57.478893   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:55:57.478921   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetMachineName
	I0717 01:55:57.479123   71929 buildroot.go:166] provisioning hostname "old-k8s-version-901761"
	I0717 01:55:57.479142   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetMachineName
	I0717 01:55:57.479330   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.482163   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.482531   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.482579   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.482739   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:57.482937   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.483111   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.483264   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:57.483454   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:57.483632   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:57.483648   71929 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-901761 && echo "old-k8s-version-901761" | sudo tee /etc/hostname
	I0717 01:55:57.613409   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-901761
	
	I0717 01:55:57.613440   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.616228   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.616614   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.616655   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.616860   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:57.617040   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.617222   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.617383   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:57.617574   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:57.617778   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:57.617794   71929 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-901761' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-901761/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-901761' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:55:57.737648   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:55:57.737683   71929 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:55:57.737703   71929 buildroot.go:174] setting up certificates
	I0717 01:55:57.737711   71929 provision.go:84] configureAuth start
	I0717 01:55:57.737721   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetMachineName
	I0717 01:55:57.738028   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:55:57.741089   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.741532   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.741556   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.741741   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.744444   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.744917   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.744947   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.745111   71929 provision.go:143] copyHostCerts
	I0717 01:55:57.745185   71929 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:55:57.745202   71929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:55:57.745273   71929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:55:57.745393   71929 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:55:57.745405   71929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:55:57.745437   71929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:55:57.745517   71929 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:55:57.745527   71929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:55:57.745545   71929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:55:57.745602   71929 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-901761 san=[127.0.0.1 192.168.50.44 localhost minikube old-k8s-version-901761]
	I0717 01:55:57.830872   71929 provision.go:177] copyRemoteCerts
	I0717 01:55:57.830939   71929 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:55:57.830972   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.833463   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.833741   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.833777   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.833887   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:57.834083   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.834250   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:57.834403   71929 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:55:57.918346   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 01:55:57.954250   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:55:57.979770   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0717 01:55:58.005161   71929 provision.go:87] duration metric: took 267.436975ms to configureAuth
	I0717 01:55:58.005193   71929 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:55:58.005412   71929 config.go:182] Loaded profile config "old-k8s-version-901761": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 01:55:58.005493   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.008255   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.008626   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.008663   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.008833   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.009006   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.009170   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.009298   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.009464   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:58.009616   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:58.009639   71929 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:55:58.281081   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:55:58.281112   71929 machine.go:97] duration metric: took 913.933405ms to provisionDockerMachine
	I0717 01:55:58.281121   71929 start.go:293] postStartSetup for "old-k8s-version-901761" (driver="kvm2")
	I0717 01:55:58.281130   71929 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:55:58.281144   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.281497   71929 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:55:58.281533   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.284465   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.284812   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.284840   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.285023   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.285207   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.285441   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.285650   71929 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:55:58.377149   71929 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:55:58.381709   71929 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:55:58.381731   71929 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:55:58.381798   71929 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:55:58.381887   71929 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:55:58.381972   71929 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:55:58.392916   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:58.420677   71929 start.go:296] duration metric: took 139.542186ms for postStartSetup
	I0717 01:55:58.420721   71929 fix.go:56] duration metric: took 18.181102939s for fixHost
	I0717 01:55:58.420745   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.423582   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.423961   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.423989   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.424169   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.424372   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.424557   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.424693   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.424859   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:58.425040   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:58.425053   71929 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 01:55:58.531563   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181358.508735025
	
	I0717 01:55:58.531585   71929 fix.go:216] guest clock: 1721181358.508735025
	I0717 01:55:58.531594   71929 fix.go:229] Guest: 2024-07-17 01:55:58.508735025 +0000 UTC Remote: 2024-07-17 01:55:58.420726806 +0000 UTC m=+251.057483904 (delta=88.008219ms)
	I0717 01:55:58.531617   71929 fix.go:200] guest clock delta is within tolerance: 88.008219ms
	I0717 01:55:58.531624   71929 start.go:83] releasing machines lock for "old-k8s-version-901761", held for 18.292040224s
	I0717 01:55:58.531655   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.531981   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:55:58.534476   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.534967   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.534996   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.535258   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.535802   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.535990   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.536105   71929 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:55:58.536183   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.536244   71929 ssh_runner.go:195] Run: cat /version.json
	I0717 01:55:58.536275   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.539139   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.539401   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.539534   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.539560   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.539768   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.539815   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.539845   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.539968   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.540000   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.540116   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.540142   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.540243   71929 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:55:58.540332   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.540468   71929 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:55:58.628291   71929 ssh_runner.go:195] Run: systemctl --version
	I0717 01:55:58.656964   71929 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:55:58.806516   71929 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:55:58.815051   71929 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:55:58.815113   71929 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:55:58.838575   71929 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:55:58.838596   71929 start.go:495] detecting cgroup driver to use...
	I0717 01:55:58.838662   71929 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:55:58.855728   71929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:55:58.875221   71929 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:55:58.875285   71929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:55:58.889781   71929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:55:58.903832   71929 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:55:59.026815   71929 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:55:59.173879   71929 docker.go:233] disabling docker service ...
	I0717 01:55:59.173964   71929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:55:59.192906   71929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:55:59.208262   71929 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:55:59.368178   71929 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:55:59.500335   71929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:55:59.514795   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:55:59.535553   71929 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 01:55:59.535631   71929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:59.548304   71929 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:55:59.548376   71929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:59.563066   71929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:59.578452   71929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:59.593447   71929 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:55:59.606239   71929 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:55:59.617051   71929 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:55:59.617118   71929 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:55:59.632601   71929 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:55:59.645034   71929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:59.812343   71929 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:55:59.969366   71929 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:55:59.969444   71929 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:55:59.974286   71929 start.go:563] Will wait 60s for crictl version
	I0717 01:55:59.974335   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:55:59.978280   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:56:00.020399   71929 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:56:00.020489   71929 ssh_runner.go:195] Run: crio --version
	I0717 01:56:00.049811   71929 ssh_runner.go:195] Run: crio --version
	I0717 01:56:00.081952   71929 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0717 01:56:00.083384   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:56:00.086085   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:56:00.086454   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:56:00.086494   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:56:00.086710   71929 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 01:56:00.091322   71929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:56:00.104102   71929 kubeadm.go:883] updating cluster {Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:56:00.104237   71929 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 01:56:00.104309   71929 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:56:00.152445   71929 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 01:56:00.152537   71929 ssh_runner.go:195] Run: which lz4
	I0717 01:56:00.156760   71929 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 01:56:00.161123   71929 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:56:00.161149   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 01:56:02.031804   71929 crio.go:462] duration metric: took 1.875087246s to copy over tarball
	I0717 01:56:02.031904   71929 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:56:05.092637   71929 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.060698331s)
	I0717 01:56:05.092674   71929 crio.go:469] duration metric: took 3.060839356s to extract the tarball
	I0717 01:56:05.092682   71929 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 01:56:05.135461   71929 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:56:05.170789   71929 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 01:56:05.170814   71929 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 01:56:05.170853   71929 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:56:05.170884   71929 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.170908   71929 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.170961   71929 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 01:56:05.171077   71929 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.171126   71929 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.171138   71929 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.171462   71929 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.172182   71929 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 01:56:05.172224   71929 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.172251   71929 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:56:05.172296   71929 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.172362   71929 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.172415   71929 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.172449   71929 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.172251   71929 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.372794   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.415131   71929 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 01:56:05.415181   71929 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.415231   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.419179   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.446530   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 01:56:05.452583   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 01:56:05.485692   71929 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 01:56:05.485734   71929 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 01:56:05.485780   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.486154   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.487346   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.489408   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.490486   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 01:56:05.494929   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.499420   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.593505   71929 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 01:56:05.593587   71929 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.593638   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.632564   71929 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 01:56:05.632615   71929 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.632667   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.657745   71929 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 01:56:05.657792   71929 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.657852   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.657863   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 01:56:05.657908   71929 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 01:56:05.657943   71929 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.657958   71929 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 01:56:05.657976   71929 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.657980   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.658004   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.658037   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.658077   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.671679   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.671708   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.736572   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 01:56:05.736599   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 01:56:05.736671   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.758178   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 01:56:05.758210   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 01:56:05.787948   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 01:56:06.882199   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:56:07.025117   71929 cache_images.go:92] duration metric: took 1.854284265s to LoadCachedImages
	W0717 01:56:07.025227   71929 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0717 01:56:07.025245   71929 kubeadm.go:934] updating node { 192.168.50.44 8443 v1.20.0 crio true true} ...
	I0717 01:56:07.025378   71929 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-901761 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.44
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:56:07.025465   71929 ssh_runner.go:195] Run: crio config
	I0717 01:56:07.081517   71929 cni.go:84] Creating CNI manager for ""
	I0717 01:56:07.081543   71929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:56:07.081560   71929 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:56:07.081584   71929 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.44 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-901761 NodeName:old-k8s-version-901761 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.44"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.44 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 01:56:07.081749   71929 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.44
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-901761"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.44
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.44"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:56:07.081833   71929 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 01:56:07.092233   71929 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:56:07.092335   71929 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:56:07.102086   71929 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0717 01:56:07.121538   71929 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:56:07.139112   71929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0717 01:56:07.157397   71929 ssh_runner.go:195] Run: grep 192.168.50.44	control-plane.minikube.internal$ /etc/hosts
	I0717 01:56:07.161818   71929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.44	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:56:07.174723   71929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:56:07.307484   71929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:56:07.325948   71929 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761 for IP: 192.168.50.44
	I0717 01:56:07.325974   71929 certs.go:194] generating shared ca certs ...
	I0717 01:56:07.326002   71929 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:07.326164   71929 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:56:07.326216   71929 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:56:07.326229   71929 certs.go:256] generating profile certs ...
	I0717 01:56:07.326351   71929 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/client.key
	I0717 01:56:07.326416   71929 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.key.f41162e5
	I0717 01:56:07.326461   71929 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/proxy-client.key
	I0717 01:56:07.326630   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:56:07.326668   71929 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:56:07.326681   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:56:07.326700   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:56:07.326724   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:56:07.326767   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:56:07.326828   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:56:07.327702   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:56:07.377671   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:56:07.413171   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:56:07.443671   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:56:07.482883   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 01:56:07.527280   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:56:07.571200   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:56:07.612296   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 01:56:07.638012   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:56:07.662018   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:56:07.688033   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:56:07.721827   71929 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:56:07.741517   71929 ssh_runner.go:195] Run: openssl version
	I0717 01:56:07.747466   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:56:07.758615   71929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:56:07.763382   71929 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:56:07.763439   71929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:56:07.769358   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:56:07.781802   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:56:07.792763   71929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:56:07.797629   71929 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:56:07.797681   71929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:56:07.803879   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:56:07.815479   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:56:07.828292   71929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:07.832769   71929 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:07.832829   71929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:07.838958   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:56:07.850108   71929 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:56:07.854758   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:56:07.860661   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:56:07.866484   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:56:07.872302   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:56:07.878252   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:56:07.884275   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
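Note: the `openssl x509 ... -checkend 86400` runs above verify that each control-plane certificate is still valid for at least another 24 hours. An equivalent check using only Go's standard library is sketched below; the certificate path in main is taken from the log as an example.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        // Same 86400-second (24h) window used by the -checkend calls in the log.
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }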
	I0717 01:56:07.890148   71929 kubeadm.go:392] StartCluster: {Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:56:07.890264   71929 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:56:07.890343   71929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:56:07.930081   71929 cri.go:89] found id: ""
	I0717 01:56:07.930153   71929 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:56:07.941371   71929 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:56:07.941396   71929 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:56:07.941445   71929 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:56:07.955229   71929 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:56:07.957263   71929 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-901761" does not appear in /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:56:07.959002   71929 kubeconfig.go:62] /home/jenkins/minikube-integration/19264-3908/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-901761" cluster setting kubeconfig missing "old-k8s-version-901761" context setting]
	I0717 01:56:07.960384   71929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:07.962748   71929 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:56:07.973815   71929 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.44
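Note: the `diff -u kubeadm.yaml kubeadm.yaml.new` run above is interpreted purely by its exit status (exit 0 means the rendered config is unchanged, so the cluster "does not require reconfiguration"). A minimal sketch of that decision, assuming standard diff exit-code semantics (0 = identical, 1 = different, >1 = error), follows; paths and names are illustrative.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // needsReconfig returns true when the two kubeadm configs differ.
    func needsReconfig(oldPath, newPath string) (bool, error) {
        err := exec.Command("sudo", "diff", "-u", oldPath, newPath).Run()
        if err == nil {
            return false, nil // identical: no reconfiguration needed
        }
        if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
            return true, nil // files differ
        }
        return false, err // diff itself failed (missing file, etc.)
    }

    func main() {
        changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println(changed, err)
    }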
	I0717 01:56:07.973851   71929 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:56:07.973864   71929 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:56:07.973933   71929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:56:08.020169   71929 cri.go:89] found id: ""
	I0717 01:56:08.020247   71929 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:56:08.038015   71929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:56:08.049272   71929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:56:08.049294   71929 kubeadm.go:157] found existing configuration files:
	
	I0717 01:56:08.049336   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:56:08.058953   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:56:08.059025   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:56:08.069034   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:56:08.078748   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:56:08.078817   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:56:08.089660   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:56:08.099521   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:56:08.099583   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:56:08.109831   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:56:08.120340   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:56:08.120400   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:56:08.130884   71929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:56:08.141008   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:08.275189   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:09.006841   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:09.255401   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:09.376659   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:09.475840   71929 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:56:09.475937   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:09.976926   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:10.476192   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:10.976705   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:11.476386   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:11.976459   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:12.476819   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:12.976633   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:13.476076   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:13.976279   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:14.476885   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:14.976972   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:15.476823   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:15.976917   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:16.476765   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:16.976609   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:17.476562   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:17.976663   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:18.476958   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:18.976722   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:19.476641   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:19.976079   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:20.476899   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:20.976553   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:21.476087   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:21.976659   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:22.475994   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:22.976928   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:23.476906   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:23.975980   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:24.476208   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:24.976090   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:25.476425   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:25.976072   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:26.476991   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:26.976180   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:27.476313   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:27.976700   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:28.476585   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:28.976008   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:29.477040   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:29.976892   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:30.476912   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:30.976626   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:31.476786   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:31.976148   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:32.475986   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:32.976812   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:33.476601   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:33.976667   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:34.476897   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:34.976610   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:35.476444   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:35.976859   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:36.476092   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:36.976979   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:37.476813   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:37.976779   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:38.476554   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:38.976791   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:39.476946   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:39.976044   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:40.476526   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:40.976315   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:41.476688   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:41.976203   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:42.476301   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:42.976939   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:43.477021   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:43.976910   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:44.476766   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:44.976415   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:45.476987   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:45.976666   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:46.476735   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:46.976643   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:47.476576   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:47.976502   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:48.476634   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:48.976299   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:49.476069   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:49.976086   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:50.476859   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:50.976441   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:51.476217   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:51.976585   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:52.476652   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:52.976136   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:53.476991   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:53.976168   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:54.477049   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:54.976279   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:55.476176   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:55.976049   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:56.476464   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:56.976802   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:57.476661   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:57.976021   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:58.477049   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:58.976940   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:59.476773   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:59.976397   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:00.476591   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:00.976189   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:01.476917   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:01.976263   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:02.476048   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:02.976019   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:03.476604   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:03.976602   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:04.477004   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:04.976726   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:05.476934   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:05.975985   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:06.476331   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:06.976185   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:07.476887   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:07.975972   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:08.476034   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:08.976678   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:09.476927   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:09.477010   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:09.513328   71929 cri.go:89] found id: ""
	I0717 01:57:09.513352   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.513361   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:09.513368   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:09.513418   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:09.551203   71929 cri.go:89] found id: ""
	I0717 01:57:09.551228   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.551237   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:09.551244   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:09.551308   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:09.585321   71929 cri.go:89] found id: ""
	I0717 01:57:09.585352   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.585363   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:09.585370   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:09.585427   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:09.623977   71929 cri.go:89] found id: ""
	I0717 01:57:09.624004   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.624012   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:09.624019   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:09.624078   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:09.663338   71929 cri.go:89] found id: ""
	I0717 01:57:09.663367   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.663374   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:09.663380   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:09.663425   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:09.696381   71929 cri.go:89] found id: ""
	I0717 01:57:09.696412   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.696423   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:09.696436   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:09.696482   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:09.735892   71929 cri.go:89] found id: ""
	I0717 01:57:09.735922   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.735932   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:09.735944   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:09.736006   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:09.775878   71929 cri.go:89] found id: ""
	I0717 01:57:09.775909   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.775919   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:09.775929   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:09.775942   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:09.830021   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:09.830057   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:09.844753   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:09.844783   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:09.985140   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:09.985165   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:09.985179   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:10.049946   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:10.049984   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
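Note: the repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines above are a fixed-interval wait for the apiserver process to appear, interleaved with log gathering when it does not. A minimal sketch of such a poll loop is given below; the 500ms cadence matches the log timestamps, while the overall timeout is an illustrative assumption rather than minikube's actual value.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls pgrep until a kube-apiserver process appears or the timeout elapses.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // -f matches the full command line, -x requires an exact pattern match, -n picks the newest process.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil // apiserver process found
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        fmt.Println(waitForAPIServer(2 * time.Minute))
    }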
	I0717 01:57:12.592959   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:12.608385   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:12.608467   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:12.649900   71929 cri.go:89] found id: ""
	I0717 01:57:12.649931   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.649942   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:12.649950   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:12.650021   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:12.684915   71929 cri.go:89] found id: ""
	I0717 01:57:12.684941   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.684948   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:12.684956   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:12.685010   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:12.727718   71929 cri.go:89] found id: ""
	I0717 01:57:12.727758   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.727766   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:12.727788   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:12.727864   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:12.767212   71929 cri.go:89] found id: ""
	I0717 01:57:12.767236   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.767244   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:12.767249   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:12.767295   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:12.806301   71929 cri.go:89] found id: ""
	I0717 01:57:12.806320   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.806327   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:12.806332   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:12.806405   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:12.843118   71929 cri.go:89] found id: ""
	I0717 01:57:12.843151   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.843162   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:12.843170   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:12.843245   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:12.876671   71929 cri.go:89] found id: ""
	I0717 01:57:12.876697   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.876707   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:12.876714   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:12.876790   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:12.916201   71929 cri.go:89] found id: ""
	I0717 01:57:12.916226   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.916232   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:12.916240   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:12.916250   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:12.970346   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:12.970385   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:12.985029   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:12.985053   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:13.068314   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:13.068340   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:13.068352   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:13.147862   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:13.147897   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:15.703130   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:15.717081   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:15.717160   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:15.757513   71929 cri.go:89] found id: ""
	I0717 01:57:15.757538   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.757545   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:15.757552   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:15.757599   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:15.794185   71929 cri.go:89] found id: ""
	I0717 01:57:15.794218   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.794231   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:15.794238   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:15.794300   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:15.830589   71929 cri.go:89] found id: ""
	I0717 01:57:15.830619   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.830628   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:15.830634   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:15.830694   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:15.869673   71929 cri.go:89] found id: ""
	I0717 01:57:15.869702   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.869713   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:15.869720   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:15.869782   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:15.909225   71929 cri.go:89] found id: ""
	I0717 01:57:15.909257   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.909267   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:15.909278   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:15.909343   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:15.944389   71929 cri.go:89] found id: ""
	I0717 01:57:15.944417   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.944424   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:15.944430   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:15.944490   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:15.982871   71929 cri.go:89] found id: ""
	I0717 01:57:15.982898   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.982907   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:15.982915   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:15.982983   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:16.025674   71929 cri.go:89] found id: ""
	I0717 01:57:16.025701   71929 logs.go:276] 0 containers: []
	W0717 01:57:16.025711   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:16.025721   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:16.025736   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:16.111608   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:16.111627   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:16.111638   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:16.184650   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:16.184689   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:16.230647   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:16.230693   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:16.286675   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:16.286710   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
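	The loop above keeps probing the node for control-plane containers and then re-collecting logs; nothing is ever found because the apiserver never came up. A minimal way to run the same checks by hand, assuming SSH access to the minikube guest (these are the same commands the retry loop issues, copied from the log above, not a new procedure):
	
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'      # is any apiserver process running at all?
	    sudo crictl ps -a --quiet --name=kube-apiserver   # was an apiserver container ever created (running or exited)?
	    sudo journalctl -u kubelet -n 400                 # kubelet logs usually say why the static pods never started
	    sudo journalctl -u crio -n 400                    # CRI-O logs for image pull / runtime errors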
	I0717 01:57:18.802487   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:18.817483   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:18.817562   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:18.861623   71929 cri.go:89] found id: ""
	I0717 01:57:18.861653   71929 logs.go:276] 0 containers: []
	W0717 01:57:18.861664   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:18.861671   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:18.861733   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:18.901335   71929 cri.go:89] found id: ""
	I0717 01:57:18.901359   71929 logs.go:276] 0 containers: []
	W0717 01:57:18.901367   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:18.901372   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:18.901427   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:18.936477   71929 cri.go:89] found id: ""
	I0717 01:57:18.936508   71929 logs.go:276] 0 containers: []
	W0717 01:57:18.936518   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:18.936524   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:18.936581   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:18.971056   71929 cri.go:89] found id: ""
	I0717 01:57:18.971087   71929 logs.go:276] 0 containers: []
	W0717 01:57:18.971098   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:18.971106   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:18.971157   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:19.005399   71929 cri.go:89] found id: ""
	I0717 01:57:19.005431   71929 logs.go:276] 0 containers: []
	W0717 01:57:19.005453   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:19.005460   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:19.005525   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:19.040218   71929 cri.go:89] found id: ""
	I0717 01:57:19.040242   71929 logs.go:276] 0 containers: []
	W0717 01:57:19.040250   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:19.040257   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:19.040317   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:19.073365   71929 cri.go:89] found id: ""
	I0717 01:57:19.073392   71929 logs.go:276] 0 containers: []
	W0717 01:57:19.073402   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:19.073409   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:19.073471   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:19.108670   71929 cri.go:89] found id: ""
	I0717 01:57:19.108701   71929 logs.go:276] 0 containers: []
	W0717 01:57:19.108713   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:19.108725   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:19.108743   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:19.186077   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:19.186111   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:19.232181   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:19.232214   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:19.288713   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:19.288755   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:19.303089   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:19.303115   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:19.386372   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:21.886666   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:21.900905   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:21.900966   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:21.934955   71929 cri.go:89] found id: ""
	I0717 01:57:21.934979   71929 logs.go:276] 0 containers: []
	W0717 01:57:21.934987   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:21.934993   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:21.935036   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:21.972180   71929 cri.go:89] found id: ""
	I0717 01:57:21.972203   71929 logs.go:276] 0 containers: []
	W0717 01:57:21.972211   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:21.972217   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:21.972271   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:22.010452   71929 cri.go:89] found id: ""
	I0717 01:57:22.010479   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.010487   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:22.010493   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:22.010547   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:22.045824   71929 cri.go:89] found id: ""
	I0717 01:57:22.045888   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.045902   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:22.045911   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:22.045984   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:22.084734   71929 cri.go:89] found id: ""
	I0717 01:57:22.084760   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.084769   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:22.084774   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:22.084842   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:22.119808   71929 cri.go:89] found id: ""
	I0717 01:57:22.119838   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.119846   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:22.119852   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:22.119910   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:22.157537   71929 cri.go:89] found id: ""
	I0717 01:57:22.157583   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.157610   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:22.157620   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:22.157687   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:22.196021   71929 cri.go:89] found id: ""
	I0717 01:57:22.196052   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.196062   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:22.196079   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:22.196094   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:22.274350   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:22.274373   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:22.274386   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:22.364363   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:22.364401   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:22.410052   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:22.410092   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:22.462289   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:22.462326   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:24.978560   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:24.992533   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:24.992601   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:25.027708   71929 cri.go:89] found id: ""
	I0717 01:57:25.027746   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.027754   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:25.027760   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:25.027809   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:25.066946   71929 cri.go:89] found id: ""
	I0717 01:57:25.066974   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.066985   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:25.066992   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:25.067051   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:25.107209   71929 cri.go:89] found id: ""
	I0717 01:57:25.107238   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.107248   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:25.107254   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:25.107300   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:25.141548   71929 cri.go:89] found id: ""
	I0717 01:57:25.141577   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.141587   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:25.141594   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:25.141652   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:25.175822   71929 cri.go:89] found id: ""
	I0717 01:57:25.175853   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.175861   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:25.175866   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:25.175917   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:25.215672   71929 cri.go:89] found id: ""
	I0717 01:57:25.215705   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.215718   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:25.215726   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:25.215786   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:25.260392   71929 cri.go:89] found id: ""
	I0717 01:57:25.260422   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.260434   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:25.260442   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:25.260510   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:25.309953   71929 cri.go:89] found id: ""
	I0717 01:57:25.309981   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.309990   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:25.309999   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:25.310013   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:25.414204   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:25.414229   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:25.414244   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:25.501849   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:25.501883   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:25.545129   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:25.545163   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:25.599948   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:25.599984   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
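	Every "describe nodes" attempt above fails with "The connection to the server localhost:8443 was refused", i.e. nothing is listening on the apiserver port yet. A quick sketch of how to confirm that from inside the node (ss and curl are assumed to be available in the guest image; they do not appear in the log itself):
	
	    sudo ss -tlnp | grep 8443                 # is anything bound to the apiserver port?
	    curl -sk https://localhost:8443/healthz   # a healthy apiserver answers "ok"
	    sudo crictl ps -a                         # list every container CRI-O has ever created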
	I0717 01:57:28.115776   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:28.129710   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:28.129776   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:28.165380   71929 cri.go:89] found id: ""
	I0717 01:57:28.165409   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.165419   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:28.165425   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:28.165473   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:28.199225   71929 cri.go:89] found id: ""
	I0717 01:57:28.199251   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.199259   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:28.199264   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:28.199314   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:28.235564   71929 cri.go:89] found id: ""
	I0717 01:57:28.235585   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.235593   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:28.235598   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:28.235649   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:28.270377   71929 cri.go:89] found id: ""
	I0717 01:57:28.270409   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.270427   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:28.270435   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:28.270488   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:28.310132   71929 cri.go:89] found id: ""
	I0717 01:57:28.310156   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.310163   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:28.310168   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:28.310222   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:28.347590   71929 cri.go:89] found id: ""
	I0717 01:57:28.347619   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.347630   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:28.347638   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:28.347696   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:28.387953   71929 cri.go:89] found id: ""
	I0717 01:57:28.387988   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.388001   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:28.388010   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:28.388072   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:28.428788   71929 cri.go:89] found id: ""
	I0717 01:57:28.428811   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.428818   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:28.428826   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:28.428838   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:28.487411   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:28.487465   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:28.501121   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:28.501152   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:28.576296   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:28.576320   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:28.576335   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:28.660246   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:28.660288   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:31.201238   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:31.221132   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:31.221192   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:31.279839   71929 cri.go:89] found id: ""
	I0717 01:57:31.279867   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.279876   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:31.279884   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:31.279943   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:31.359764   71929 cri.go:89] found id: ""
	I0717 01:57:31.359796   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.359807   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:31.359814   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:31.359873   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:31.397045   71929 cri.go:89] found id: ""
	I0717 01:57:31.397077   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.397087   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:31.397094   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:31.397157   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:31.441356   71929 cri.go:89] found id: ""
	I0717 01:57:31.441388   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.441397   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:31.441404   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:31.441459   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:31.484014   71929 cri.go:89] found id: ""
	I0717 01:57:31.484040   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.484053   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:31.484060   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:31.484124   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:31.520686   71929 cri.go:89] found id: ""
	I0717 01:57:31.520714   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.520725   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:31.520733   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:31.520792   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:31.557300   71929 cri.go:89] found id: ""
	I0717 01:57:31.557326   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.557334   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:31.557339   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:31.557387   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:31.597753   71929 cri.go:89] found id: ""
	I0717 01:57:31.597782   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.597792   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:31.597804   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:31.597818   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:31.656796   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:31.656837   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:31.671287   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:31.671311   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:31.742752   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:31.742772   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:31.742784   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:31.828154   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:31.828186   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:34.368947   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:34.384323   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:34.384402   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:34.421138   71929 cri.go:89] found id: ""
	I0717 01:57:34.421171   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.421182   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:34.421190   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:34.421263   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:34.459077   71929 cri.go:89] found id: ""
	I0717 01:57:34.459105   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.459116   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:34.459123   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:34.459180   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:34.492987   71929 cri.go:89] found id: ""
	I0717 01:57:34.493016   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.493027   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:34.493038   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:34.493098   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:34.527801   71929 cri.go:89] found id: ""
	I0717 01:57:34.527827   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.527836   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:34.527841   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:34.527890   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:34.562877   71929 cri.go:89] found id: ""
	I0717 01:57:34.562904   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.562914   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:34.562921   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:34.562981   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:34.599387   71929 cri.go:89] found id: ""
	I0717 01:57:34.599409   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.599417   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:34.599423   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:34.599479   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:34.636087   71929 cri.go:89] found id: ""
	I0717 01:57:34.636118   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.636126   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:34.636132   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:34.636194   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:34.673168   71929 cri.go:89] found id: ""
	I0717 01:57:34.673196   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.673206   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:34.673214   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:34.673226   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:34.712833   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:34.712864   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:34.765926   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:34.765959   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:34.780024   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:34.780049   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:34.863080   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:34.863106   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:34.863122   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:37.446644   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:37.463015   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:37.463090   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:37.499563   71929 cri.go:89] found id: ""
	I0717 01:57:37.499592   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.499601   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:37.499607   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:37.499663   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:37.538516   71929 cri.go:89] found id: ""
	I0717 01:57:37.538543   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.538572   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:37.538579   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:37.538638   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:37.577032   71929 cri.go:89] found id: ""
	I0717 01:57:37.577061   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.577068   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:37.577074   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:37.577129   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:37.613534   71929 cri.go:89] found id: ""
	I0717 01:57:37.613563   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.613574   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:37.613582   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:37.613646   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:37.651346   71929 cri.go:89] found id: ""
	I0717 01:57:37.651370   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.651381   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:37.651389   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:37.651451   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:37.685949   71929 cri.go:89] found id: ""
	I0717 01:57:37.685989   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.686001   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:37.686008   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:37.686068   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:37.721706   71929 cri.go:89] found id: ""
	I0717 01:57:37.721744   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.721752   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:37.721759   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:37.721812   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:37.758948   71929 cri.go:89] found id: ""
	I0717 01:57:37.758976   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.758985   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:37.758994   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:37.759005   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:37.835305   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:37.835334   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:37.835349   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:37.916627   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:37.916660   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:37.956819   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:37.956851   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:38.007596   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:38.007641   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:40.522573   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:40.536850   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:40.536924   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:40.576172   71929 cri.go:89] found id: ""
	I0717 01:57:40.576200   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.576211   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:40.576218   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:40.576277   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:40.611926   71929 cri.go:89] found id: ""
	I0717 01:57:40.611958   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.611969   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:40.611976   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:40.612039   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:40.647225   71929 cri.go:89] found id: ""
	I0717 01:57:40.647251   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.647259   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:40.647265   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:40.647315   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:40.683871   71929 cri.go:89] found id: ""
	I0717 01:57:40.683902   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.683917   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:40.683925   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:40.683999   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:40.720941   71929 cri.go:89] found id: ""
	I0717 01:57:40.720971   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.720982   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:40.720989   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:40.721053   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:40.756695   71929 cri.go:89] found id: ""
	I0717 01:57:40.756728   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.756739   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:40.756746   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:40.756801   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:40.794181   71929 cri.go:89] found id: ""
	I0717 01:57:40.794214   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.794221   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:40.794226   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:40.794281   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:40.830361   71929 cri.go:89] found id: ""
	I0717 01:57:40.830396   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.830407   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:40.830417   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:40.830436   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:40.844827   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:40.844849   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:40.913003   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:40.913021   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:40.913035   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:40.996314   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:40.996348   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:41.041120   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:41.041151   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:43.593226   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:43.606395   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:43.606461   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:43.646260   71929 cri.go:89] found id: ""
	I0717 01:57:43.646290   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.646302   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:43.646310   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:43.646368   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:43.681148   71929 cri.go:89] found id: ""
	I0717 01:57:43.681174   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.681182   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:43.681189   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:43.681250   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:43.716568   71929 cri.go:89] found id: ""
	I0717 01:57:43.716595   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.716606   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:43.716613   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:43.716675   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:43.750507   71929 cri.go:89] found id: ""
	I0717 01:57:43.750536   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.750558   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:43.750566   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:43.750627   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:43.787207   71929 cri.go:89] found id: ""
	I0717 01:57:43.787234   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.787244   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:43.787251   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:43.787311   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:43.822997   71929 cri.go:89] found id: ""
	I0717 01:57:43.823034   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.823045   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:43.823052   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:43.823118   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:43.860605   71929 cri.go:89] found id: ""
	I0717 01:57:43.860632   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.860640   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:43.860646   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:43.860702   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:43.897419   71929 cri.go:89] found id: ""
	I0717 01:57:43.897451   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.897463   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:43.897473   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:43.897492   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:43.956361   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:43.956393   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:43.971077   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:43.971104   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:44.045234   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:44.045258   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:44.045275   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:44.122508   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:44.122544   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:46.660516   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:46.675555   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:46.675651   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:46.709264   71929 cri.go:89] found id: ""
	I0717 01:57:46.709291   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.709300   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:46.709306   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:46.709362   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:46.744865   71929 cri.go:89] found id: ""
	I0717 01:57:46.744898   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.744908   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:46.744915   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:46.744971   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:46.785837   71929 cri.go:89] found id: ""
	I0717 01:57:46.785860   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.785870   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:46.785878   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:46.785932   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:46.828801   71929 cri.go:89] found id: ""
	I0717 01:57:46.828832   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.828842   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:46.828849   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:46.828907   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:46.863122   71929 cri.go:89] found id: ""
	I0717 01:57:46.863151   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.863162   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:46.863175   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:46.863232   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:46.900705   71929 cri.go:89] found id: ""
	I0717 01:57:46.900731   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.900739   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:46.900744   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:46.900790   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:46.935774   71929 cri.go:89] found id: ""
	I0717 01:57:46.935816   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.935829   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:46.935840   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:46.935895   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:46.969274   71929 cri.go:89] found id: ""
	I0717 01:57:46.969304   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.969315   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:46.969325   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:46.969339   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:47.040318   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:47.040343   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:47.040358   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:47.119920   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:47.119954   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:47.168818   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:47.168847   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:47.221983   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:47.222034   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:49.736564   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:49.749966   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:49.750025   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:49.788294   71929 cri.go:89] found id: ""
	I0717 01:57:49.788321   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.788332   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:49.788339   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:49.788396   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:49.826406   71929 cri.go:89] found id: ""
	I0717 01:57:49.826431   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.826440   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:49.826445   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:49.826491   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:49.864978   71929 cri.go:89] found id: ""
	I0717 01:57:49.865005   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.865015   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:49.865020   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:49.865074   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:49.901238   71929 cri.go:89] found id: ""
	I0717 01:57:49.901270   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.901281   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:49.901300   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:49.901366   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:49.937035   71929 cri.go:89] found id: ""
	I0717 01:57:49.937058   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.937065   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:49.937070   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:49.937207   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:49.977793   71929 cri.go:89] found id: ""
	I0717 01:57:49.977816   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.977823   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:49.977828   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:49.977873   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:50.012915   71929 cri.go:89] found id: ""
	I0717 01:57:50.012942   71929 logs.go:276] 0 containers: []
	W0717 01:57:50.012952   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:50.012959   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:50.013025   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:50.049085   71929 cri.go:89] found id: ""
	I0717 01:57:50.049115   71929 logs.go:276] 0 containers: []
	W0717 01:57:50.049127   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:50.049138   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:50.049156   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:50.087521   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:50.087549   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:50.140934   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:50.140978   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:50.156001   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:50.156033   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:50.231780   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:50.231811   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:50.231835   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:52.810064   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:52.823442   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:52.823508   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:52.860753   71929 cri.go:89] found id: ""
	I0717 01:57:52.860778   71929 logs.go:276] 0 containers: []
	W0717 01:57:52.860789   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:52.860797   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:52.860852   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:52.896264   71929 cri.go:89] found id: ""
	I0717 01:57:52.896289   71929 logs.go:276] 0 containers: []
	W0717 01:57:52.896297   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:52.896303   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:52.896349   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:52.932613   71929 cri.go:89] found id: ""
	I0717 01:57:52.932640   71929 logs.go:276] 0 containers: []
	W0717 01:57:52.932649   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:52.932657   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:52.932722   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:52.969691   71929 cri.go:89] found id: ""
	I0717 01:57:52.969720   71929 logs.go:276] 0 containers: []
	W0717 01:57:52.969728   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:52.969734   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:52.969788   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:53.007039   71929 cri.go:89] found id: ""
	I0717 01:57:53.007067   71929 logs.go:276] 0 containers: []
	W0717 01:57:53.007075   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:53.007081   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:53.007135   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:53.047736   71929 cri.go:89] found id: ""
	I0717 01:57:53.047762   71929 logs.go:276] 0 containers: []
	W0717 01:57:53.047772   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:53.047778   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:53.047838   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:53.083192   71929 cri.go:89] found id: ""
	I0717 01:57:53.083216   71929 logs.go:276] 0 containers: []
	W0717 01:57:53.083225   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:53.083230   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:53.083276   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:53.118509   71929 cri.go:89] found id: ""
	I0717 01:57:53.118536   71929 logs.go:276] 0 containers: []
	W0717 01:57:53.118545   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:53.118564   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:53.118589   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:53.203003   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:53.203039   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:53.244602   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:53.244627   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:53.295180   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:53.295216   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:53.310777   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:53.310805   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:53.389412   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:55.890450   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:55.903768   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:55.903843   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:55.944148   71929 cri.go:89] found id: ""
	I0717 01:57:55.944171   71929 logs.go:276] 0 containers: []
	W0717 01:57:55.944179   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:55.944185   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:55.944231   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:55.979945   71929 cri.go:89] found id: ""
	I0717 01:57:55.979970   71929 logs.go:276] 0 containers: []
	W0717 01:57:55.979980   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:55.979987   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:55.980045   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:56.019057   71929 cri.go:89] found id: ""
	I0717 01:57:56.019089   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.019100   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:56.019107   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:56.019162   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:56.054343   71929 cri.go:89] found id: ""
	I0717 01:57:56.054369   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.054378   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:56.054383   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:56.054434   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:56.091150   71929 cri.go:89] found id: ""
	I0717 01:57:56.091179   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.091189   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:56.091197   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:56.091256   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:56.127502   71929 cri.go:89] found id: ""
	I0717 01:57:56.127528   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.127538   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:56.127547   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:56.127602   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:56.167935   71929 cri.go:89] found id: ""
	I0717 01:57:56.167961   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.167972   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:56.167979   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:56.168048   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:56.209501   71929 cri.go:89] found id: ""
	I0717 01:57:56.209527   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.209537   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:56.209547   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:56.209561   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:56.257989   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:56.258023   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:56.272491   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:56.272519   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:56.361622   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:56.361653   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:56.361668   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:56.442953   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:56.442992   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:58.983914   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:58.997215   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:58.997292   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:59.032937   71929 cri.go:89] found id: ""
	I0717 01:57:59.032964   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.032980   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:59.032996   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:59.033057   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:59.067790   71929 cri.go:89] found id: ""
	I0717 01:57:59.067811   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.067819   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:59.067825   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:59.067881   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:59.107659   71929 cri.go:89] found id: ""
	I0717 01:57:59.107689   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.107699   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:59.107705   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:59.107754   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:59.150134   71929 cri.go:89] found id: ""
	I0717 01:57:59.150158   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.150168   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:59.150175   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:59.150235   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:59.192351   71929 cri.go:89] found id: ""
	I0717 01:57:59.192381   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.192391   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:59.192398   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:59.192460   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:59.228177   71929 cri.go:89] found id: ""
	I0717 01:57:59.228202   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.228209   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:59.228215   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:59.228261   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:59.267016   71929 cri.go:89] found id: ""
	I0717 01:57:59.267043   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.267052   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:59.267058   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:59.267109   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:59.302235   71929 cri.go:89] found id: ""
	I0717 01:57:59.302257   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.302263   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:59.302273   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:59.302285   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:59.368453   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:59.368492   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:59.383375   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:59.383399   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:59.454946   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:59.454975   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:59.454992   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:59.539576   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:59.539609   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:02.085516   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:02.099848   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:02.099909   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:02.136835   71929 cri.go:89] found id: ""
	I0717 01:58:02.136859   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.136867   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:02.136872   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:02.136928   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:02.175304   71929 cri.go:89] found id: ""
	I0717 01:58:02.175331   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.175338   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:02.175344   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:02.175389   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:02.210922   71929 cri.go:89] found id: ""
	I0717 01:58:02.210947   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.210955   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:02.210961   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:02.211018   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:02.246952   71929 cri.go:89] found id: ""
	I0717 01:58:02.246983   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.246992   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:02.246999   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:02.247053   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:02.284857   71929 cri.go:89] found id: ""
	I0717 01:58:02.284883   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.284892   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:02.284897   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:02.284944   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:02.322941   71929 cri.go:89] found id: ""
	I0717 01:58:02.322978   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.322999   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:02.323007   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:02.323065   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:02.357904   71929 cri.go:89] found id: ""
	I0717 01:58:02.357932   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.357943   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:02.357950   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:02.358012   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:02.392291   71929 cri.go:89] found id: ""
	I0717 01:58:02.392315   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.392322   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:02.392331   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:02.392346   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:02.447670   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:02.447704   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:02.462259   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:02.462284   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:02.534304   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:02.534332   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:02.534347   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:02.612757   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:02.612799   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:05.153573   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:05.166702   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:05.166775   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:05.205213   71929 cri.go:89] found id: ""
	I0717 01:58:05.205238   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.205247   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:05.205252   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:05.205305   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:05.242021   71929 cri.go:89] found id: ""
	I0717 01:58:05.242048   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.242057   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:05.242063   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:05.242118   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:05.281862   71929 cri.go:89] found id: ""
	I0717 01:58:05.281889   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.281900   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:05.281908   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:05.281967   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:05.318125   71929 cri.go:89] found id: ""
	I0717 01:58:05.318157   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.318169   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:05.318177   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:05.318244   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:05.352470   71929 cri.go:89] found id: ""
	I0717 01:58:05.352504   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.352516   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:05.352524   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:05.352595   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:05.386692   71929 cri.go:89] found id: ""
	I0717 01:58:05.386722   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.386733   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:05.386741   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:05.386803   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:05.426676   71929 cri.go:89] found id: ""
	I0717 01:58:05.426731   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.426744   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:05.426751   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:05.426811   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:05.467974   71929 cri.go:89] found id: ""
	I0717 01:58:05.468000   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.468010   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:05.468020   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:05.468036   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:05.506769   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:05.506797   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:05.561745   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:05.561782   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:05.576743   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:05.576775   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:05.652856   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:05.652887   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:05.652903   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:08.244185   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:08.257343   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:08.257420   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:08.297136   71929 cri.go:89] found id: ""
	I0717 01:58:08.297163   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.297174   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:08.297181   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:08.297237   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:08.336099   71929 cri.go:89] found id: ""
	I0717 01:58:08.336121   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.336129   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:08.336135   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:08.336185   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:08.369668   71929 cri.go:89] found id: ""
	I0717 01:58:08.369690   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.369698   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:08.369706   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:08.369756   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:08.405140   71929 cri.go:89] found id: ""
	I0717 01:58:08.405171   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.405179   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:08.405186   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:08.405249   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:08.446296   71929 cri.go:89] found id: ""
	I0717 01:58:08.446319   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.446326   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:08.446331   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:08.446377   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:08.483004   71929 cri.go:89] found id: ""
	I0717 01:58:08.483042   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.483062   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:08.483070   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:08.483139   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:08.520668   71929 cri.go:89] found id: ""
	I0717 01:58:08.520699   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.520710   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:08.520717   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:08.520776   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:08.554711   71929 cri.go:89] found id: ""
	I0717 01:58:08.554734   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.554744   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:08.554752   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:08.554763   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:08.606972   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:08.607004   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:08.621102   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:08.621134   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:08.690424   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:08.690443   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:08.690454   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:08.775151   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:08.775193   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:11.318471   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:11.331875   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:11.331954   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:11.375766   71929 cri.go:89] found id: ""
	I0717 01:58:11.375787   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.375795   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:11.375801   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:11.375863   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:11.417043   71929 cri.go:89] found id: ""
	I0717 01:58:11.417080   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.417103   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:11.417111   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:11.417169   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:11.459462   71929 cri.go:89] found id: ""
	I0717 01:58:11.459487   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.459495   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:11.459500   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:11.459551   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:11.516500   71929 cri.go:89] found id: ""
	I0717 01:58:11.516525   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.516533   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:11.516539   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:11.516590   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:11.573916   71929 cri.go:89] found id: ""
	I0717 01:58:11.573961   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.575159   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:11.575201   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:11.575275   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:11.619446   71929 cri.go:89] found id: ""
	I0717 01:58:11.619477   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.619489   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:11.619497   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:11.619558   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:11.654766   71929 cri.go:89] found id: ""
	I0717 01:58:11.654793   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.654802   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:11.654807   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:11.654859   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:11.690306   71929 cri.go:89] found id: ""
	I0717 01:58:11.690335   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.690346   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:11.690354   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:11.690366   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:11.744470   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:11.744516   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:11.758824   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:11.758856   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:11.841028   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:11.841058   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:11.841076   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:11.923299   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:11.923351   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:14.466666   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:14.479676   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:14.479740   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:14.517890   71929 cri.go:89] found id: ""
	I0717 01:58:14.517919   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.517931   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:14.517938   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:14.517998   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:14.552891   71929 cri.go:89] found id: ""
	I0717 01:58:14.552918   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.552926   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:14.552931   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:14.552992   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:14.593571   71929 cri.go:89] found id: ""
	I0717 01:58:14.593596   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.593604   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:14.593609   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:14.593662   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:14.628869   71929 cri.go:89] found id: ""
	I0717 01:58:14.628897   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.628907   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:14.628913   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:14.628972   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:14.663558   71929 cri.go:89] found id: ""
	I0717 01:58:14.663586   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.663593   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:14.663599   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:14.663644   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:14.700788   71929 cri.go:89] found id: ""
	I0717 01:58:14.700824   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.700834   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:14.700843   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:14.700903   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:14.737975   71929 cri.go:89] found id: ""
	I0717 01:58:14.738014   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.738025   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:14.738032   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:14.738091   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:14.775419   71929 cri.go:89] found id: ""
	I0717 01:58:14.775443   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.775453   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:14.775465   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:14.775479   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:14.817635   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:14.817661   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:14.870667   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:14.870705   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:14.885208   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:14.885235   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:14.962286   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:14.962318   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:14.962334   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:17.537546   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:17.550258   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:17.550322   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:17.586251   71929 cri.go:89] found id: ""
	I0717 01:58:17.586278   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.586286   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:17.586292   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:17.586348   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:17.620903   71929 cri.go:89] found id: ""
	I0717 01:58:17.620927   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.620935   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:17.620941   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:17.620992   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:17.659292   71929 cri.go:89] found id: ""
	I0717 01:58:17.659319   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.659328   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:17.659334   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:17.659384   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:17.695603   71929 cri.go:89] found id: ""
	I0717 01:58:17.695632   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.695642   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:17.695650   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:17.695711   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:17.731943   71929 cri.go:89] found id: ""
	I0717 01:58:17.731970   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.731978   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:17.731984   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:17.732041   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:17.767257   71929 cri.go:89] found id: ""
	I0717 01:58:17.767284   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.767293   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:17.767299   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:17.767357   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:17.802455   71929 cri.go:89] found id: ""
	I0717 01:58:17.802495   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.802508   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:17.802516   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:17.802602   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:17.839321   71929 cri.go:89] found id: ""
	I0717 01:58:17.839351   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.839362   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:17.839374   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:17.839391   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:17.912269   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:17.912295   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:17.912311   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:17.990005   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:17.990038   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:18.029933   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:18.029960   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:18.081941   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:18.081977   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:20.597325   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:20.611835   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:20.611901   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:20.647899   71929 cri.go:89] found id: ""
	I0717 01:58:20.647922   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.647931   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:20.647936   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:20.647984   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:20.683783   71929 cri.go:89] found id: ""
	I0717 01:58:20.683816   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.683827   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:20.683834   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:20.683892   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:20.721803   71929 cri.go:89] found id: ""
	I0717 01:58:20.721833   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.721844   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:20.721851   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:20.721910   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:20.756148   71929 cri.go:89] found id: ""
	I0717 01:58:20.756177   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.756189   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:20.756196   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:20.756259   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:20.795976   71929 cri.go:89] found id: ""
	I0717 01:58:20.796014   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.796028   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:20.796036   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:20.796095   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:20.833775   71929 cri.go:89] found id: ""
	I0717 01:58:20.833805   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.833816   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:20.833824   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:20.833891   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:20.869138   71929 cri.go:89] found id: ""
	I0717 01:58:20.869163   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.869173   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:20.869180   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:20.869237   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:20.904865   71929 cri.go:89] found id: ""
	I0717 01:58:20.904893   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.904901   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:20.904910   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:20.904920   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:20.947268   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:20.947294   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:20.998541   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:20.998582   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:21.013797   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:21.013828   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:21.085101   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:21.085127   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:21.085141   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:23.667361   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:23.681768   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:23.681828   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:23.717721   71929 cri.go:89] found id: ""
	I0717 01:58:23.717748   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.717757   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:23.717763   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:23.717827   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:23.752699   71929 cri.go:89] found id: ""
	I0717 01:58:23.752728   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.752738   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:23.752745   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:23.752809   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:23.790914   71929 cri.go:89] found id: ""
	I0717 01:58:23.790944   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.790955   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:23.790962   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:23.791021   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:23.827253   71929 cri.go:89] found id: ""
	I0717 01:58:23.827276   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.827285   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:23.827338   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:23.827392   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:23.864466   71929 cri.go:89] found id: ""
	I0717 01:58:23.864510   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.864520   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:23.864527   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:23.864577   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:23.900734   71929 cri.go:89] found id: ""
	I0717 01:58:23.900775   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.900786   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:23.900794   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:23.900855   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:23.937212   71929 cri.go:89] found id: ""
	I0717 01:58:23.937236   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.937243   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:23.937249   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:23.937304   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:23.973730   71929 cri.go:89] found id: ""
	I0717 01:58:23.973755   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.973764   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:23.973774   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:23.973786   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:24.026122   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:24.026163   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:24.040755   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:24.040784   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:24.112224   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:24.112254   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:24.112277   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:24.195247   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:24.195281   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:26.738042   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:26.751545   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:26.751602   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:26.786778   71929 cri.go:89] found id: ""
	I0717 01:58:26.786813   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.786824   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:26.786831   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:26.786889   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:26.828776   71929 cri.go:89] found id: ""
	I0717 01:58:26.828806   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.828818   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:26.828825   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:26.828887   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:26.868439   71929 cri.go:89] found id: ""
	I0717 01:58:26.868468   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.868479   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:26.868486   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:26.868546   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:26.900249   71929 cri.go:89] found id: ""
	I0717 01:58:26.900282   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.900292   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:26.900297   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:26.900344   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:26.933763   71929 cri.go:89] found id: ""
	I0717 01:58:26.933798   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.933808   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:26.933816   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:26.933882   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:26.968681   71929 cri.go:89] found id: ""
	I0717 01:58:26.968712   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.968722   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:26.968729   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:26.968788   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:27.002081   71929 cri.go:89] found id: ""
	I0717 01:58:27.002113   71929 logs.go:276] 0 containers: []
	W0717 01:58:27.002128   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:27.002135   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:27.002196   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:27.035138   71929 cri.go:89] found id: ""
	I0717 01:58:27.035161   71929 logs.go:276] 0 containers: []
	W0717 01:58:27.035170   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:27.035177   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:27.035189   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:27.091207   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:27.091244   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:27.105765   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:27.105793   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:27.175533   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:27.175563   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:27.175580   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:27.260903   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:27.260951   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:29.802451   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:29.816503   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:29.816573   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:29.854887   71929 cri.go:89] found id: ""
	I0717 01:58:29.854921   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.854931   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:29.854936   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:29.854983   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:29.887529   71929 cri.go:89] found id: ""
	I0717 01:58:29.887559   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.887570   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:29.887577   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:29.887638   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:29.924995   71929 cri.go:89] found id: ""
	I0717 01:58:29.925020   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.925028   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:29.925034   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:29.925091   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:29.960064   71929 cri.go:89] found id: ""
	I0717 01:58:29.960092   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.960104   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:29.960111   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:29.960178   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:29.995408   71929 cri.go:89] found id: ""
	I0717 01:58:29.995431   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.995438   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:29.995443   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:29.995494   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:30.028219   71929 cri.go:89] found id: ""
	I0717 01:58:30.028247   71929 logs.go:276] 0 containers: []
	W0717 01:58:30.028254   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:30.028260   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:30.028309   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:30.062529   71929 cri.go:89] found id: ""
	I0717 01:58:30.062576   71929 logs.go:276] 0 containers: []
	W0717 01:58:30.062589   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:30.062597   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:30.062664   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:30.095854   71929 cri.go:89] found id: ""
	I0717 01:58:30.095882   71929 logs.go:276] 0 containers: []
	W0717 01:58:30.095893   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:30.095904   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:30.095919   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:30.148083   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:30.148114   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:30.161861   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:30.161892   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:30.236474   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:30.236503   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:30.236519   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:30.319691   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:30.319720   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:32.867821   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:32.881480   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:32.881541   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:32.918289   71929 cri.go:89] found id: ""
	I0717 01:58:32.918316   71929 logs.go:276] 0 containers: []
	W0717 01:58:32.918327   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:32.918335   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:32.918396   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:32.955383   71929 cri.go:89] found id: ""
	I0717 01:58:32.955417   71929 logs.go:276] 0 containers: []
	W0717 01:58:32.955426   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:32.955433   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:32.955498   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:32.990432   71929 cri.go:89] found id: ""
	I0717 01:58:32.990460   71929 logs.go:276] 0 containers: []
	W0717 01:58:32.990467   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:32.990472   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:32.990531   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:33.034653   71929 cri.go:89] found id: ""
	I0717 01:58:33.034685   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.034697   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:33.034703   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:33.034763   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:33.077875   71929 cri.go:89] found id: ""
	I0717 01:58:33.077911   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.077919   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:33.077926   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:33.077988   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:33.114800   71929 cri.go:89] found id: ""
	I0717 01:58:33.114840   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.114852   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:33.114864   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:33.114946   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:33.151095   71929 cri.go:89] found id: ""
	I0717 01:58:33.151229   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.151242   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:33.151249   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:33.151324   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:33.190100   71929 cri.go:89] found id: ""
	I0717 01:58:33.190128   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.190138   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:33.190149   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:33.190163   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:33.271195   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:33.271231   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:33.317539   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:33.317569   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:33.370188   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:33.370224   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:33.385016   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:33.385045   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:33.460017   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:35.960499   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:35.974504   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:35.974583   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:36.008652   71929 cri.go:89] found id: ""
	I0717 01:58:36.008696   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.008704   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:36.008710   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:36.008770   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:36.044068   71929 cri.go:89] found id: ""
	I0717 01:58:36.044097   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.044106   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:36.044113   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:36.044174   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:36.083572   71929 cri.go:89] found id: ""
	I0717 01:58:36.083602   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.083610   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:36.083616   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:36.083682   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:36.116716   71929 cri.go:89] found id: ""
	I0717 01:58:36.116744   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.116753   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:36.116761   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:36.116820   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:36.156042   71929 cri.go:89] found id: ""
	I0717 01:58:36.156069   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.156080   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:36.156087   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:36.156148   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:36.192005   71929 cri.go:89] found id: ""
	I0717 01:58:36.192033   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.192045   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:36.192055   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:36.192116   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:36.228720   71929 cri.go:89] found id: ""
	I0717 01:58:36.228751   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.228763   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:36.228769   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:36.228817   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:36.263835   71929 cri.go:89] found id: ""
	I0717 01:58:36.263862   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.263872   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:36.263882   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:36.263897   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:36.278545   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:36.278609   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:36.361182   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:36.361208   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:36.361225   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:36.447797   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:36.447832   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:36.492167   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:36.492196   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:39.045613   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:39.058615   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:39.058688   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:39.094625   71929 cri.go:89] found id: ""
	I0717 01:58:39.094672   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.094684   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:39.094692   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:39.094755   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:39.132856   71929 cri.go:89] found id: ""
	I0717 01:58:39.132887   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.132898   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:39.132905   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:39.132966   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:39.171017   71929 cri.go:89] found id: ""
	I0717 01:58:39.171037   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.171044   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:39.171051   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:39.171112   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:39.210146   71929 cri.go:89] found id: ""
	I0717 01:58:39.210176   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.210186   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:39.210193   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:39.210269   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:39.244307   71929 cri.go:89] found id: ""
	I0717 01:58:39.244332   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.244342   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:39.244349   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:39.244411   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:39.279649   71929 cri.go:89] found id: ""
	I0717 01:58:39.279675   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.279682   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:39.279688   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:39.279755   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:39.317699   71929 cri.go:89] found id: ""
	I0717 01:58:39.317726   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.317735   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:39.317742   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:39.317789   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:39.352319   71929 cri.go:89] found id: ""
	I0717 01:58:39.352351   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.352365   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:39.352377   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:39.352392   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:39.404153   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:39.404188   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:39.419796   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:39.419828   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:39.495463   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:39.495485   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:39.495499   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:39.576742   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:39.576795   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:42.132481   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:42.145588   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:42.145658   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:42.181231   71929 cri.go:89] found id: ""
	I0717 01:58:42.181257   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.181265   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:42.181270   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:42.181321   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:42.216876   71929 cri.go:89] found id: ""
	I0717 01:58:42.216905   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.216917   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:42.216923   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:42.216984   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:42.256918   71929 cri.go:89] found id: ""
	I0717 01:58:42.256948   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.256959   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:42.256967   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:42.257022   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:42.291930   71929 cri.go:89] found id: ""
	I0717 01:58:42.291957   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.291964   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:42.291975   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:42.292035   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:42.329927   71929 cri.go:89] found id: ""
	I0717 01:58:42.329954   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.329964   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:42.329970   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:42.330014   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:42.364041   71929 cri.go:89] found id: ""
	I0717 01:58:42.364072   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.364085   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:42.364093   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:42.364150   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:42.400751   71929 cri.go:89] found id: ""
	I0717 01:58:42.400775   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.400784   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:42.400790   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:42.400840   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:42.438200   71929 cri.go:89] found id: ""
	I0717 01:58:42.438228   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.438240   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:42.438251   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:42.438265   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:42.455268   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:42.455303   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:42.537344   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:42.537368   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:42.537381   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:42.618487   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:42.618522   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:42.661273   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:42.661299   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:45.212631   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:45.226247   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:45.226330   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:45.263067   71929 cri.go:89] found id: ""
	I0717 01:58:45.263098   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.263110   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:45.263117   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:45.263177   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:45.299025   71929 cri.go:89] found id: ""
	I0717 01:58:45.299056   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.299067   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:45.299074   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:45.299137   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:45.346828   71929 cri.go:89] found id: ""
	I0717 01:58:45.346858   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.346868   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:45.346876   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:45.346938   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:45.390879   71929 cri.go:89] found id: ""
	I0717 01:58:45.390905   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.390913   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:45.390918   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:45.390966   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:45.426794   71929 cri.go:89] found id: ""
	I0717 01:58:45.426823   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.426834   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:45.426841   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:45.426902   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:45.463834   71929 cri.go:89] found id: ""
	I0717 01:58:45.463863   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.463873   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:45.463880   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:45.463942   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:45.500660   71929 cri.go:89] found id: ""
	I0717 01:58:45.500689   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.500701   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:45.500708   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:45.500766   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:45.537332   71929 cri.go:89] found id: ""
	I0717 01:58:45.537356   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.537364   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:45.537373   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:45.537388   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:45.551194   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:45.551222   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:45.623863   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:45.623892   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:45.623906   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:45.699740   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:45.699782   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:45.739580   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:45.739613   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:48.300789   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:48.315608   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:48.315667   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:48.353050   71929 cri.go:89] found id: ""
	I0717 01:58:48.353076   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.353084   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:48.353089   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:48.353133   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:48.394789   71929 cri.go:89] found id: ""
	I0717 01:58:48.394817   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.394829   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:48.394837   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:48.394900   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:48.433430   71929 cri.go:89] found id: ""
	I0717 01:58:48.433457   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.433468   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:48.433475   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:48.433530   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:48.467215   71929 cri.go:89] found id: ""
	I0717 01:58:48.467243   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.467254   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:48.467262   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:48.467318   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:48.501087   71929 cri.go:89] found id: ""
	I0717 01:58:48.501120   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.501131   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:48.501138   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:48.501204   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:48.538648   71929 cri.go:89] found id: ""
	I0717 01:58:48.538683   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.538696   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:48.538706   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:48.538762   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:48.573006   71929 cri.go:89] found id: ""
	I0717 01:58:48.573030   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.573040   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:48.573047   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:48.573106   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:48.608779   71929 cri.go:89] found id: ""
	I0717 01:58:48.608803   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.608813   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:48.608824   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:48.608837   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:48.659250   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:48.659290   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:48.673418   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:48.673449   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:48.748175   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:48.748196   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:48.748207   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:48.824238   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:48.824274   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:51.367155   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:51.382458   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:51.382527   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:51.424005   71929 cri.go:89] found id: ""
	I0717 01:58:51.424040   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.424051   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:51.424059   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:51.424117   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:51.463318   71929 cri.go:89] found id: ""
	I0717 01:58:51.463348   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.463357   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:51.463363   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:51.463414   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:51.502261   71929 cri.go:89] found id: ""
	I0717 01:58:51.502290   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.502301   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:51.502309   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:51.502362   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:51.536277   71929 cri.go:89] found id: ""
	I0717 01:58:51.536308   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.536319   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:51.536327   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:51.536392   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:51.580598   71929 cri.go:89] found id: ""
	I0717 01:58:51.580629   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.580640   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:51.580648   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:51.580726   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:51.618666   71929 cri.go:89] found id: ""
	I0717 01:58:51.618690   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.618697   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:51.618702   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:51.618747   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:51.654742   71929 cri.go:89] found id: ""
	I0717 01:58:51.654777   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.654790   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:51.654799   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:51.654863   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:51.698006   71929 cri.go:89] found id: ""
	I0717 01:58:51.698034   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.698043   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:51.698051   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:51.698062   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:51.754812   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:51.754852   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:51.771887   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:51.771919   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:51.859627   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:51.859657   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:51.859675   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:51.946633   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:51.946673   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:54.494188   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:54.509111   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:54.509190   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:54.546424   71929 cri.go:89] found id: ""
	I0717 01:58:54.546454   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.546464   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:54.546471   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:54.546532   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:54.586811   71929 cri.go:89] found id: ""
	I0717 01:58:54.586841   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.586853   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:54.586860   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:54.586918   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:54.627350   71929 cri.go:89] found id: ""
	I0717 01:58:54.627375   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.627383   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:54.627388   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:54.627438   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:54.665901   71929 cri.go:89] found id: ""
	I0717 01:58:54.665941   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.665954   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:54.665974   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:54.666041   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:54.702921   71929 cri.go:89] found id: ""
	I0717 01:58:54.702948   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.702958   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:54.702965   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:54.703027   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:54.737378   71929 cri.go:89] found id: ""
	I0717 01:58:54.737406   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.737414   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:54.737421   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:54.737469   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:54.771924   71929 cri.go:89] found id: ""
	I0717 01:58:54.771954   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.771964   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:54.771971   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:54.772055   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:54.812939   71929 cri.go:89] found id: ""
	I0717 01:58:54.812972   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.812983   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:54.812995   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:54.813010   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:54.862979   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:54.863013   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:54.877467   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:54.877504   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:54.953924   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:54.953950   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:54.953963   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:55.032019   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:55.032052   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:57.573130   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:57.591689   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:57.591762   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:57.626444   71929 cri.go:89] found id: ""
	I0717 01:58:57.626469   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.626479   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:57.626486   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:57.626570   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:57.661280   71929 cri.go:89] found id: ""
	I0717 01:58:57.661305   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.661314   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:57.661321   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:57.661376   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:57.695678   71929 cri.go:89] found id: ""
	I0717 01:58:57.695703   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.695711   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:57.695717   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:57.695762   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:57.729705   71929 cri.go:89] found id: ""
	I0717 01:58:57.729734   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.729742   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:57.729748   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:57.729804   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:57.763338   71929 cri.go:89] found id: ""
	I0717 01:58:57.763365   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.763373   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:57.763387   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:57.763433   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:57.800576   71929 cri.go:89] found id: ""
	I0717 01:58:57.800600   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.800608   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:57.800615   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:57.800701   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:57.842401   71929 cri.go:89] found id: ""
	I0717 01:58:57.842428   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.842439   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:57.842446   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:57.842503   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:57.880355   71929 cri.go:89] found id: ""
	I0717 01:58:57.880379   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.880387   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:57.880395   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:57.880412   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:57.938215   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:57.938252   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:57.952835   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:57.952876   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:58.027203   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:58.027231   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:58.027246   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:58.108442   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:58.108483   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:00.648580   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:00.662596   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:00.662667   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:00.696315   71929 cri.go:89] found id: ""
	I0717 01:59:00.696342   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.696351   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:00.696356   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:00.696411   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:00.732117   71929 cri.go:89] found id: ""
	I0717 01:59:00.732147   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.732158   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:00.732164   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:00.732212   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:00.768747   71929 cri.go:89] found id: ""
	I0717 01:59:00.768779   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.768790   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:00.768797   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:00.768856   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:00.807557   71929 cri.go:89] found id: ""
	I0717 01:59:00.807585   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.807592   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:00.807598   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:00.807651   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:00.844127   71929 cri.go:89] found id: ""
	I0717 01:59:00.844152   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.844161   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:00.844166   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:00.844222   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:00.879565   71929 cri.go:89] found id: ""
	I0717 01:59:00.879590   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.879597   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:00.879613   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:00.879684   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:00.917352   71929 cri.go:89] found id: ""
	I0717 01:59:00.917379   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.917387   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:00.917393   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:00.917440   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:00.952603   71929 cri.go:89] found id: ""
	I0717 01:59:00.952630   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.952637   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:00.952647   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:00.952688   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:01.007203   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:01.007242   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:01.021476   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:01.021512   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:01.102283   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:01.102306   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:01.102320   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:01.175736   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:01.175771   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:03.717612   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:03.732446   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:03.732511   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:03.767485   71929 cri.go:89] found id: ""
	I0717 01:59:03.767519   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.767533   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:03.767542   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:03.767607   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:03.803961   71929 cri.go:89] found id: ""
	I0717 01:59:03.803989   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.804000   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:03.804007   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:03.804074   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:03.842734   71929 cri.go:89] found id: ""
	I0717 01:59:03.842768   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.842780   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:03.842788   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:03.842915   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:03.883571   71929 cri.go:89] found id: ""
	I0717 01:59:03.883598   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.883608   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:03.883616   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:03.883682   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:03.922037   71929 cri.go:89] found id: ""
	I0717 01:59:03.922065   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.922076   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:03.922084   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:03.922143   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:03.961135   71929 cri.go:89] found id: ""
	I0717 01:59:03.961165   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.961176   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:03.961183   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:03.961244   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:03.995542   71929 cri.go:89] found id: ""
	I0717 01:59:03.995570   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.995580   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:03.995589   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:03.995647   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:04.030142   71929 cri.go:89] found id: ""
	I0717 01:59:04.030170   71929 logs.go:276] 0 containers: []
	W0717 01:59:04.030178   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:04.030187   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:04.030198   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:04.110329   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:04.110366   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:04.152194   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:04.152224   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:04.204012   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:04.204048   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:04.218261   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:04.218291   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:04.290786   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:06.791166   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:06.806662   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:06.806722   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:06.841447   71929 cri.go:89] found id: ""
	I0717 01:59:06.841476   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.841486   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:06.841494   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:06.841554   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:06.879920   71929 cri.go:89] found id: ""
	I0717 01:59:06.879956   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.879971   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:06.879976   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:06.880033   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:06.914436   71929 cri.go:89] found id: ""
	I0717 01:59:06.914465   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.914476   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:06.914484   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:06.914566   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:06.952098   71929 cri.go:89] found id: ""
	I0717 01:59:06.952127   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.952135   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:06.952141   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:06.952187   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:06.988054   71929 cri.go:89] found id: ""
	I0717 01:59:06.988085   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.988096   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:06.988103   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:06.988168   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:07.026633   71929 cri.go:89] found id: ""
	I0717 01:59:07.026658   71929 logs.go:276] 0 containers: []
	W0717 01:59:07.026670   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:07.026676   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:07.026732   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:07.064433   71929 cri.go:89] found id: ""
	I0717 01:59:07.064454   71929 logs.go:276] 0 containers: []
	W0717 01:59:07.064463   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:07.064468   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:07.064545   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:07.108352   71929 cri.go:89] found id: ""
	I0717 01:59:07.108385   71929 logs.go:276] 0 containers: []
	W0717 01:59:07.108396   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:07.108410   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:07.108428   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:07.163554   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:07.163593   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:07.177221   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:07.177249   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:07.249712   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:07.249735   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:07.249748   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:07.333011   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:07.333044   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:09.873187   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:09.887579   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:09.887658   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:09.923675   71929 cri.go:89] found id: ""
	I0717 01:59:09.923706   71929 logs.go:276] 0 containers: []
	W0717 01:59:09.923716   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:09.923724   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:09.923789   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:09.961248   71929 cri.go:89] found id: ""
	I0717 01:59:09.961278   71929 logs.go:276] 0 containers: []
	W0717 01:59:09.961288   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:09.961296   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:09.961354   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:10.000069   71929 cri.go:89] found id: ""
	I0717 01:59:10.000094   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.000101   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:10.000107   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:10.000157   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:10.036784   71929 cri.go:89] found id: ""
	I0717 01:59:10.036808   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.036815   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:10.036820   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:10.036869   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:10.072746   71929 cri.go:89] found id: ""
	I0717 01:59:10.072778   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.072789   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:10.072796   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:10.072856   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:10.109520   71929 cri.go:89] found id: ""
	I0717 01:59:10.109544   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.109552   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:10.109557   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:10.109608   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:10.142521   71929 cri.go:89] found id: ""
	I0717 01:59:10.142565   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.142576   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:10.142584   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:10.142647   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:10.175772   71929 cri.go:89] found id: ""
	I0717 01:59:10.175800   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.175812   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:10.175823   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:10.175837   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:10.213534   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:10.213561   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:10.266449   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:10.266485   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:10.282204   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:10.282234   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:10.353974   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:10.354004   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:10.354017   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:12.936509   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:12.951547   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:12.951616   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:12.987833   71929 cri.go:89] found id: ""
	I0717 01:59:12.987860   71929 logs.go:276] 0 containers: []
	W0717 01:59:12.987868   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:12.987873   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:12.987922   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:13.026500   71929 cri.go:89] found id: ""
	I0717 01:59:13.026529   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.026539   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:13.026546   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:13.026625   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:13.061631   71929 cri.go:89] found id: ""
	I0717 01:59:13.061664   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.061674   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:13.061682   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:13.061745   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:13.099449   71929 cri.go:89] found id: ""
	I0717 01:59:13.099476   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.099487   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:13.099494   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:13.099565   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:13.137271   71929 cri.go:89] found id: ""
	I0717 01:59:13.137299   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.137309   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:13.137317   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:13.137384   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:13.174432   71929 cri.go:89] found id: ""
	I0717 01:59:13.174462   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.174472   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:13.174478   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:13.174539   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:13.212820   71929 cri.go:89] found id: ""
	I0717 01:59:13.212845   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.212855   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:13.212865   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:13.212930   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:13.245961   71929 cri.go:89] found id: ""
	I0717 01:59:13.245993   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.246004   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:13.246014   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:13.246028   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:13.284801   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:13.284828   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:13.338476   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:13.338511   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:13.352751   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:13.352777   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:13.434001   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:13.434035   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:13.434050   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:16.022525   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:16.036863   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:16.036941   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:16.074370   71929 cri.go:89] found id: ""
	I0717 01:59:16.074398   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.074409   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:16.074416   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:16.074476   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:16.112239   71929 cri.go:89] found id: ""
	I0717 01:59:16.112267   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.112276   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:16.112281   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:16.112329   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:16.147398   71929 cri.go:89] found id: ""
	I0717 01:59:16.147422   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.147429   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:16.147435   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:16.147490   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:16.182112   71929 cri.go:89] found id: ""
	I0717 01:59:16.182141   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.182149   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:16.182155   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:16.182203   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:16.219134   71929 cri.go:89] found id: ""
	I0717 01:59:16.219163   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.219174   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:16.219182   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:16.219238   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:16.255892   71929 cri.go:89] found id: ""
	I0717 01:59:16.255924   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.255934   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:16.255943   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:16.256003   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:16.291202   71929 cri.go:89] found id: ""
	I0717 01:59:16.291228   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.291238   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:16.291245   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:16.291304   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:16.330748   71929 cri.go:89] found id: ""
	I0717 01:59:16.330779   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.330790   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:16.330801   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:16.330815   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:16.344628   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:16.344668   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:16.415735   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:16.415761   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:16.415775   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:16.499411   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:16.499449   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:16.541244   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:16.541270   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:19.095060   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:19.107920   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:19.107976   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:19.143446   71929 cri.go:89] found id: ""
	I0717 01:59:19.143476   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.143485   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:19.143490   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:19.143550   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:19.179216   71929 cri.go:89] found id: ""
	I0717 01:59:19.179247   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.179259   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:19.179266   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:19.179317   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:19.212468   71929 cri.go:89] found id: ""
	I0717 01:59:19.212498   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.212508   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:19.212516   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:19.212574   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:19.245019   71929 cri.go:89] found id: ""
	I0717 01:59:19.245047   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.245058   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:19.245065   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:19.245123   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:19.278430   71929 cri.go:89] found id: ""
	I0717 01:59:19.278457   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.278467   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:19.278474   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:19.278530   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:19.317685   71929 cri.go:89] found id: ""
	I0717 01:59:19.317714   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.317722   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:19.317729   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:19.317783   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:19.352938   71929 cri.go:89] found id: ""
	I0717 01:59:19.352974   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.352986   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:19.353000   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:19.353052   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:19.387238   71929 cri.go:89] found id: ""
	I0717 01:59:19.387272   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.387283   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:19.387295   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:19.387314   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:19.440138   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:19.440171   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:19.456372   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:19.456402   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:19.527881   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:19.527906   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:19.527921   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:19.611903   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:19.611937   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:22.160422   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:22.172802   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:22.172862   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:22.209283   71929 cri.go:89] found id: ""
	I0717 01:59:22.209315   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.209327   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:22.209335   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:22.209396   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:22.243927   71929 cri.go:89] found id: ""
	I0717 01:59:22.243955   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.243965   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:22.243972   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:22.244022   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:22.276730   71929 cri.go:89] found id: ""
	I0717 01:59:22.276754   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.276761   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:22.276767   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:22.276814   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:22.319378   71929 cri.go:89] found id: ""
	I0717 01:59:22.319407   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.319418   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:22.319425   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:22.319482   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:22.358272   71929 cri.go:89] found id: ""
	I0717 01:59:22.358298   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.358307   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:22.358312   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:22.358362   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:22.395358   71929 cri.go:89] found id: ""
	I0717 01:59:22.395393   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.395405   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:22.395414   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:22.395477   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:22.435158   71929 cri.go:89] found id: ""
	I0717 01:59:22.435184   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.435194   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:22.435201   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:22.435248   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:22.471553   71929 cri.go:89] found id: ""
	I0717 01:59:22.471588   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.471595   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:22.471604   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:22.471616   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:22.523133   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:22.523169   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:22.539212   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:22.539246   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:22.615707   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:22.615729   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:22.615744   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:22.696758   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:22.696795   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:25.238496   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:25.252882   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:25.252946   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:25.290173   71929 cri.go:89] found id: ""
	I0717 01:59:25.290197   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.290205   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:25.290210   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:25.290263   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:25.325926   71929 cri.go:89] found id: ""
	I0717 01:59:25.325968   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.325979   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:25.325985   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:25.326032   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:25.358310   71929 cri.go:89] found id: ""
	I0717 01:59:25.358362   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.358371   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:25.358377   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:25.358426   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:25.393575   71929 cri.go:89] found id: ""
	I0717 01:59:25.393605   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.393615   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:25.393622   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:25.393684   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:25.429357   71929 cri.go:89] found id: ""
	I0717 01:59:25.429448   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.429466   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:25.429474   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:25.429546   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:25.466992   71929 cri.go:89] found id: ""
	I0717 01:59:25.467020   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.467028   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:25.467034   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:25.467080   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:25.503545   71929 cri.go:89] found id: ""
	I0717 01:59:25.503575   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.503587   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:25.503594   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:25.503643   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:25.542957   71929 cri.go:89] found id: ""
	I0717 01:59:25.542983   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.542993   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:25.543003   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:25.543015   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:25.598813   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:25.598852   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:25.618060   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:25.618098   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:25.690079   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:25.690105   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:25.690119   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:25.765956   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:25.765994   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:28.311715   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:28.325493   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:28.325554   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:28.365783   71929 cri.go:89] found id: ""
	I0717 01:59:28.365810   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.365821   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:28.365829   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:28.365885   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:28.401847   71929 cri.go:89] found id: ""
	I0717 01:59:28.401875   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.401883   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:28.401895   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:28.401954   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:28.442236   71929 cri.go:89] found id: ""
	I0717 01:59:28.442261   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.442272   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:28.442278   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:28.442340   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:28.476832   71929 cri.go:89] found id: ""
	I0717 01:59:28.476857   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.476866   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:28.476873   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:28.476928   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:28.512040   71929 cri.go:89] found id: ""
	I0717 01:59:28.512068   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.512075   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:28.512081   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:28.512136   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:28.547516   71929 cri.go:89] found id: ""
	I0717 01:59:28.547547   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.547558   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:28.547566   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:28.547625   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:28.580380   71929 cri.go:89] found id: ""
	I0717 01:59:28.580406   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.580417   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:28.580427   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:28.580485   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:28.616029   71929 cri.go:89] found id: ""
	I0717 01:59:28.616059   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.616069   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:28.616080   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:28.616095   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:28.670188   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:28.670230   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:28.687315   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:28.687355   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:28.763591   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:28.763612   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:28.763627   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:28.848925   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:28.848959   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:31.388294   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:31.404748   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:31.404814   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:31.446437   71929 cri.go:89] found id: ""
	I0717 01:59:31.446468   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.446478   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:31.446484   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:31.446531   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:31.487797   71929 cri.go:89] found id: ""
	I0717 01:59:31.487828   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.487840   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:31.487847   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:31.487895   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:31.525327   71929 cri.go:89] found id: ""
	I0717 01:59:31.525354   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.525368   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:31.525375   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:31.525436   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:31.564106   71929 cri.go:89] found id: ""
	I0717 01:59:31.564154   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.564166   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:31.564173   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:31.564234   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:31.603345   71929 cri.go:89] found id: ""
	I0717 01:59:31.603374   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.603385   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:31.603393   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:31.603456   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:31.641727   71929 cri.go:89] found id: ""
	I0717 01:59:31.641753   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.641769   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:31.641776   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:31.641837   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:31.680825   71929 cri.go:89] found id: ""
	I0717 01:59:31.680856   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.680866   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:31.680873   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:31.680930   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:31.714325   71929 cri.go:89] found id: ""
	I0717 01:59:31.714348   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.714355   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:31.714363   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:31.714374   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:31.765899   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:31.765934   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:31.781417   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:31.781447   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:31.857586   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:31.857607   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:31.857622   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:31.937171   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:31.937197   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
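The block above is one full pass of minikube's kube-apiserver wait loop: it looks for an apiserver process with pgrep, lists CRI containers for each control-plane component with crictl, finds none, and then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying a few seconds later. As a reading aid, here is a minimal sketch of the same checks run by hand over `minikube ssh`; the component names, crictl flags, and kubectl/kubeconfig paths are copied from the log, while the loop wrapper and echo messages are illustrative assumptions, not minikube code.

    # Sketch (not test output): the same per-component checks the loop above runs,
    # assuming shell access to the node via `minikube ssh`.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      echo "$c: ${ids:-<no containers>}"
    done
    # "connection to the server localhost:8443 was refused" in each round means the
    # apiserver is not serving, so describe-nodes can only fail:
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig || true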
	I0717 01:59:34.478176   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:34.492153   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:34.492223   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:34.526959   71929 cri.go:89] found id: ""
	I0717 01:59:34.526984   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.526998   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:34.527006   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:34.527064   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:34.564485   71929 cri.go:89] found id: ""
	I0717 01:59:34.564534   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.564546   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:34.564591   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:34.564706   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:34.604611   71929 cri.go:89] found id: ""
	I0717 01:59:34.604637   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.604649   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:34.604657   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:34.604718   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:34.640851   71929 cri.go:89] found id: ""
	I0717 01:59:34.640882   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.640892   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:34.640897   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:34.640956   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:34.675828   71929 cri.go:89] found id: ""
	I0717 01:59:34.675856   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.675863   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:34.675869   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:34.675918   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:34.710468   71929 cri.go:89] found id: ""
	I0717 01:59:34.710496   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.710506   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:34.710514   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:34.710595   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:34.749218   71929 cri.go:89] found id: ""
	I0717 01:59:34.749249   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.749260   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:34.749267   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:34.749328   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:34.784934   71929 cri.go:89] found id: ""
	I0717 01:59:34.784969   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.784979   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:34.784990   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:34.785006   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:34.799836   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:34.799870   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:34.870218   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:34.870239   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:34.870254   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:34.948782   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:34.948817   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:34.992295   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:34.992324   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:37.545759   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:37.559648   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:37.559724   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:37.596642   71929 cri.go:89] found id: ""
	I0717 01:59:37.596696   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.596707   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:37.596715   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:37.596770   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:37.637251   71929 cri.go:89] found id: ""
	I0717 01:59:37.637283   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.637312   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:37.637318   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:37.637372   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:37.672811   71929 cri.go:89] found id: ""
	I0717 01:59:37.672839   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.672847   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:37.672852   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:37.672909   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:37.706864   71929 cri.go:89] found id: ""
	I0717 01:59:37.706904   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.706916   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:37.706923   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:37.706975   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:37.747539   71929 cri.go:89] found id: ""
	I0717 01:59:37.747567   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.747576   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:37.747581   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:37.747630   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:37.785229   71929 cri.go:89] found id: ""
	I0717 01:59:37.785251   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.785260   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:37.785268   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:37.785333   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:37.840428   71929 cri.go:89] found id: ""
	I0717 01:59:37.840460   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.840471   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:37.840477   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:37.840533   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:37.876888   71929 cri.go:89] found id: ""
	I0717 01:59:37.876916   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.876924   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:37.876932   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:37.876942   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:37.926161   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:37.926197   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:37.940857   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:37.940885   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:38.019210   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:38.019232   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:38.019245   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:38.112428   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:38.112471   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:40.657215   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:40.670824   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:40.670900   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:40.704008   71929 cri.go:89] found id: ""
	I0717 01:59:40.704030   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.704040   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:40.704048   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:40.704102   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:40.739544   71929 cri.go:89] found id: ""
	I0717 01:59:40.739576   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.739587   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:40.739595   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:40.739664   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:40.773132   71929 cri.go:89] found id: ""
	I0717 01:59:40.773159   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.773169   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:40.773177   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:40.773239   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:40.810162   71929 cri.go:89] found id: ""
	I0717 01:59:40.810183   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.810193   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:40.810200   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:40.810256   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:40.844797   71929 cri.go:89] found id: ""
	I0717 01:59:40.844829   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.844840   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:40.844847   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:40.844918   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:40.884444   71929 cri.go:89] found id: ""
	I0717 01:59:40.884468   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.884476   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:40.884482   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:40.884544   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:40.919413   71929 cri.go:89] found id: ""
	I0717 01:59:40.919437   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.919445   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:40.919451   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:40.919505   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:40.961870   71929 cri.go:89] found id: ""
	I0717 01:59:40.961894   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.961902   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:40.961910   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:40.961921   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:41.010600   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:41.010638   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:41.025557   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:41.025589   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:41.100100   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:41.100123   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:41.100135   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:41.185809   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:41.185850   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:43.725542   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:43.739304   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:43.739379   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:43.776754   71929 cri.go:89] found id: ""
	I0717 01:59:43.776783   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.776795   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:43.776802   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:43.776863   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:43.819729   71929 cri.go:89] found id: ""
	I0717 01:59:43.819756   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.819767   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:43.819774   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:43.819828   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:43.860283   71929 cri.go:89] found id: ""
	I0717 01:59:43.860311   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.860322   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:43.860329   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:43.860391   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:43.898684   71929 cri.go:89] found id: ""
	I0717 01:59:43.898712   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.898722   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:43.898729   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:43.898788   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:43.942996   71929 cri.go:89] found id: ""
	I0717 01:59:43.943019   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.943026   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:43.943031   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:43.943077   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:43.981799   71929 cri.go:89] found id: ""
	I0717 01:59:43.981828   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.981839   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:43.981846   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:43.981903   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:44.018222   71929 cri.go:89] found id: ""
	I0717 01:59:44.018252   71929 logs.go:276] 0 containers: []
	W0717 01:59:44.018262   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:44.018267   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:44.018326   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:44.056264   71929 cri.go:89] found id: ""
	I0717 01:59:44.056293   71929 logs.go:276] 0 containers: []
	W0717 01:59:44.056304   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:44.056315   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:44.056334   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:44.172061   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:44.172108   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:44.219597   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:44.219627   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:44.272299   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:44.272334   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:44.287811   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:44.287848   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:44.379183   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
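Each round ends with the same set of diagnostics, but the report only shows that they were collected, not their contents. If more detail is needed, the sketch below simply re-runs those gather commands directly on the node so the full kubelet and CRI-O journals can be read; the commands are taken from the log lines above, and only the added --no-pager flag is an assumption for interactive use.

    # Sketch (not test output): the log-gathering commands from the rounds above,
    # runnable as-is on the node; --no-pager keeps journalctl from paging.
    sudo journalctl -u kubelet -n 400 --no-pager
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400 --no-pager
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a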
	I0717 01:59:46.879529   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:46.893142   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:46.893207   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:46.929073   71929 cri.go:89] found id: ""
	I0717 01:59:46.929101   71929 logs.go:276] 0 containers: []
	W0717 01:59:46.929113   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:46.929121   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:46.929173   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:46.963697   71929 cri.go:89] found id: ""
	I0717 01:59:46.963725   71929 logs.go:276] 0 containers: []
	W0717 01:59:46.963733   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:46.963739   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:46.963798   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:47.000697   71929 cri.go:89] found id: ""
	I0717 01:59:47.000730   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.000747   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:47.000752   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:47.000804   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:47.037270   71929 cri.go:89] found id: ""
	I0717 01:59:47.037304   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.037316   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:47.037323   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:47.037382   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:47.072210   71929 cri.go:89] found id: ""
	I0717 01:59:47.072238   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.072249   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:47.072256   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:47.072321   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:47.108404   71929 cri.go:89] found id: ""
	I0717 01:59:47.108432   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.108443   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:47.108451   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:47.108535   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:47.146122   71929 cri.go:89] found id: ""
	I0717 01:59:47.146151   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.146162   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:47.146169   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:47.146225   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:47.187418   71929 cri.go:89] found id: ""
	I0717 01:59:47.187446   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.187455   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:47.187466   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:47.187481   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:47.201023   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:47.201053   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:47.269851   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:47.269878   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:47.269894   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:47.356417   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:47.356456   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:47.397763   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:47.397791   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:49.954670   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:49.968840   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:49.968898   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:50.003598   71929 cri.go:89] found id: ""
	I0717 01:59:50.003635   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.003646   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:50.003654   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:50.003714   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:50.040494   71929 cri.go:89] found id: ""
	I0717 01:59:50.040546   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.040558   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:50.040564   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:50.040624   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:50.074921   71929 cri.go:89] found id: ""
	I0717 01:59:50.074950   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.074959   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:50.074965   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:50.075015   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:50.117002   71929 cri.go:89] found id: ""
	I0717 01:59:50.117030   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.117041   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:50.117049   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:50.117106   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:50.163026   71929 cri.go:89] found id: ""
	I0717 01:59:50.163052   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.163063   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:50.163071   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:50.163129   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:50.197709   71929 cri.go:89] found id: ""
	I0717 01:59:50.197738   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.197749   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:50.197757   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:50.197838   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:50.237776   71929 cri.go:89] found id: ""
	I0717 01:59:50.237808   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.237819   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:50.237827   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:50.237886   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:50.275147   71929 cri.go:89] found id: ""
	I0717 01:59:50.275179   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.275189   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:50.275201   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:50.275215   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:50.329025   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:50.329057   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:50.342745   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:50.342777   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:50.417792   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:50.417817   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:50.417829   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:50.495288   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:50.495322   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:53.036151   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:53.049820   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:53.049879   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:53.087144   71929 cri.go:89] found id: ""
	I0717 01:59:53.087175   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.087189   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:53.087195   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:53.087253   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:53.123135   71929 cri.go:89] found id: ""
	I0717 01:59:53.123164   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.123175   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:53.123191   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:53.123254   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:53.157887   71929 cri.go:89] found id: ""
	I0717 01:59:53.157912   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.157922   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:53.157927   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:53.158004   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:53.201002   71929 cri.go:89] found id: ""
	I0717 01:59:53.201033   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.201045   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:53.201054   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:53.201115   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:53.236159   71929 cri.go:89] found id: ""
	I0717 01:59:53.236188   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.236198   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:53.236204   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:53.236258   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:53.277585   71929 cri.go:89] found id: ""
	I0717 01:59:53.277616   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.277627   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:53.277634   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:53.277694   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:53.322722   71929 cri.go:89] found id: ""
	I0717 01:59:53.322747   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.322758   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:53.322765   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:53.322824   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:53.364112   71929 cri.go:89] found id: ""
	I0717 01:59:53.364138   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.364149   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:53.364159   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:53.364172   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:53.418701   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:53.418739   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:53.435004   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:53.435030   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:53.511254   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:53.511274   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:53.511287   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:53.587967   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:53.588003   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:56.130773   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:56.144742   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:56.144811   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:56.180267   71929 cri.go:89] found id: ""
	I0717 01:59:56.180295   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.180306   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:56.180313   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:56.180373   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:56.217223   71929 cri.go:89] found id: ""
	I0717 01:59:56.217252   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.217263   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:56.217269   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:56.217334   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:56.251714   71929 cri.go:89] found id: ""
	I0717 01:59:56.251738   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.251745   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:56.251752   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:56.251805   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:56.292557   71929 cri.go:89] found id: ""
	I0717 01:59:56.292589   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.292597   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:56.292603   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:56.292653   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:56.332463   71929 cri.go:89] found id: ""
	I0717 01:59:56.332491   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.332501   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:56.332508   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:56.332562   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:56.372155   71929 cri.go:89] found id: ""
	I0717 01:59:56.372180   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.372189   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:56.372197   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:56.372255   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:56.415768   71929 cri.go:89] found id: ""
	I0717 01:59:56.415794   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.415806   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:56.415813   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:56.415871   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:56.456920   71929 cri.go:89] found id: ""
	I0717 01:59:56.456951   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.456959   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:56.456968   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:56.456978   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:56.508932   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:56.508965   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:56.522496   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:56.522531   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:56.596839   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:56.596857   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:56.596870   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:56.679237   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:56.679271   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:59.220084   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:59.233108   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:59.233182   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:59.266796   71929 cri.go:89] found id: ""
	I0717 01:59:59.266827   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.266838   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:59.266845   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:59.266909   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:59.297992   71929 cri.go:89] found id: ""
	I0717 01:59:59.298017   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.298026   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:59.298032   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:59.298087   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:59.331953   71929 cri.go:89] found id: ""
	I0717 01:59:59.331982   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.331993   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:59.331999   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:59.332069   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:59.368912   71929 cri.go:89] found id: ""
	I0717 01:59:59.368939   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.368948   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:59.368954   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:59.369002   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:59.402886   71929 cri.go:89] found id: ""
	I0717 01:59:59.402911   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.402920   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:59.402926   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:59.402982   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:59.441227   71929 cri.go:89] found id: ""
	I0717 01:59:59.441249   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.441257   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:59.441263   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:59.441322   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:59.479154   71929 cri.go:89] found id: ""
	I0717 01:59:59.479191   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.479213   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:59.479222   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:59.479286   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:59.516259   71929 cri.go:89] found id: ""
	I0717 01:59:59.516299   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.516309   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:59.516319   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:59.516332   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:59.596352   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:59.596385   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:59.639712   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:59.639744   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:59.691399   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:59.691444   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:59.706618   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:59.706648   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:59.778875   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:00:02.279246   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:02.293212   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:02.293284   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:02.330759   71929 cri.go:89] found id: ""
	I0717 02:00:02.330786   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.330795   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 02:00:02.330800   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:02.330848   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:02.366257   71929 cri.go:89] found id: ""
	I0717 02:00:02.366287   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.366298   71929 logs.go:278] No container was found matching "etcd"
	I0717 02:00:02.366305   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:02.366368   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:02.404321   71929 cri.go:89] found id: ""
	I0717 02:00:02.404348   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.404358   71929 logs.go:278] No container was found matching "coredns"
	I0717 02:00:02.404364   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:02.404432   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:02.444297   71929 cri.go:89] found id: ""
	I0717 02:00:02.444326   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.444342   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 02:00:02.444349   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:02.444406   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:02.478433   71929 cri.go:89] found id: ""
	I0717 02:00:02.478466   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.478477   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 02:00:02.478483   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:02.478530   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:02.515519   71929 cri.go:89] found id: ""
	I0717 02:00:02.515551   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.515560   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 02:00:02.515566   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:02.515618   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:02.551006   71929 cri.go:89] found id: ""
	I0717 02:00:02.551030   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.551038   71929 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:02.551044   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 02:00:02.551110   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 02:00:02.588312   71929 cri.go:89] found id: ""
	I0717 02:00:02.588345   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.588356   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 02:00:02.588367   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:02.588381   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:02.641900   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:02.641932   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:02.656851   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:02.656896   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 02:00:02.728286   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:00:02.728315   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:02.728327   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:02.806807   71929 logs.go:123] Gathering logs for container status ...
	I0717 02:00:02.806847   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:05.355196   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:05.369148   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:05.369231   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:05.405012   71929 cri.go:89] found id: ""
	I0717 02:00:05.405045   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.405057   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 02:00:05.405068   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:05.405132   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:05.450524   71929 cri.go:89] found id: ""
	I0717 02:00:05.450564   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.450575   71929 logs.go:278] No container was found matching "etcd"
	I0717 02:00:05.450582   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:05.450637   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:05.487503   71929 cri.go:89] found id: ""
	I0717 02:00:05.487533   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.487544   71929 logs.go:278] No container was found matching "coredns"
	I0717 02:00:05.487553   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:05.487634   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:05.522607   71929 cri.go:89] found id: ""
	I0717 02:00:05.522635   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.522650   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 02:00:05.522656   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:05.522703   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:05.558091   71929 cri.go:89] found id: ""
	I0717 02:00:05.558120   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.558131   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 02:00:05.558138   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:05.558192   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:05.594540   71929 cri.go:89] found id: ""
	I0717 02:00:05.594587   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.594598   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 02:00:05.594605   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:05.594668   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:05.631783   71929 cri.go:89] found id: ""
	I0717 02:00:05.631807   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.631818   71929 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:05.631825   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 02:00:05.631886   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 02:00:05.667494   71929 cri.go:89] found id: ""
	I0717 02:00:05.667523   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.667532   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 02:00:05.667543   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:05.667559   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:05.681348   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:05.681373   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 02:00:05.747143   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:00:05.747165   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:05.747176   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:05.829639   71929 logs.go:123] Gathering logs for container status ...
	I0717 02:00:05.829674   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:05.881984   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:05.882013   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
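The cycle above is minikube probing for the expected control-plane containers (every crictl query returns an empty list) and then collecting node logs over SSH. A minimal sketch of the same checks run by hand, assuming shell access to the node (for example via minikube ssh) and that crictl is installed there; the commands are the ones already shown in the log:

	# probe for control-plane containers; all of these come back empty in this run
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl ps -a --quiet --name=etcd
	# gather the same logs minikube collects
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400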
	I0717 02:00:08.435873   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:08.449840   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:08.449901   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:08.489613   71929 cri.go:89] found id: ""
	I0717 02:00:08.489663   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.489675   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 02:00:08.489684   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:08.489751   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:08.526604   71929 cri.go:89] found id: ""
	I0717 02:00:08.526635   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.526645   71929 logs.go:278] No container was found matching "etcd"
	I0717 02:00:08.526660   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:08.526717   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:08.563202   71929 cri.go:89] found id: ""
	I0717 02:00:08.563227   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.563234   71929 logs.go:278] No container was found matching "coredns"
	I0717 02:00:08.563240   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:08.563299   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:08.598336   71929 cri.go:89] found id: ""
	I0717 02:00:08.598365   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.598376   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 02:00:08.598383   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:08.598441   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:08.632626   71929 cri.go:89] found id: ""
	I0717 02:00:08.632660   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.632671   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 02:00:08.632678   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:08.632739   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:08.667951   71929 cri.go:89] found id: ""
	I0717 02:00:08.667977   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.667993   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 02:00:08.668001   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:08.668059   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:08.702106   71929 cri.go:89] found id: ""
	I0717 02:00:08.702135   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.702146   71929 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:08.702153   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 02:00:08.702212   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 02:00:08.733469   71929 cri.go:89] found id: ""
	I0717 02:00:08.733491   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.733499   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 02:00:08.733508   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:08.733518   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:08.787930   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:08.787966   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:08.802761   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:08.802795   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 02:00:08.878115   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:00:08.878138   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:08.878149   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:08.962509   71929 logs.go:123] Gathering logs for container status ...
	I0717 02:00:08.962543   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:11.503151   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:11.518019   71929 kubeadm.go:597] duration metric: took 4m3.576613508s to restartPrimaryControlPlane
	W0717 02:00:11.518087   71929 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 02:00:11.518113   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 02:00:11.970514   71929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:00:11.986794   71929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 02:00:11.997382   71929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 02:00:12.006789   71929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 02:00:12.006816   71929 kubeadm.go:157] found existing configuration files:
	
	I0717 02:00:12.006867   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 02:00:12.015864   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 02:00:12.015921   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 02:00:12.025239   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 02:00:12.034315   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 02:00:12.034373   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 02:00:12.043533   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 02:00:12.052344   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 02:00:12.052393   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 02:00:12.061290   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 02:00:12.070311   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 02:00:12.070375   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
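The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed if it does not contain it. Here every grep exits with status 2 simply because the files are already gone after kubeadm reset. A hedged, standalone sketch of that check for one file (the variable name is illustrative, not from the log):

	f=/etc/kubernetes/admin.conf
	if ! sudo grep -q "https://control-plane.minikube.internal:8443" "$f"; then
	  # stale or missing: remove it so the upcoming kubeadm init can rewrite it
	  sudo rm -f "$f"
	fi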
	I0717 02:00:12.080404   71929 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 02:00:12.318084   71929 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 02:02:08.622411   71929 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 02:02:08.622531   71929 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 02:02:08.624111   71929 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 02:02:08.624168   71929 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 02:02:08.624265   71929 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 02:02:08.624391   71929 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 02:02:08.624526   71929 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 02:02:08.624604   71929 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 02:02:08.626394   71929 out.go:204]   - Generating certificates and keys ...
	I0717 02:02:08.626478   71929 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 02:02:08.626574   71929 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 02:02:08.626657   71929 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 02:02:08.626735   71929 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 02:02:08.626830   71929 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 02:02:08.626909   71929 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 02:02:08.627001   71929 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 02:02:08.627095   71929 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 02:02:08.627203   71929 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 02:02:08.627325   71929 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 02:02:08.627392   71929 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 02:02:08.627469   71929 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 02:02:08.627573   71929 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 02:02:08.627663   71929 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 02:02:08.627753   71929 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 02:02:08.627836   71929 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 02:02:08.627997   71929 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 02:02:08.628107   71929 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 02:02:08.628179   71929 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 02:02:08.628272   71929 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 02:02:08.630262   71929 out.go:204]   - Booting up control plane ...
	I0717 02:02:08.630372   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 02:02:08.630477   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 02:02:08.630594   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 02:02:08.630729   71929 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 02:02:08.630960   71929 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 02:02:08.631020   71929 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 02:02:08.631099   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.631293   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.631394   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.631648   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.631748   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.631925   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.632050   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.632253   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.632327   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.632528   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.632546   71929 kubeadm.go:310] 
	I0717 02:02:08.632611   71929 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 02:02:08.632671   71929 kubeadm.go:310] 		timed out waiting for the condition
	I0717 02:02:08.632689   71929 kubeadm.go:310] 
	I0717 02:02:08.632729   71929 kubeadm.go:310] 	This error is likely caused by:
	I0717 02:02:08.632772   71929 kubeadm.go:310] 		- The kubelet is not running
	I0717 02:02:08.632902   71929 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 02:02:08.632914   71929 kubeadm.go:310] 
	I0717 02:02:08.633001   71929 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 02:02:08.633030   71929 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 02:02:08.633075   71929 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 02:02:08.633092   71929 kubeadm.go:310] 
	I0717 02:02:08.633204   71929 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 02:02:08.633281   71929 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 02:02:08.633306   71929 kubeadm.go:310] 
	I0717 02:02:08.633450   71929 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 02:02:08.633535   71929 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 02:02:08.633597   71929 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 02:02:08.633668   71929 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 02:02:08.633697   71929 kubeadm.go:310] 
	W0717 02:02:08.633780   71929 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
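At this point the first kubeadm init attempt has failed and minikube resets the node and retries. The kubeadm message above already names the standard checks for a kubelet that never became healthy; consolidated here as a sketch, assuming the commands are run on the affected node with CRI-O as the runtime:

	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# list Kubernetes containers under CRI-O and inspect whichever one is failing
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # CONTAINERID is a placeholder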
	
	I0717 02:02:08.633821   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 02:02:09.101394   71929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:02:09.119918   71929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 02:02:09.130974   71929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 02:02:09.131002   71929 kubeadm.go:157] found existing configuration files:
	
	I0717 02:02:09.131046   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 02:02:09.142720   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 02:02:09.142790   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 02:02:09.154990   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 02:02:09.166317   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 02:02:09.166379   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 02:02:09.176756   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 02:02:09.186639   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 02:02:09.186697   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 02:02:09.196778   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 02:02:09.206420   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 02:02:09.206469   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 02:02:09.216325   71929 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 02:02:09.293311   71929 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 02:02:09.293457   71929 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 02:02:09.442386   71929 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 02:02:09.442594   71929 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 02:02:09.442736   71929 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 02:02:09.618387   71929 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 02:02:09.620394   71929 out.go:204]   - Generating certificates and keys ...
	I0717 02:02:09.620496   71929 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 02:02:09.620593   71929 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 02:02:09.620691   71929 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 02:02:09.620791   71929 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 02:02:09.620909   71929 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 02:02:09.621004   71929 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 02:02:09.621117   71929 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 02:02:09.621364   71929 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 02:02:09.621778   71929 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 02:02:09.622072   71929 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 02:02:09.622135   71929 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 02:02:09.622225   71929 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 02:02:09.990964   71929 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 02:02:10.434990   71929 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 02:02:10.579785   71929 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 02:02:10.723319   71929 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 02:02:10.746923   71929 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 02:02:10.748370   71929 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 02:02:10.748460   71929 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 02:02:10.888855   71929 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 02:02:10.890727   71929 out.go:204]   - Booting up control plane ...
	I0717 02:02:10.890860   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 02:02:10.893530   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 02:02:10.894934   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 02:02:10.896825   71929 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 02:02:10.899127   71929 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 02:02:50.900829   71929 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 02:02:50.901350   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:50.901626   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:55.902558   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:55.902805   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:03:05.903753   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:03:05.904033   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:03:25.905383   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:03:25.905597   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:04:05.906576   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:04:05.906960   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:04:05.906992   71929 kubeadm.go:310] 
	I0717 02:04:05.907049   71929 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 02:04:05.907133   71929 kubeadm.go:310] 		timed out waiting for the condition
	I0717 02:04:05.907182   71929 kubeadm.go:310] 
	I0717 02:04:05.907252   71929 kubeadm.go:310] 	This error is likely caused by:
	I0717 02:04:05.907339   71929 kubeadm.go:310] 		- The kubelet is not running
	I0717 02:04:05.907516   71929 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 02:04:05.907529   71929 kubeadm.go:310] 
	I0717 02:04:05.907661   71929 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 02:04:05.907699   71929 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 02:04:05.907743   71929 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 02:04:05.907751   71929 kubeadm.go:310] 
	I0717 02:04:05.907907   71929 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 02:04:05.908043   71929 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 02:04:05.908053   71929 kubeadm.go:310] 
	I0717 02:04:05.908221   71929 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 02:04:05.908435   71929 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 02:04:05.908619   71929 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 02:04:05.908738   71929 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 02:04:05.908788   71929 kubeadm.go:310] 
	I0717 02:04:05.909079   71929 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 02:04:05.909286   71929 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 02:04:05.909452   71929 kubeadm.go:394] duration metric: took 7m58.01930975s to StartCluster
	I0717 02:04:05.909455   71929 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
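The repeated [kubelet-check] failures above are kubeadm probing the kubelet's healthz endpoint on port 10248; the probe is equivalent to the command quoted in the log, run on the node itself:

	curl -sSL http://localhost:10248/healthz
	# "connection refused" here means the kubelet process never came up, which is also
	# why every attempt to reach the apiserver on localhost:8443 (the "describe nodes"
	# calls above) is refused.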
	I0717 02:04:05.909494   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:04:05.909552   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:04:05.952911   71929 cri.go:89] found id: ""
	I0717 02:04:05.952937   71929 logs.go:276] 0 containers: []
	W0717 02:04:05.952949   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 02:04:05.952957   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:04:05.953026   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:04:05.988490   71929 cri.go:89] found id: ""
	I0717 02:04:05.988518   71929 logs.go:276] 0 containers: []
	W0717 02:04:05.988529   71929 logs.go:278] No container was found matching "etcd"
	I0717 02:04:05.988537   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:04:05.988593   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:04:06.025228   71929 cri.go:89] found id: ""
	I0717 02:04:06.025259   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.025269   71929 logs.go:278] No container was found matching "coredns"
	I0717 02:04:06.025277   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:04:06.025342   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:04:06.060563   71929 cri.go:89] found id: ""
	I0717 02:04:06.060589   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.060599   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 02:04:06.060604   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:04:06.060660   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:04:06.095051   71929 cri.go:89] found id: ""
	I0717 02:04:06.095079   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.095091   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 02:04:06.095099   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:04:06.095150   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:04:06.131892   71929 cri.go:89] found id: ""
	I0717 02:04:06.131914   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.131921   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 02:04:06.131927   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:04:06.131973   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:04:06.168893   71929 cri.go:89] found id: ""
	I0717 02:04:06.168919   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.168930   71929 logs.go:278] No container was found matching "kindnet"
	I0717 02:04:06.168937   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 02:04:06.168995   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 02:04:06.206635   71929 cri.go:89] found id: ""
	I0717 02:04:06.206658   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.206668   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 02:04:06.206679   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:04:06.206693   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 02:04:06.308601   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:04:06.308624   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:04:06.308637   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:04:06.422081   71929 logs.go:123] Gathering logs for container status ...
	I0717 02:04:06.422116   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:04:06.467466   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 02:04:06.467496   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:04:06.521420   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 02:04:06.521457   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
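The block that follows is minikube's final error summary for this start attempt. To reproduce the failing step interactively, the same init command could be re-run by hand with the extra verbosity the error message asks for; this is a sketch assembled from the command and the "--v=5 or higher" hint already present in the log, not a documented minikube workflow:

	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	  kubeadm init --v=5 --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem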
	W0717 02:04:06.535167   71929 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 02:04:06.535211   71929 out.go:239] * 
	* 
	W0717 02:04:06.535263   71929 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 02:04:06.535292   71929 out.go:239] * 
	* 
	W0717 02:04:06.536098   71929 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 02:04:06.539314   71929 out.go:177] 
	W0717 02:04:06.540504   71929 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 02:04:06.540557   71929 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 02:04:06.540579   71929 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 02:04:06.541888   71929 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-901761 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
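The stderr above ends with minikube's own remediation hint for K8S_KUBELET_NOT_RUNNING: retry the start with the kubelet cgroup driver pinned to systemd. A sketch of that retry, reusing the arguments from the failing invocation; whether it actually clears the failure on v1.20.0 is an assumption this run does not verify:

	out/minikube-linux-amd64 start -p old-k8s-version-901761 --memory=2200 \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd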
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-901761 -n old-k8s-version-901761
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-901761 -n old-k8s-version-901761: exit status 2 (225.827646ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-901761 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-901761 logs -n 25: (1.622369527s)
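The warning box in the log also asks reporters to attach a full log file rather than the 25-line excerpt captured below; that file would be produced with something along the lines of (profile name taken from this test):

	out/minikube-linux-amd64 -p old-k8s-version-901761 logs --file=logs.txt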
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-894370 sudo cat                              | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo                                  | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo                                  | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo                                  | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo find                             | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo crio                             | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-894370                                       | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	| delete  | -p                                                     | disable-driver-mounts-255698 | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | disable-driver-mounts-255698                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:48 UTC |
	|         | default-k8s-diff-port-738184                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-940222            | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-940222                                  | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-738184  | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC | 17 Jul 24 01:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC |                     |
	|         | default-k8s-diff-port-738184                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-391501             | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC | 17 Jul 24 01:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-391501                                   | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-940222                 | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-901761        | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-940222                                  | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC | 17 Jul 24 02:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-738184       | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-391501                  | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:59 UTC |
	|         | default-k8s-diff-port-738184                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p no-preload-391501 --memory=2200                     | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 02:02 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-901761                              | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-901761             | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-901761                              | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 01:51:47
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 01:51:47.395737   71929 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:51:47.396000   71929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:51:47.396010   71929 out.go:304] Setting ErrFile to fd 2...
	I0717 01:51:47.396016   71929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:51:47.396184   71929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 01:51:47.396684   71929 out.go:298] Setting JSON to false
	I0717 01:51:47.397549   71929 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5649,"bootTime":1721175458,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:51:47.397606   71929 start.go:139] virtualization: kvm guest
	I0717 01:51:47.399758   71929 out.go:177] * [old-k8s-version-901761] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:51:47.400960   71929 notify.go:220] Checking for updates...
	I0717 01:51:47.400966   71929 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 01:51:47.402266   71929 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:51:47.403356   71929 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:51:47.404532   71929 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 01:51:47.405524   71929 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:51:47.406572   71929 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:51:47.407935   71929 config.go:182] Loaded profile config "old-k8s-version-901761": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 01:51:47.408358   71929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:51:47.408427   71929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:51:47.422931   71929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46821
	I0717 01:51:47.423315   71929 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:51:47.423809   71929 main.go:141] libmachine: Using API Version  1
	I0717 01:51:47.423831   71929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:51:47.424123   71929 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:51:47.424259   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:51:47.426227   71929 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0717 01:51:47.427500   71929 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:51:47.427770   71929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:51:47.427801   71929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:51:47.442080   71929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36301
	I0717 01:51:47.442438   71929 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:51:47.442901   71929 main.go:141] libmachine: Using API Version  1
	I0717 01:51:47.442924   71929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:51:47.443208   71929 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:51:47.443382   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:51:47.476327   71929 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 01:51:47.477607   71929 start.go:297] selected driver: kvm2
	I0717 01:51:47.477620   71929 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:51:47.477762   71929 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:51:47.478432   71929 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:51:47.478541   71929 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19264-3908/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:51:47.493611   71929 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:51:47.493967   71929 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:51:47.494039   71929 cni.go:84] Creating CNI manager for ""
	I0717 01:51:47.494056   71929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:51:47.494147   71929 start.go:340] cluster config:
	{Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:51:47.494271   71929 iso.go:125] acquiring lock: {Name:mk1e382ea32906eee794c9b90164432d2562b456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:51:47.496056   71929 out.go:177] * Starting "old-k8s-version-901761" primary control-plane node in "old-k8s-version-901761" cluster
	I0717 01:51:45.178864   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:51:47.497229   71929 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 01:51:47.497266   71929 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 01:51:47.497279   71929 cache.go:56] Caching tarball of preloaded images
	I0717 01:51:47.497368   71929 preload.go:172] Found /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 01:51:47.497379   71929 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0717 01:51:47.497484   71929 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/config.json ...
	I0717 01:51:47.497671   71929 start.go:360] acquireMachinesLock for old-k8s-version-901761: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:51:51.258826   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:51:54.330879   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:00.410811   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:03.482811   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:09.562828   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:12.634828   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:18.714910   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:21.786892   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:27.866863   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:30.938805   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:37.022827   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:40.090853   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:46.170839   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:49.242854   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:55.322824   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:58.394792   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:04.474811   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:07.546855   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:13.626861   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:16.698832   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:22.778828   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:25.850864   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:31.930814   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:35.002842   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:41.082839   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:44.154796   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:50.234823   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:53.306914   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:59.386835   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:02.458751   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:08.538853   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:11.610833   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:17.690816   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:20.762793   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:26.842837   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:29.914866   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:35.994838   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:39.066806   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:45.146846   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:48.218841   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:54.298823   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:57.370838   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
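The long run of "no route to host" errors above is libmachine (process 71146) repeatedly failing to reach the embed-certs-940222 guest on SSH port 22 at 192.168.72.225. Outside the harness, the usual first checks on the KVM host would be roughly the following; the domain name matches the profile by minikube's kvm2 convention, which is assumed here rather than taken from this log:

	virsh list --all                    # is the embed-certs-940222 domain actually running?
	virsh domifaddr embed-certs-940222  # which IP, if any, the guest currently holds
	nc -vz 192.168.72.225 22            # is anything listening on the SSH port?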
	I0717 01:55:00.375050   71522 start.go:364] duration metric: took 3m54.700923144s to acquireMachinesLock for "default-k8s-diff-port-738184"
	I0717 01:55:00.375103   71522 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:55:00.375110   71522 fix.go:54] fixHost starting: 
	I0717 01:55:00.375500   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:00.375532   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:00.390583   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39651
	I0717 01:55:00.390957   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:00.391392   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:00.391412   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:00.391704   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:00.391927   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:00.392069   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:00.393467   71522 fix.go:112] recreateIfNeeded on default-k8s-diff-port-738184: state=Stopped err=<nil>
	I0717 01:55:00.393508   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	W0717 01:55:00.393658   71522 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:55:00.395826   71522 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-738184" ...
	I0717 01:55:00.397256   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Start
	I0717 01:55:00.397401   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Ensuring networks are active...
	I0717 01:55:00.398079   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Ensuring network default is active
	I0717 01:55:00.398390   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Ensuring network mk-default-k8s-diff-port-738184 is active
	I0717 01:55:00.398710   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Getting domain xml...
	I0717 01:55:00.399275   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Creating domain...
	I0717 01:55:00.372573   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:55:00.372621   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:55:00.372933   71146 buildroot.go:166] provisioning hostname "embed-certs-940222"
	I0717 01:55:00.372957   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:55:00.373131   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:55:00.374934   71146 machine.go:97] duration metric: took 4m37.428393808s to provisionDockerMachine
	I0717 01:55:00.374969   71146 fix.go:56] duration metric: took 4m37.449104762s for fixHost
	I0717 01:55:00.374974   71146 start.go:83] releasing machines lock for "embed-certs-940222", held for 4m37.449121677s
	W0717 01:55:00.374996   71146 start.go:714] error starting host: provision: host is not running
	W0717 01:55:00.375080   71146 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 01:55:00.375088   71146 start.go:729] Will try again in 5 seconds ...
	I0717 01:55:01.590292   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting to get IP...
	I0717 01:55:01.591187   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:01.591589   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:01.591657   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:01.591578   72583 retry.go:31] will retry after 266.165899ms: waiting for machine to come up
	I0717 01:55:01.859307   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:01.859724   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:01.859751   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:01.859695   72583 retry.go:31] will retry after 282.941451ms: waiting for machine to come up
	I0717 01:55:02.144389   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:02.144756   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:02.144787   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:02.144701   72583 retry.go:31] will retry after 327.203414ms: waiting for machine to come up
	I0717 01:55:02.473217   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:02.473681   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:02.473705   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:02.473606   72583 retry.go:31] will retry after 553.917043ms: waiting for machine to come up
	I0717 01:55:03.029379   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:03.029762   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:03.029783   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:03.029738   72583 retry.go:31] will retry after 617.312209ms: waiting for machine to come up
	I0717 01:55:03.648372   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:03.648701   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:03.648733   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:03.648670   72583 retry.go:31] will retry after 641.28503ms: waiting for machine to come up
	I0717 01:55:04.291493   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:04.291986   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:04.292019   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:04.291870   72583 retry.go:31] will retry after 1.133455116s: waiting for machine to come up
	I0717 01:55:05.426672   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:05.426943   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:05.426972   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:05.426892   72583 retry.go:31] will retry after 1.00384113s: waiting for machine to come up
	I0717 01:55:05.376907   71146 start.go:360] acquireMachinesLock for embed-certs-940222: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:55:06.432146   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:06.432502   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:06.432525   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:06.432477   72583 retry.go:31] will retry after 1.472142907s: waiting for machine to come up
	I0717 01:55:07.906974   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:07.907407   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:07.907437   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:07.907336   72583 retry.go:31] will retry after 1.775986179s: waiting for machine to come up
	I0717 01:55:09.685396   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:09.685792   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:09.685822   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:09.685756   72583 retry.go:31] will retry after 2.663700716s: waiting for machine to come up
	I0717 01:55:12.351616   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:12.351985   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:12.352017   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:12.351921   72583 retry.go:31] will retry after 2.409004894s: waiting for machine to come up
	I0717 01:55:14.763493   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:14.763859   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:14.763876   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:14.763828   72583 retry.go:31] will retry after 3.049843419s: waiting for machine to come up
	I0717 01:55:19.031713   71603 start.go:364] duration metric: took 4m8.751453112s to acquireMachinesLock for "no-preload-391501"
	I0717 01:55:19.031779   71603 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:55:19.031787   71603 fix.go:54] fixHost starting: 
	I0717 01:55:19.032306   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:19.032352   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:19.049376   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41829
	I0717 01:55:19.049877   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:19.050387   71603 main.go:141] libmachine: Using API Version  1
	I0717 01:55:19.050409   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:19.050752   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:19.050935   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:19.051104   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 01:55:19.052805   71603 fix.go:112] recreateIfNeeded on no-preload-391501: state=Stopped err=<nil>
	I0717 01:55:19.052832   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	W0717 01:55:19.052989   71603 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:55:19.056667   71603 out.go:177] * Restarting existing kvm2 VM for "no-preload-391501" ...
	I0717 01:55:19.058078   71603 main.go:141] libmachine: (no-preload-391501) Calling .Start
	I0717 01:55:19.058314   71603 main.go:141] libmachine: (no-preload-391501) Ensuring networks are active...
	I0717 01:55:19.059126   71603 main.go:141] libmachine: (no-preload-391501) Ensuring network default is active
	I0717 01:55:19.059466   71603 main.go:141] libmachine: (no-preload-391501) Ensuring network mk-no-preload-391501 is active
	I0717 01:55:19.059958   71603 main.go:141] libmachine: (no-preload-391501) Getting domain xml...
	I0717 01:55:19.060746   71603 main.go:141] libmachine: (no-preload-391501) Creating domain...
	I0717 01:55:17.816307   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.816746   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Found IP for machine: 192.168.39.170
	I0717 01:55:17.816765   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Reserving static IP address...
	I0717 01:55:17.816776   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has current primary IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.817337   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Reserved static IP address: 192.168.39.170
	I0717 01:55:17.817366   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for SSH to be available...
	I0717 01:55:17.817389   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-738184", mac: "52:54:00:e6:fe:fe", ip: "192.168.39.170"} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:17.817420   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | skip adding static IP to network mk-default-k8s-diff-port-738184 - found existing host DHCP lease matching {name: "default-k8s-diff-port-738184", mac: "52:54:00:e6:fe:fe", ip: "192.168.39.170"}
	I0717 01:55:17.817443   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Getting to WaitForSSH function...
	I0717 01:55:17.819693   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.820022   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:17.820056   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.820171   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Using SSH client type: external
	I0717 01:55:17.820203   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa (-rw-------)
	I0717 01:55:17.820245   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.170 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:55:17.820259   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | About to run SSH command:
	I0717 01:55:17.820280   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | exit 0
	I0717 01:55:17.942987   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | SSH cmd err, output: <nil>: 
	I0717 01:55:17.943370   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetConfigRaw
	I0717 01:55:17.943945   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetIP
	I0717 01:55:17.946638   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.946993   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:17.947021   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.947268   71522 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/config.json ...
	I0717 01:55:17.947479   71522 machine.go:94] provisionDockerMachine start ...
	I0717 01:55:17.947497   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:17.947732   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:17.950032   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.950367   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:17.950397   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.950489   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:17.950664   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:17.950827   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:17.950959   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:17.951108   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:17.951300   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:17.951311   71522 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:55:18.051147   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:55:18.051180   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetMachineName
	I0717 01:55:18.051421   71522 buildroot.go:166] provisioning hostname "default-k8s-diff-port-738184"
	I0717 01:55:18.051456   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetMachineName
	I0717 01:55:18.051655   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.054480   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.055024   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.055053   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.055262   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.055473   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.055643   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.055783   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.055928   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:18.056077   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:18.056089   71522 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-738184 && echo "default-k8s-diff-port-738184" | sudo tee /etc/hostname
	I0717 01:55:18.170268   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-738184
	
	I0717 01:55:18.170299   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.173037   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.173337   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.173369   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.173485   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.173673   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.173851   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.173957   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.174110   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:18.174322   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:18.174349   71522 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-738184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-738184/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-738184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:55:18.279963   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:55:18.279997   71522 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:55:18.280030   71522 buildroot.go:174] setting up certificates
	I0717 01:55:18.280042   71522 provision.go:84] configureAuth start
	I0717 01:55:18.280054   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetMachineName
	I0717 01:55:18.280393   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetIP
	I0717 01:55:18.282887   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.283201   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.283231   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.283370   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.285399   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.285662   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.285691   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.285795   71522 provision.go:143] copyHostCerts
	I0717 01:55:18.285865   71522 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:55:18.285884   71522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:55:18.285971   71522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:55:18.286084   71522 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:55:18.286095   71522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:55:18.286129   71522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:55:18.286205   71522 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:55:18.286214   71522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:55:18.286247   71522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:55:18.286313   71522 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-738184 san=[127.0.0.1 192.168.39.170 default-k8s-diff-port-738184 localhost minikube]
	I0717 01:55:18.386547   71522 provision.go:177] copyRemoteCerts
	I0717 01:55:18.386627   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:55:18.386658   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.388930   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.389292   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.389322   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.389465   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.389662   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.389804   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.389944   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:18.469031   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:55:18.493607   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0717 01:55:18.517024   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 01:55:18.539757   71522 provision.go:87] duration metric: took 259.702663ms to configureAuth
	I0717 01:55:18.539793   71522 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:55:18.540064   71522 config.go:182] Loaded profile config "default-k8s-diff-port-738184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:55:18.540178   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.542831   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.543174   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.543196   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.543388   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.543599   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.543843   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.544011   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.544172   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:18.544343   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:18.544362   71522 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:55:18.804633   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:55:18.804690   71522 machine.go:97] duration metric: took 857.197634ms to provisionDockerMachine
	I0717 01:55:18.804706   71522 start.go:293] postStartSetup for "default-k8s-diff-port-738184" (driver="kvm2")
	I0717 01:55:18.804720   71522 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:55:18.804743   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:18.805049   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:55:18.805073   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.807835   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.808127   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.808147   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.808319   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.808497   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.808670   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.808823   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:18.889297   71522 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:55:18.893587   71522 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:55:18.893615   71522 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:55:18.893694   71522 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:55:18.893779   71522 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:55:18.893886   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:55:18.903319   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:18.927700   71522 start.go:296] duration metric: took 122.979492ms for postStartSetup
	I0717 01:55:18.927748   71522 fix.go:56] duration metric: took 18.552636525s for fixHost
	I0717 01:55:18.927775   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.930483   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.930768   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.930791   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.931004   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.931192   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.931361   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.931511   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.931677   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:18.931873   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:18.931887   71522 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:55:19.031515   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181319.004563133
	
	I0717 01:55:19.031541   71522 fix.go:216] guest clock: 1721181319.004563133
	I0717 01:55:19.031552   71522 fix.go:229] Guest: 2024-07-17 01:55:19.004563133 +0000 UTC Remote: 2024-07-17 01:55:18.927754613 +0000 UTC m=+253.390645105 (delta=76.80852ms)
	I0717 01:55:19.031611   71522 fix.go:200] guest clock delta is within tolerance: 76.80852ms
	I0717 01:55:19.031623   71522 start.go:83] releasing machines lock for "default-k8s-diff-port-738184", held for 18.656540342s
	I0717 01:55:19.031661   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:19.031940   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetIP
	I0717 01:55:19.034537   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.034881   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:19.034911   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.035036   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:19.035557   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:19.035750   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:19.035822   71522 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:55:19.035875   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:19.036000   71522 ssh_runner.go:195] Run: cat /version.json
	I0717 01:55:19.036027   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:19.038509   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.038860   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:19.038892   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.038935   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.038982   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:19.039156   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:19.039328   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:19.039361   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:19.039389   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.039488   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:19.039537   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:19.039702   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:19.039835   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:19.040047   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:19.140208   71522 ssh_runner.go:195] Run: systemctl --version
	I0717 01:55:19.146454   71522 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:55:19.293584   71522 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:55:19.300750   71522 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:55:19.300817   71522 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:55:19.321596   71522 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:55:19.321621   71522 start.go:495] detecting cgroup driver to use...
	I0717 01:55:19.321684   71522 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:55:19.337664   71522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:55:19.351856   71522 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:55:19.351922   71522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:55:19.366355   71522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:55:19.380735   71522 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:55:19.495916   71522 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:55:19.646426   71522 docker.go:233] disabling docker service ...
	I0717 01:55:19.646501   71522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:55:19.665764   71522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:55:19.683893   71522 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:55:19.814704   71522 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:55:19.958389   71522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:55:19.973223   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:55:19.992869   71522 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 01:55:19.992937   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.003696   71522 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:55:20.003762   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.014415   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.025303   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.036715   71522 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:55:20.047872   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.059666   71522 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.079479   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.092424   71522 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:55:20.103225   71522 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:55:20.103284   71522 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:55:20.120620   71522 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:55:20.136439   71522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:20.284796   71522 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:55:20.427605   71522 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:55:20.427698   71522 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:55:20.433477   71522 start.go:563] Will wait 60s for crictl version
	I0717 01:55:20.433537   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:55:20.437399   71522 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:55:20.479192   71522 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:55:20.479289   71522 ssh_runner.go:195] Run: crio --version
	I0717 01:55:20.507655   71522 ssh_runner.go:195] Run: crio --version
	I0717 01:55:20.537084   71522 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 01:55:20.538435   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetIP
	I0717 01:55:20.541200   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:20.541493   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:20.541531   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:20.541772   71522 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 01:55:20.546261   71522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:55:20.559802   71522 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-738184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-738184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:55:20.559946   71522 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:55:20.560001   71522 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:55:20.381503   71603 main.go:141] libmachine: (no-preload-391501) Waiting to get IP...
	I0717 01:55:20.382632   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:20.383105   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:20.383210   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:20.383077   72724 retry.go:31] will retry after 193.198351ms: waiting for machine to come up
	I0717 01:55:20.577611   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:20.578117   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:20.578145   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:20.578067   72724 retry.go:31] will retry after 254.406992ms: waiting for machine to come up
	I0717 01:55:20.834633   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:20.835088   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:20.835116   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:20.835057   72724 retry.go:31] will retry after 459.446617ms: waiting for machine to come up
	I0717 01:55:21.295939   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:21.296384   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:21.296409   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:21.296343   72724 retry.go:31] will retry after 515.654185ms: waiting for machine to come up
	I0717 01:55:21.813613   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:21.814140   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:21.814178   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:21.814104   72724 retry.go:31] will retry after 652.322198ms: waiting for machine to come up
	I0717 01:55:22.468223   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:22.468858   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:22.468897   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:22.468774   72724 retry.go:31] will retry after 767.220835ms: waiting for machine to come up
	I0717 01:55:23.237341   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:23.237685   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:23.237716   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:23.237633   72724 retry.go:31] will retry after 1.083873631s: waiting for machine to come up
	I0717 01:55:24.323463   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:24.323983   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:24.324011   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:24.323934   72724 retry.go:31] will retry after 1.255667305s: waiting for machine to come up
	I0717 01:55:20.597329   71522 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 01:55:20.597409   71522 ssh_runner.go:195] Run: which lz4
	I0717 01:55:20.602100   71522 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 01:55:20.606863   71522 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:55:20.606900   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 01:55:22.053002   71522 crio.go:462] duration metric: took 1.450939378s to copy over tarball
	I0717 01:55:22.053071   71522 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:55:24.356349   71522 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.303245698s)
	I0717 01:55:24.356378   71522 crio.go:469] duration metric: took 2.303353381s to extract the tarball
	I0717 01:55:24.356385   71522 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 01:55:24.402866   71522 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:55:24.446681   71522 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:55:24.446709   71522 cache_images.go:84] Images are preloaded, skipping loading
	I0717 01:55:24.446720   71522 kubeadm.go:934] updating node { 192.168.39.170 8444 v1.30.2 crio true true} ...
	I0717 01:55:24.446844   71522 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-738184 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-738184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:55:24.446931   71522 ssh_runner.go:195] Run: crio config
	I0717 01:55:24.499717   71522 cni.go:84] Creating CNI manager for ""
	I0717 01:55:24.499744   71522 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:55:24.499759   71522 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:55:24.499787   71522 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.170 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-738184 NodeName:default-k8s-diff-port-738184 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:55:24.499965   71522 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.170
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-738184"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:55:24.500039   71522 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 01:55:24.510488   71522 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:55:24.510568   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:55:24.520830   71522 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0717 01:55:24.538018   71522 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:55:24.556287   71522 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0717 01:55:24.574973   71522 ssh_runner.go:195] Run: grep 192.168.39.170	control-plane.minikube.internal$ /etc/hosts
	I0717 01:55:24.579058   71522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.170	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:55:24.591752   71522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:24.712285   71522 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:55:24.729387   71522 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184 for IP: 192.168.39.170
	I0717 01:55:24.729411   71522 certs.go:194] generating shared ca certs ...
	I0717 01:55:24.729432   71522 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:55:24.729596   71522 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:55:24.729650   71522 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:55:24.729662   71522 certs.go:256] generating profile certs ...
	I0717 01:55:24.729776   71522 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/client.key
	I0717 01:55:24.729847   71522 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/apiserver.key.44902a6f
	I0717 01:55:24.729907   71522 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/proxy-client.key
	I0717 01:55:24.730044   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:55:24.730086   71522 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:55:24.730099   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:55:24.730135   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:55:24.730183   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:55:24.730222   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:55:24.730277   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:24.731142   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:55:24.762240   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:55:24.788746   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:55:24.825379   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:55:24.853821   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 01:55:24.887105   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:55:24.910834   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:55:24.934566   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 01:55:24.959709   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:55:24.983722   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:55:25.007312   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:55:25.031576   71522 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:55:25.049348   71522 ssh_runner.go:195] Run: openssl version
	I0717 01:55:25.055410   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:55:25.066104   71522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:55:25.070616   71522 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:55:25.070675   71522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:55:25.076604   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:55:25.087284   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:55:25.098383   71522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:55:25.103262   71522 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:55:25.103331   71522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:55:25.109170   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:55:25.119940   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:55:25.130829   71522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:25.135659   71522 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:25.135734   71522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:25.141583   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:55:25.152770   71522 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:55:25.157395   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:55:25.163543   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:55:25.169580   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:55:25.175754   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:55:25.181771   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:55:25.187935   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
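The series of openssl runs above all use the same idiom: openssl x509 -noout -in <cert> -checkend 86400 exits non-zero when the certificate expires within the next 24 hours (or cannot be read). A minimal sketch of that check over a list of control-plane certs, with paths taken from the log and the helper itself purely illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// expiresWithinADay runs `openssl x509 -checkend 86400`, which returns a
// non-zero exit status when the certificate expires within 86400 seconds
// (a failed run, e.g. missing file, is treated the same way here).
func expiresWithinADay(certPath string) bool {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400")
	return cmd.Run() != nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		if expiresWithinADay(c) {
			fmt.Println("needs renewal within 24h (or check failed):", c)
		} else {
			fmt.Println("valid for at least 24h:", c)
		}
	}
}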
	I0717 01:55:25.193614   71522 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-738184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-738184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:55:25.193727   71522 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:55:25.193770   71522 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:55:25.230871   71522 cri.go:89] found id: ""
	I0717 01:55:25.230954   71522 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:55:25.241336   71522 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:55:25.241357   71522 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:55:25.241410   71522 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:55:25.251637   71522 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:55:25.253030   71522 kubeconfig.go:125] found "default-k8s-diff-port-738184" server: "https://192.168.39.170:8444"
	I0717 01:55:25.255926   71522 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:55:25.265878   71522 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.170
	I0717 01:55:25.265915   71522 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:55:25.265927   71522 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:55:25.265982   71522 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:55:25.305929   71522 cri.go:89] found id: ""
	I0717 01:55:25.306015   71522 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:55:25.322581   71522 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:55:25.332334   71522 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:55:25.332356   71522 kubeadm.go:157] found existing configuration files:
	
	I0717 01:55:25.332407   71522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 01:55:25.342132   71522 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:55:25.342193   71522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:55:25.351628   71522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 01:55:25.360765   71522 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:55:25.360833   71522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:55:25.370167   71522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 01:55:25.379057   71522 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:55:25.379124   71522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:55:25.389470   71522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 01:55:25.399142   71522 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:55:25.399210   71522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:55:25.409452   71522 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:55:25.421509   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:25.545698   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:25.580838   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:25.581295   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:25.581322   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:25.581247   72724 retry.go:31] will retry after 1.354947672s: waiting for machine to come up
	I0717 01:55:26.937260   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:26.937746   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:26.937774   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:26.937696   72724 retry.go:31] will retry after 1.818074273s: waiting for machine to come up
	I0717 01:55:28.758015   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:28.758489   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:28.758517   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:28.758449   72724 retry.go:31] will retry after 2.782465023s: waiting for machine to come up
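The "waiting for machine to come up" lines follow a retry-with-growing-backoff pattern: each failed IP lookup schedules another attempt after a longer, slightly jittered delay. A small self-contained sketch of that pattern; the probe function and the limits are illustrative, not libmachine's actual API:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls probe until it succeeds, sleeping a growing, jittered delay
// between attempts, and gives up after maxAttempts.
func retry(probe func() error, maxAttempts int) error {
	delay := time.Second
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err := probe(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		wait := delay + jitter
		fmt.Printf("attempt %d failed, will retry after %v\n", attempt, wait)
		time.Sleep(wait)
		delay *= 2
	}
	return errors.New("gave up waiting for machine to come up")
}

func main() {
	attempts := 0
	err := retry(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("no IP address yet") // stand-in for the DHCP lease lookup
		}
		return nil
	}, 10)
	fmt.Println("result:", err)
}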
	I0717 01:55:26.599380   71522 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.053644988s)
	I0717 01:55:26.599416   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:26.807765   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:26.878767   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:26.965940   71522 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:55:26.966023   71522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:55:27.466587   71522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:55:27.966138   71522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:55:27.983649   71522 api_server.go:72] duration metric: took 1.017709312s to wait for apiserver process to appear ...
	I0717 01:55:27.983678   71522 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:55:27.983701   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:27.984214   71522 api_server.go:269] stopped: https://192.168.39.170:8444/healthz: Get "https://192.168.39.170:8444/healthz": dial tcp 192.168.39.170:8444: connect: connection refused
	I0717 01:55:28.483780   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:30.862416   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:55:30.862464   71522 api_server.go:103] status: https://192.168.39.170:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:55:30.862479   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:30.869667   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:55:30.869718   71522 api_server.go:103] status: https://192.168.39.170:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:55:30.983899   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:30.988670   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:55:30.988704   71522 api_server.go:103] status: https://192.168.39.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:55:31.484233   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:31.488939   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:55:31.488978   71522 api_server.go:103] status: https://192.168.39.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:55:31.984611   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:31.988738   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 200:
	ok
	I0717 01:55:31.996182   71522 api_server.go:141] control plane version: v1.30.2
	I0717 01:55:31.996207   71522 api_server.go:131] duration metric: took 4.012523131s to wait for apiserver health ...
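The healthz loop above simply polls https://192.168.39.170:8444/healthz until it answers 200, tolerating the 403 (anonymous access before RBAC bootstrap) and 500 (post-start hooks still running) responses seen along the way. A minimal sketch of that polling loop, assuming anonymous HTTPS access with certificate verification skipped (minikube itself trusts the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves a cert signed by the cluster CA; skipping
		// verification keeps this sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Println("healthz not ready yet, status:", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.170:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}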
	I0717 01:55:31.996216   71522 cni.go:84] Creating CNI manager for ""
	I0717 01:55:31.996222   71522 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:55:31.998122   71522 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:55:31.999536   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:55:32.010501   71522 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
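Configuring the bridge CNI comes down to dropping a single conflist into /etc/cni/net.d. The exact 496-byte file minikube writes is not shown in the log; the sketch below writes an illustrative bridge + host-local conflist for the 10.244.0.0/16 pod CIDR used above, and every field value should be read as an assumption rather than the actual file contents:

package main

import (
	"fmt"
	"os"
)

// An illustrative bridge + host-local IPAM conflist; the real file written by
// minikube may differ in name, version and fields.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println(err)
	}
}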
	I0717 01:55:32.030227   71522 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:55:32.039923   71522 system_pods.go:59] 9 kube-system pods found
	I0717 01:55:32.039954   71522 system_pods.go:61] "coredns-7db6d8ff4d-9w26c" [530f4d52-5fdc-47c4-8919-44430bf71e05] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:55:32.039988   71522 system_pods.go:61] "coredns-7db6d8ff4d-js7sn" [fe3951c5-d98d-4221-b71c-fc4f548b31d8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:55:32.039998   71522 system_pods.go:61] "etcd-default-k8s-diff-port-738184" [a08737dd-a140-4c63-bf0f-9d3527d49de0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 01:55:32.040003   71522 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-738184" [b24a4ca2-48f9-4603-b6e4-e3fb1ca58e40] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 01:55:32.040013   71522 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-738184" [dd62e27b-a3f1-4da6-b968-6be40743a8fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 01:55:32.040020   71522 system_pods.go:61] "kube-proxy-c4n94" [97eee4e8-4f36-412f-9064-57515ab6e932] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 01:55:32.040033   71522 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-738184" [eb87eec9-7533-4704-bd5a-7075d9d8c2f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 01:55:32.040041   71522 system_pods.go:61] "metrics-server-569cc877fc-gcjkt" [1859140e-a901-43c2-8c04-b4f8eb63e774] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:55:32.040046   71522 system_pods.go:61] "storage-provisioner" [b36904ec-ef3f-4aee-9276-fe1285e10876] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 01:55:32.040053   71522 system_pods.go:74] duration metric: took 9.802793ms to wait for pod list to return data ...
	I0717 01:55:32.040060   71522 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:55:32.043233   71522 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:55:32.043259   71522 node_conditions.go:123] node cpu capacity is 2
	I0717 01:55:32.043270   71522 node_conditions.go:105] duration metric: took 3.202451ms to run NodePressure ...
	I0717 01:55:32.043285   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:32.350948   71522 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 01:55:32.356119   71522 kubeadm.go:739] kubelet initialised
	I0717 01:55:32.356143   71522 kubeadm.go:740] duration metric: took 5.164025ms waiting for restarted kubelet to initialise ...
	I0717 01:55:32.356153   71522 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:55:32.361501   71522 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.366747   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.366770   71522 pod_ready.go:81] duration metric: took 5.246954ms for pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.366778   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.366785   71522 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.371049   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.371066   71522 pod_ready.go:81] duration metric: took 4.275157ms for pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.371073   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.371078   71522 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.375338   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.375361   71522 pod_ready.go:81] duration metric: took 4.27092ms for pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.375369   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.375379   71522 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.434545   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.434583   71522 pod_ready.go:81] duration metric: took 59.196717ms for pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.434593   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.434601   71522 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.836139   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.836178   71522 pod_ready.go:81] duration metric: took 401.568097ms for pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.836194   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.836212   71522 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c4n94" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:33.234032   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "kube-proxy-c4n94" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:33.234060   71522 pod_ready.go:81] duration metric: took 397.83937ms for pod "kube-proxy-c4n94" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:33.234071   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "kube-proxy-c4n94" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:33.234076   71522 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:33.633953   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:33.633981   71522 pod_ready.go:81] duration metric: took 399.893316ms for pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:33.633992   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:33.633998   71522 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:34.034511   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:34.034560   71522 pod_ready.go:81] duration metric: took 400.544281ms for pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:34.034574   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:34.034583   71522 pod_ready.go:38] duration metric: took 1.678420144s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
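The pod_ready loop above keeps re-reading each system-critical pod and only counts it as ready once the pod's Ready condition (and its node's Ready condition) is True; here every check is skipped because the node itself still reports Ready=False. A condensed sketch of the pod half of that check using client-go, where the kubeconfig path and the pod name are placeholders taken from the log, not minikube's internal helpers:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Poll for up to 4 minutes, matching the log's "waiting up to 4m0s" budget.
	err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"etcd-default-k8s-diff-port-738184", metav1.GetOptions{})
		if err != nil {
			return false, nil // keep retrying on transient errors
		}
		return isPodReady(pod), nil
	})
	fmt.Println("wait result:", err)
}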
	I0717 01:55:34.034599   71522 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 01:55:34.049235   71522 ops.go:34] apiserver oom_adj: -16
	I0717 01:55:34.049261   71522 kubeadm.go:597] duration metric: took 8.807897214s to restartPrimaryControlPlane
	I0717 01:55:34.049272   71522 kubeadm.go:394] duration metric: took 8.855664434s to StartCluster
	I0717 01:55:34.049292   71522 settings.go:142] acquiring lock: {Name:mk284e98550e98148329d8d8958a44beebf5635c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:55:34.049374   71522 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:55:34.050992   71522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:55:34.051239   71522 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.170 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:55:34.051307   71522 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 01:55:34.051409   71522 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-738184"
	I0717 01:55:34.051454   71522 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-738184"
	W0717 01:55:34.051465   71522 addons.go:243] addon storage-provisioner should already be in state true
	I0717 01:55:34.051497   71522 host.go:66] Checking if "default-k8s-diff-port-738184" exists ...
	I0717 01:55:34.051511   71522 config.go:182] Loaded profile config "default-k8s-diff-port-738184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:55:34.051498   71522 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-738184"
	I0717 01:55:34.051502   71522 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-738184"
	I0717 01:55:34.051564   71522 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-738184"
	I0717 01:55:34.051587   71522 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-738184"
	W0717 01:55:34.051612   71522 addons.go:243] addon metrics-server should already be in state true
	I0717 01:55:34.051686   71522 host.go:66] Checking if "default-k8s-diff-port-738184" exists ...
	I0717 01:55:34.051803   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.051845   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.052097   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.052151   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.052331   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.052383   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.054788   71522 out.go:177] * Verifying Kubernetes components...
	I0717 01:55:34.056293   71522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:34.067345   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39845
	I0717 01:55:34.067345   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44423
	I0717 01:55:34.067821   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.067911   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.068370   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.068390   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.068515   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.068526   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43231
	I0717 01:55:34.068535   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.068709   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.068991   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.068997   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.069278   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.069320   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.069529   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.069560   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.069611   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.069629   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.069977   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.070184   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:34.074013   71522 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-738184"
	W0717 01:55:34.074036   71522 addons.go:243] addon default-storageclass should already be in state true
	I0717 01:55:34.074062   71522 host.go:66] Checking if "default-k8s-diff-port-738184" exists ...
	I0717 01:55:34.074422   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.074463   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.085256   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43497
	I0717 01:55:34.085694   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35251
	I0717 01:55:34.085716   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.086207   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.086378   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.086402   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.086785   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.086945   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:34.086947   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.086999   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.087327   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.087624   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:34.088695   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:34.089320   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:34.090932   71522 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 01:55:34.090932   71522 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:31.543587   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:31.544073   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:31.544102   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:31.544012   72724 retry.go:31] will retry after 2.898539616s: waiting for machine to come up
	I0717 01:55:34.444315   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:34.444828   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:34.444870   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:34.444790   72724 retry.go:31] will retry after 4.252719028s: waiting for machine to come up
	I0717 01:55:34.092892   71522 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:55:34.092910   71522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 01:55:34.092926   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:34.092985   71522 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 01:55:34.092993   71522 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 01:55:34.093003   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:34.095340   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35765
	I0717 01:55:34.095840   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.096397   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.096434   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.096567   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.096819   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.096979   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.097029   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:34.097058   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.097498   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.097536   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.097881   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:34.097897   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:34.097899   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:34.097923   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.098075   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:34.098105   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:34.098286   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:34.098320   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:34.098449   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:34.098461   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:34.113190   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43997
	I0717 01:55:34.113544   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.114033   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.114059   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.114375   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.114575   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:34.116332   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:34.116544   71522 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 01:55:34.116563   71522 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 01:55:34.116583   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:34.119693   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.119992   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:34.120017   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.120457   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:34.120722   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:34.120965   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:34.121652   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:34.247964   71522 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:55:34.266521   71522 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-738184" to be "Ready" ...
	I0717 01:55:34.370296   71522 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 01:55:34.370318   71522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 01:55:34.380102   71522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:55:34.394620   71522 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 01:55:34.394639   71522 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 01:55:34.409328   71522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 01:55:34.416653   71522 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:55:34.416684   71522 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 01:55:34.445296   71522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:55:35.605781   71522 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.196419762s)
	I0717 01:55:35.605843   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.605858   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.605854   71522 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.160520147s)
	I0717 01:55:35.605778   71522 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.225640358s)
	I0717 01:55:35.605929   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.605944   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.605988   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.606007   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.606293   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.606300   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.606309   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.606315   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.606319   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.606329   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.606333   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.606349   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.606357   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.606367   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.606371   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.606398   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.606410   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.606424   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.606640   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.607811   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.607852   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.607866   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.607874   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.607892   71522 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-738184"
	I0717 01:55:35.607815   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.607878   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.607829   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.607959   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.607842   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.613691   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.613717   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.614019   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.614025   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.614081   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.615871   71522 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0717 01:55:38.700025   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.700533   71603 main.go:141] libmachine: (no-preload-391501) Found IP for machine: 192.168.61.174
	I0717 01:55:38.700555   71603 main.go:141] libmachine: (no-preload-391501) Reserving static IP address...
	I0717 01:55:38.700572   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has current primary IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.701013   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "no-preload-391501", mac: "52:54:00:e6:6b:1b", ip: "192.168.61.174"} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.701033   71603 main.go:141] libmachine: (no-preload-391501) Reserved static IP address: 192.168.61.174
	I0717 01:55:38.701049   71603 main.go:141] libmachine: (no-preload-391501) DBG | skip adding static IP to network mk-no-preload-391501 - found existing host DHCP lease matching {name: "no-preload-391501", mac: "52:54:00:e6:6b:1b", ip: "192.168.61.174"}
	I0717 01:55:38.701064   71603 main.go:141] libmachine: (no-preload-391501) DBG | Getting to WaitForSSH function...
	I0717 01:55:38.701080   71603 main.go:141] libmachine: (no-preload-391501) Waiting for SSH to be available...
	I0717 01:55:38.703218   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.703577   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.703605   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.703755   71603 main.go:141] libmachine: (no-preload-391501) DBG | Using SSH client type: external
	I0717 01:55:38.703773   71603 main.go:141] libmachine: (no-preload-391501) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa (-rw-------)
	I0717 01:55:38.703791   71603 main.go:141] libmachine: (no-preload-391501) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:55:38.703809   71603 main.go:141] libmachine: (no-preload-391501) DBG | About to run SSH command:
	I0717 01:55:38.703817   71603 main.go:141] libmachine: (no-preload-391501) DBG | exit 0
	I0717 01:55:38.827046   71603 main.go:141] libmachine: (no-preload-391501) DBG | SSH cmd err, output: <nil>: 
	I0717 01:55:38.827413   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetConfigRaw
	I0717 01:55:38.828102   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetIP
	I0717 01:55:38.831229   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.831782   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.831814   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.832140   71603 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/config.json ...
	I0717 01:55:38.832347   71603 machine.go:94] provisionDockerMachine start ...
	I0717 01:55:38.832367   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:38.832574   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:38.835302   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.835710   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.835735   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.835954   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:38.836173   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:38.836345   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:38.836521   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:38.836691   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:38.836928   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:38.836947   71603 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:55:38.943173   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:55:38.943213   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetMachineName
	I0717 01:55:38.943491   71603 buildroot.go:166] provisioning hostname "no-preload-391501"
	I0717 01:55:38.943513   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetMachineName
	I0717 01:55:38.943725   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:38.946396   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.946872   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.946900   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.946980   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:38.947164   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:38.947339   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:38.947518   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:38.947695   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:38.947849   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:38.947869   71603 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-391501 && echo "no-preload-391501" | sudo tee /etc/hostname
	I0717 01:55:39.070382   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-391501
	
	I0717 01:55:39.070429   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.073539   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.073904   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.073941   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.074203   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:39.074426   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.074624   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.074880   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:39.075132   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:39.075348   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:39.075373   71603 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-391501' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-391501/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-391501' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:55:39.195604   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:55:39.195634   71603 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:55:39.195649   71603 buildroot.go:174] setting up certificates
	I0717 01:55:39.195656   71603 provision.go:84] configureAuth start
	I0717 01:55:39.195665   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetMachineName
	I0717 01:55:39.195952   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetIP
	I0717 01:55:39.198409   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.198792   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.198822   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.198996   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.201509   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.201870   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.201901   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.202078   71603 provision.go:143] copyHostCerts
	I0717 01:55:39.202153   71603 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:55:39.202166   71603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:55:39.202221   71603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:55:39.202313   71603 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:55:39.202320   71603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:55:39.202339   71603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:55:39.202387   71603 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:55:39.202394   71603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:55:39.202410   71603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:55:39.202456   71603 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.no-preload-391501 san=[127.0.0.1 192.168.61.174 localhost minikube no-preload-391501]
	I0717 01:55:39.550166   71603 provision.go:177] copyRemoteCerts
	I0717 01:55:39.550224   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:55:39.550249   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.552616   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.552990   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.553020   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.553135   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:39.553298   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.553460   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:39.553559   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 01:55:39.638467   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:55:39.664166   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 01:55:39.689416   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 01:55:39.714130   71603 provision.go:87] duration metric: took 518.463378ms to configureAuth
	I0717 01:55:39.714159   71603 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:55:39.714362   71603 config.go:182] Loaded profile config "no-preload-391501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:55:39.714440   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.717269   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.717694   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.717722   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.717880   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:39.718080   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.718240   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.718393   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:39.718621   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:39.718793   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:39.718809   71603 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:55:39.982066   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:55:39.982095   71603 machine.go:97] duration metric: took 1.149734372s to provisionDockerMachine
	I0717 01:55:39.982110   71603 start.go:293] postStartSetup for "no-preload-391501" (driver="kvm2")
	I0717 01:55:39.982127   71603 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:55:39.982147   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:39.982429   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:55:39.982445   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.984935   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.985232   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.985269   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.985372   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:39.985553   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.985793   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:39.986010   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 01:55:40.074439   71603 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:55:40.079515   71603 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:55:40.079541   71603 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:55:40.079617   71603 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:55:40.079708   71603 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:55:40.079831   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:55:40.090783   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:40.121212   71603 start.go:296] duration metric: took 139.087761ms for postStartSetup
	I0717 01:55:40.121257   71603 fix.go:56] duration metric: took 21.089468917s for fixHost
	I0717 01:55:40.121281   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:40.124208   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.124517   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:40.124545   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.124753   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:40.124940   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:40.125119   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:40.125269   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:40.125430   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:40.125626   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:40.125638   71603 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:55:40.239538   71929 start.go:364] duration metric: took 3m52.741834986s to acquireMachinesLock for "old-k8s-version-901761"
	I0717 01:55:40.239610   71929 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:55:40.239618   71929 fix.go:54] fixHost starting: 
	I0717 01:55:40.240021   71929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:40.240054   71929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:40.257464   71929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39993
	I0717 01:55:40.257866   71929 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:40.258287   71929 main.go:141] libmachine: Using API Version  1
	I0717 01:55:40.258311   71929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:40.258672   71929 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:40.258871   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:40.259041   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetState
	I0717 01:55:40.260529   71929 fix.go:112] recreateIfNeeded on old-k8s-version-901761: state=Stopped err=<nil>
	I0717 01:55:40.260568   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	W0717 01:55:40.260721   71929 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:55:40.262590   71929 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-901761" ...
	I0717 01:55:35.617123   71522 addons.go:510] duration metric: took 1.565817066s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0717 01:55:36.270109   71522 node_ready.go:53] node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:38.270489   71522 node_ready.go:53] node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:40.270966   71522 node_ready.go:53] node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:40.239384   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181340.205508074
	
	I0717 01:55:40.239409   71603 fix.go:216] guest clock: 1721181340.205508074
	I0717 01:55:40.239419   71603 fix.go:229] Guest: 2024-07-17 01:55:40.205508074 +0000 UTC Remote: 2024-07-17 01:55:40.121261572 +0000 UTC m=+269.976034747 (delta=84.246502ms)
	I0717 01:55:40.239445   71603 fix.go:200] guest clock delta is within tolerance: 84.246502ms
	I0717 01:55:40.239453   71603 start.go:83] releasing machines lock for "no-preload-391501", held for 21.207695176s
	I0717 01:55:40.239486   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:40.239768   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetIP
	I0717 01:55:40.242534   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.242923   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:40.242956   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.243159   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:40.243649   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:40.243826   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:40.243924   71603 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:55:40.243975   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:40.244045   71603 ssh_runner.go:195] Run: cat /version.json
	I0717 01:55:40.244071   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:40.246599   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.246958   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:40.246984   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.247089   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:40.247153   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.247254   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:40.247401   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:40.247486   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:40.247510   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.247579   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 01:55:40.247669   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:40.247861   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:40.248031   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:40.248169   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 01:55:40.328497   71603 ssh_runner.go:195] Run: systemctl --version
	I0717 01:55:40.350092   71603 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:55:40.497644   71603 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:55:40.504094   71603 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:55:40.504164   71603 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:55:40.526752   71603 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:55:40.526777   71603 start.go:495] detecting cgroup driver to use...
	I0717 01:55:40.526842   71603 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:55:40.543537   71603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:55:40.557551   71603 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:55:40.557606   71603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:55:40.571755   71603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:55:40.585548   71603 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:55:40.702991   71603 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:55:40.849192   71603 docker.go:233] disabling docker service ...
	I0717 01:55:40.849276   71603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:55:40.864697   71603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:55:40.877940   71603 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:55:41.043588   71603 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:55:41.175359   71603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:55:41.191170   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:55:41.212440   71603 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 01:55:41.212508   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.224335   71603 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:55:41.224411   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.235721   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.247575   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.260018   71603 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:55:41.271526   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.285999   71603 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.307653   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.319272   71603 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:55:41.330544   71603 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:55:41.330637   71603 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:55:41.346698   71603 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:55:41.361983   71603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:41.490052   71603 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:55:41.639509   71603 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:55:41.639626   71603 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:55:41.646714   71603 start.go:563] Will wait 60s for crictl version
	I0717 01:55:41.646793   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:41.650900   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:55:41.688112   71603 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:55:41.688188   71603 ssh_runner.go:195] Run: crio --version
	I0717 01:55:41.717335   71603 ssh_runner.go:195] Run: crio --version
	I0717 01:55:41.750767   71603 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0717 01:55:40.263857   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .Start
	I0717 01:55:40.264019   71929 main.go:141] libmachine: (old-k8s-version-901761) Ensuring networks are active...
	I0717 01:55:40.264709   71929 main.go:141] libmachine: (old-k8s-version-901761) Ensuring network default is active
	I0717 01:55:40.265165   71929 main.go:141] libmachine: (old-k8s-version-901761) Ensuring network mk-old-k8s-version-901761 is active
	I0717 01:55:40.265581   71929 main.go:141] libmachine: (old-k8s-version-901761) Getting domain xml...
	I0717 01:55:40.266340   71929 main.go:141] libmachine: (old-k8s-version-901761) Creating domain...
	I0717 01:55:41.562582   71929 main.go:141] libmachine: (old-k8s-version-901761) Waiting to get IP...
	I0717 01:55:41.563329   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:41.563802   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:41.563890   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:41.563781   72905 retry.go:31] will retry after 216.264296ms: waiting for machine to come up
	I0717 01:55:41.781168   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:41.781662   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:41.781690   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:41.781629   72905 retry.go:31] will retry after 275.269814ms: waiting for machine to come up
	I0717 01:55:42.058127   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:42.058525   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:42.058564   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:42.058498   72905 retry.go:31] will retry after 348.024497ms: waiting for machine to come up
	I0717 01:55:41.752123   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetIP
	I0717 01:55:41.755114   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:41.755571   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:41.755602   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:41.755863   71603 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 01:55:41.760869   71603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:55:41.775414   71603 kubeadm.go:883] updating cluster {Name:no-preload-391501 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-391501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:55:41.775563   71603 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 01:55:41.775609   71603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:55:41.815115   71603 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0717 01:55:41.815141   71603 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 01:55:41.815207   71603 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:41.815241   71603 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:41.815279   71603 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:41.815290   71603 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:41.815207   71603 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:41.815304   71603 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0717 01:55:41.815239   71603 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:41.815258   71603 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:41.817894   71603 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:41.817939   71603 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:41.817892   71603 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:41.817888   71603 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0717 01:55:41.818033   71603 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:41.817891   71603 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:41.817900   71603 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:41.817978   71603 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:42.014545   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0717 01:55:42.030064   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:42.034517   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:42.123584   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:42.130122   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:42.134935   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:42.136170   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:42.173650   71603 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0717 01:55:42.173707   71603 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:42.173718   71603 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0717 01:55:42.173755   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.173767   71603 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:42.173820   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.219689   71603 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0717 01:55:42.219745   71603 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:42.219792   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.240802   71603 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0717 01:55:42.240847   71603 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:42.240907   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.251152   71603 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0717 01:55:42.251189   71603 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:42.251225   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.254790   71603 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0717 01:55:42.254849   71603 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:42.254886   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:42.254895   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.254916   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:42.254951   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:42.255006   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:42.257984   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:42.267440   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:42.395407   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0717 01:55:42.395471   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0717 01:55:42.395513   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0717 01:55:42.395522   71603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:55:42.395558   71603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:55:42.395582   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0717 01:55:42.395592   71603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:55:42.395663   71603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:55:42.397740   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0717 01:55:42.397813   71603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:55:42.420577   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0717 01:55:42.420602   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0717 01:55:42.420619   71603 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:55:42.420640   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0717 01:55:42.420662   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:55:42.420676   71603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:55:42.420705   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0717 01:55:42.420711   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0717 01:55:42.420738   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0717 01:55:43.737662   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:44.581683   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.160996964s)
	I0717 01:55:44.581730   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0717 01:55:44.581753   71603 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:55:44.581754   71603 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.161058602s)
	I0717 01:55:44.581788   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0717 01:55:44.581810   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:55:44.581858   71603 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 01:55:44.581900   71603 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:44.581928   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:41.270830   71522 node_ready.go:49] node "default-k8s-diff-port-738184" has status "Ready":"True"
	I0717 01:55:41.270853   71522 node_ready.go:38] duration metric: took 7.004304151s for node "default-k8s-diff-port-738184" to be "Ready" ...
	I0717 01:55:41.270868   71522 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:55:41.278587   71522 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.285210   71522 pod_ready.go:92] pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:41.285236   71522 pod_ready.go:81] duration metric: took 6.623347ms for pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.285250   71522 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.291110   71522 pod_ready.go:92] pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:41.291133   71522 pod_ready.go:81] duration metric: took 5.874809ms for pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.291145   71522 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.297614   71522 pod_ready.go:92] pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:41.297636   71522 pod_ready.go:81] duration metric: took 6.483783ms for pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.297645   71522 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.305307   71522 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:42.305335   71522 pod_ready.go:81] duration metric: took 1.007681338s for pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.305349   71522 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.472190   71522 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:42.472222   71522 pod_ready.go:81] duration metric: took 166.864153ms for pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.472236   71522 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c4n94" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.871756   71522 pod_ready.go:92] pod "kube-proxy-c4n94" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:42.871780   71522 pod_ready.go:81] duration metric: took 399.536375ms for pod "kube-proxy-c4n94" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.871789   71522 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:43.272858   71522 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:43.272895   71522 pod_ready.go:81] duration metric: took 401.098971ms for pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:43.272913   71522 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:45.281019   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:42.407813   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:42.408311   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:42.408346   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:42.408218   72905 retry.go:31] will retry after 388.717436ms: waiting for machine to come up
	I0717 01:55:42.798810   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:42.799378   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:42.799411   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:42.799323   72905 retry.go:31] will retry after 661.391346ms: waiting for machine to come up
	I0717 01:55:43.462189   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:43.462654   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:43.462686   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:43.462603   72905 retry.go:31] will retry after 636.142497ms: waiting for machine to come up
	I0717 01:55:44.100416   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:44.100852   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:44.100874   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:44.100808   72905 retry.go:31] will retry after 781.652918ms: waiting for machine to come up
	I0717 01:55:44.883650   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:44.884137   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:44.884170   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:44.884088   72905 retry.go:31] will retry after 1.238608293s: waiting for machine to come up
	I0717 01:55:46.124419   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:46.124911   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:46.124942   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:46.124854   72905 retry.go:31] will retry after 1.169011508s: waiting for machine to come up
	I0717 01:55:47.295202   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:47.295679   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:47.295715   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:47.295632   72905 retry.go:31] will retry after 1.723987128s: waiting for machine to come up
	I0717 01:55:47.004929   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.423090292s)
	I0717 01:55:47.004968   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0717 01:55:47.004990   71603 ssh_runner.go:235] Completed: which crictl: (2.423045276s)
	I0717 01:55:47.005021   71603 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:55:47.005053   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:47.005067   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:55:49.097703   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.092610651s)
	I0717 01:55:49.097747   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0717 01:55:49.097776   71603 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:55:49.097836   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:55:49.097776   71603 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.092700925s)
	I0717 01:55:49.097953   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 01:55:49.098050   71603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:55:47.781233   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:49.786039   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:49.020883   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:49.021363   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:49.021396   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:49.021279   72905 retry.go:31] will retry after 2.098481296s: waiting for machine to come up
	I0717 01:55:51.121693   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:51.122253   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:51.122282   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:51.122192   72905 retry.go:31] will retry after 2.624839429s: waiting for machine to come up
	I0717 01:55:50.560197   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.462322087s)
	I0717 01:55:50.560292   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0717 01:55:50.560323   71603 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:55:50.560252   71603 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.462175943s)
	I0717 01:55:50.560373   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:55:50.560388   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 01:55:53.630471   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.070071936s)
	I0717 01:55:53.630509   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0717 01:55:53.630529   71603 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:55:53.630604   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:55:52.280585   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:54.779606   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:53.748796   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:53.749348   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:53.749390   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:53.749298   72905 retry.go:31] will retry after 3.47930356s: waiting for machine to come up
	I0717 01:55:57.231901   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.232407   71929 main.go:141] libmachine: (old-k8s-version-901761) Found IP for machine: 192.168.50.44
	I0717 01:55:57.232437   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has current primary IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.232449   71929 main.go:141] libmachine: (old-k8s-version-901761) Reserving static IP address...
	I0717 01:55:57.232880   71929 main.go:141] libmachine: (old-k8s-version-901761) Reserved static IP address: 192.168.50.44
	I0717 01:55:57.232928   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "old-k8s-version-901761", mac: "52:54:00:8f:84:01", ip: "192.168.50.44"} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.232937   71929 main.go:141] libmachine: (old-k8s-version-901761) Waiting for SSH to be available...
	I0717 01:55:57.232952   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | skip adding static IP to network mk-old-k8s-version-901761 - found existing host DHCP lease matching {name: "old-k8s-version-901761", mac: "52:54:00:8f:84:01", ip: "192.168.50.44"}
	I0717 01:55:57.232971   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | Getting to WaitForSSH function...
	I0717 01:55:57.235007   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.235208   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.235242   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.235421   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | Using SSH client type: external
	I0717 01:55:57.235461   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa (-rw-------)
	I0717 01:55:57.235502   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.44 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:55:57.235516   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | About to run SSH command:
	I0717 01:55:57.235530   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | exit 0
	I0717 01:55:57.362619   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | SSH cmd err, output: <nil>: 
	I0717 01:55:57.363106   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetConfigRaw
	I0717 01:55:57.363760   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:55:57.366213   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.366636   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.366666   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.366958   71929 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/config.json ...
	I0717 01:55:57.367165   71929 machine.go:94] provisionDockerMachine start ...
	I0717 01:55:57.367188   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:57.367392   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.370017   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.370354   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.370371   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.370577   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:57.370765   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.370935   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.371084   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:57.371325   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:57.371506   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:57.371518   71929 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:55:58.531714   71146 start.go:364] duration metric: took 53.154741813s to acquireMachinesLock for "embed-certs-940222"
	I0717 01:55:58.531773   71146 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:55:58.531784   71146 fix.go:54] fixHost starting: 
	I0717 01:55:58.532189   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:58.532237   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:58.549026   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43799
	I0717 01:55:58.549491   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:58.550001   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:55:58.550025   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:58.550363   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:58.550536   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:55:58.550707   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:55:58.552236   71146 fix.go:112] recreateIfNeeded on embed-certs-940222: state=Stopped err=<nil>
	I0717 01:55:58.552259   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	W0717 01:55:58.552397   71146 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:55:58.554487   71146 out.go:177] * Restarting existing kvm2 VM for "embed-certs-940222" ...
	I0717 01:55:57.478893   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:55:57.478921   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetMachineName
	I0717 01:55:57.479123   71929 buildroot.go:166] provisioning hostname "old-k8s-version-901761"
	I0717 01:55:57.479142   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetMachineName
	I0717 01:55:57.479330   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.482163   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.482531   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.482579   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.482739   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:57.482937   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.483111   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.483264   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:57.483454   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:57.483632   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:57.483648   71929 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-901761 && echo "old-k8s-version-901761" | sudo tee /etc/hostname
	I0717 01:55:57.613409   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-901761
	
	I0717 01:55:57.613440   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.616228   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.616614   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.616655   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.616860   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:57.617040   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.617222   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.617383   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:57.617574   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:57.617778   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:57.617794   71929 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-901761' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-901761/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-901761' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:55:57.737648   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:55:57.737683   71929 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:55:57.737703   71929 buildroot.go:174] setting up certificates
	I0717 01:55:57.737711   71929 provision.go:84] configureAuth start
	I0717 01:55:57.737721   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetMachineName
	I0717 01:55:57.738028   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:55:57.741089   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.741532   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.741556   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.741741   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.744444   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.744917   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.744947   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.745111   71929 provision.go:143] copyHostCerts
	I0717 01:55:57.745185   71929 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:55:57.745202   71929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:55:57.745273   71929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:55:57.745393   71929 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:55:57.745405   71929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:55:57.745437   71929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:55:57.745517   71929 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:55:57.745527   71929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:55:57.745545   71929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:55:57.745602   71929 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-901761 san=[127.0.0.1 192.168.50.44 localhost minikube old-k8s-version-901761]
	I0717 01:55:57.830872   71929 provision.go:177] copyRemoteCerts
	I0717 01:55:57.830939   71929 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:55:57.830972   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.833463   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.833741   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.833777   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.833887   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:57.834083   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.834250   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:57.834403   71929 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:55:57.918346   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 01:55:57.954250   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:55:57.979770   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0717 01:55:58.005161   71929 provision.go:87] duration metric: took 267.436975ms to configureAuth
	I0717 01:55:58.005193   71929 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:55:58.005412   71929 config.go:182] Loaded profile config "old-k8s-version-901761": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 01:55:58.005493   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.008255   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.008626   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.008663   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.008833   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.009006   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.009170   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.009298   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.009464   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:58.009616   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:58.009639   71929 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:55:58.281081   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:55:58.281112   71929 machine.go:97] duration metric: took 913.933405ms to provisionDockerMachine
	I0717 01:55:58.281121   71929 start.go:293] postStartSetup for "old-k8s-version-901761" (driver="kvm2")
	I0717 01:55:58.281130   71929 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:55:58.281144   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.281497   71929 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:55:58.281533   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.284465   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.284812   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.284840   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.285023   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.285207   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.285441   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.285650   71929 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:55:58.377149   71929 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:55:58.381709   71929 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:55:58.381731   71929 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:55:58.381798   71929 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:55:58.381887   71929 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:55:58.381972   71929 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:55:58.392916   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:58.420677   71929 start.go:296] duration metric: took 139.542186ms for postStartSetup
	I0717 01:55:58.420721   71929 fix.go:56] duration metric: took 18.181102939s for fixHost
	I0717 01:55:58.420745   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.423582   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.423961   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.423989   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.424169   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.424372   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.424557   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.424693   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.424859   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:58.425040   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:58.425053   71929 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:55:58.531563   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181358.508735025
	
	I0717 01:55:58.531585   71929 fix.go:216] guest clock: 1721181358.508735025
	I0717 01:55:58.531594   71929 fix.go:229] Guest: 2024-07-17 01:55:58.508735025 +0000 UTC Remote: 2024-07-17 01:55:58.420726806 +0000 UTC m=+251.057483904 (delta=88.008219ms)
	I0717 01:55:58.531617   71929 fix.go:200] guest clock delta is within tolerance: 88.008219ms
	I0717 01:55:58.531624   71929 start.go:83] releasing machines lock for "old-k8s-version-901761", held for 18.292040224s
	I0717 01:55:58.531655   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.531981   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:55:58.534476   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.534967   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.534996   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.535258   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.535802   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.535990   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.536105   71929 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:55:58.536183   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.536244   71929 ssh_runner.go:195] Run: cat /version.json
	I0717 01:55:58.536275   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.539139   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.539401   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.539534   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.539560   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.539768   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.539815   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.539845   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.539968   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.540000   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.540116   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.540142   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.540243   71929 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:55:58.540332   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.540468   71929 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:55:58.628291   71929 ssh_runner.go:195] Run: systemctl --version
	I0717 01:55:58.656964   71929 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:55:58.806516   71929 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:55:58.815051   71929 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:55:58.815113   71929 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:55:58.838575   71929 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:55:58.838596   71929 start.go:495] detecting cgroup driver to use...
	I0717 01:55:58.838662   71929 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:55:58.855728   71929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:55:58.875221   71929 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:55:58.875285   71929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:55:58.889781   71929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:55:58.903832   71929 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:55:59.026815   71929 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:55:59.173879   71929 docker.go:233] disabling docker service ...
	I0717 01:55:59.173964   71929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:55:59.192906   71929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:55:59.208262   71929 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:55:59.368178   71929 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:55:59.500335   71929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:55:59.514795   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:55:59.535553   71929 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 01:55:59.535631   71929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:59.548304   71929 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:55:59.548376   71929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:59.563066   71929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:59.578452   71929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:59.593447   71929 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:55:59.606239   71929 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:55:59.617051   71929 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:55:59.617118   71929 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:55:59.632601   71929 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:55:59.645034   71929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:59.812343   71929 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:55:59.969366   71929 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:55:59.969444   71929 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:55:59.974286   71929 start.go:563] Will wait 60s for crictl version
	I0717 01:55:59.974335   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:55:59.978280   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:56:00.020399   71929 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:56:00.020489   71929 ssh_runner.go:195] Run: crio --version
	I0717 01:56:00.049811   71929 ssh_runner.go:195] Run: crio --version
	I0717 01:56:00.081952   71929 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0717 01:55:55.703286   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.07265838s)
	I0717 01:55:55.703312   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0717 01:55:55.703342   71603 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:55:55.703396   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:55:56.651520   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 01:55:56.651563   71603 cache_images.go:123] Successfully loaded all cached images
	I0717 01:55:56.651569   71603 cache_images.go:92] duration metric: took 14.83641531s to LoadCachedImages
	I0717 01:55:56.651581   71603 kubeadm.go:934] updating node { 192.168.61.174 8443 v1.31.0-beta.0 crio true true} ...
	I0717 01:55:56.651702   71603 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-391501 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-391501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:55:56.651770   71603 ssh_runner.go:195] Run: crio config
	I0717 01:55:56.700129   71603 cni.go:84] Creating CNI manager for ""
	I0717 01:55:56.700152   71603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:55:56.700162   71603 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:55:56.700189   71603 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.174 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-391501 NodeName:no-preload-391501 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:55:56.700315   71603 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-391501"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:55:56.700372   71603 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 01:55:56.711859   71603 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:55:56.711936   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:55:56.721994   71603 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0717 01:55:56.738335   71603 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 01:55:56.755198   71603 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0717 01:55:56.772467   71603 ssh_runner.go:195] Run: grep 192.168.61.174	control-plane.minikube.internal$ /etc/hosts
	I0717 01:55:56.777580   71603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:55:56.792767   71603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:56.913075   71603 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:55:56.930746   71603 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501 for IP: 192.168.61.174
	I0717 01:55:56.930768   71603 certs.go:194] generating shared ca certs ...
	I0717 01:55:56.930783   71603 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:55:56.930929   71603 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:55:56.930968   71603 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:55:56.930978   71603 certs.go:256] generating profile certs ...
	I0717 01:55:56.931050   71603 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/client.key
	I0717 01:55:56.931112   71603 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/apiserver.key.a30174c9
	I0717 01:55:56.931153   71603 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/proxy-client.key
	I0717 01:55:56.931292   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:55:56.931331   71603 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:55:56.931344   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:55:56.931373   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:55:56.931404   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:55:56.931434   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:55:56.931478   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:56.932180   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:55:56.971111   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:55:57.016791   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:55:57.049766   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:55:57.078139   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 01:55:57.109781   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 01:55:57.137912   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:55:57.165141   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 01:55:57.190210   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:55:57.214366   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:55:57.239518   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:55:57.265505   71603 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:55:57.283773   71603 ssh_runner.go:195] Run: openssl version
	I0717 01:55:57.289846   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:55:57.300434   71603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:57.305370   71603 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:57.305456   71603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:57.311765   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:55:57.322769   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:55:57.334122   71603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:55:57.338774   71603 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:55:57.338823   71603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:55:57.344721   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:55:57.356476   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:55:57.368672   71603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:55:57.374055   71603 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:55:57.374107   71603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:55:57.380256   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:55:57.392428   71603 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:55:57.397593   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:55:57.404378   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:55:57.411094   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:55:57.418536   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:55:57.425312   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:55:57.431841   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 01:55:57.438615   71603 kubeadm.go:392] StartCluster: {Name:no-preload-391501 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-391501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0
m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:55:57.438696   71603 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:55:57.438782   71603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:55:57.482932   71603 cri.go:89] found id: ""
	I0717 01:55:57.482993   71603 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:55:57.493813   71603 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:55:57.493832   71603 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:55:57.493872   71603 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:55:57.504757   71603 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:55:57.505655   71603 kubeconfig.go:125] found "no-preload-391501" server: "https://192.168.61.174:8443"
	I0717 01:55:57.507634   71603 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:55:57.517990   71603 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.174
	I0717 01:55:57.518025   71603 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:55:57.518038   71603 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:55:57.518090   71603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:55:57.557504   71603 cri.go:89] found id: ""
	I0717 01:55:57.557588   71603 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:55:57.574074   71603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:55:57.583703   71603 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:55:57.583724   71603 kubeadm.go:157] found existing configuration files:
	
	I0717 01:55:57.583768   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:55:57.593924   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:55:57.593992   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:55:57.606945   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:55:57.616803   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:55:57.616847   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:55:57.627215   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:55:57.637121   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:55:57.637179   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:55:57.646291   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:55:57.655314   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:55:57.655372   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:55:57.666994   71603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:55:57.677582   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:57.798148   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:59.316598   71603 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.518419797s)
	I0717 01:55:59.316629   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:59.581666   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:59.675003   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:59.748682   71603 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:55:59.748771   71603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:55:56.781465   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:59.280394   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:00.083384   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:56:00.086085   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:56:00.086454   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:56:00.086494   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:56:00.086710   71929 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 01:56:00.091322   71929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:56:00.104102   71929 kubeadm.go:883] updating cluster {Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:56:00.104237   71929 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 01:56:00.104309   71929 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:56:00.152445   71929 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 01:56:00.152537   71929 ssh_runner.go:195] Run: which lz4
	I0717 01:56:00.156760   71929 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 01:56:00.161123   71929 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:56:00.161149   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 01:56:02.031804   71929 crio.go:462] duration metric: took 1.875087246s to copy over tarball
	I0717 01:56:02.031904   71929 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:55:58.556014   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Start
	I0717 01:55:58.556171   71146 main.go:141] libmachine: (embed-certs-940222) Ensuring networks are active...
	I0717 01:55:58.556866   71146 main.go:141] libmachine: (embed-certs-940222) Ensuring network default is active
	I0717 01:55:58.557237   71146 main.go:141] libmachine: (embed-certs-940222) Ensuring network mk-embed-certs-940222 is active
	I0717 01:55:58.557686   71146 main.go:141] libmachine: (embed-certs-940222) Getting domain xml...
	I0717 01:55:58.558375   71146 main.go:141] libmachine: (embed-certs-940222) Creating domain...
	I0717 01:55:59.917419   71146 main.go:141] libmachine: (embed-certs-940222) Waiting to get IP...
	I0717 01:55:59.918379   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:55:59.918849   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:55:59.918908   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:55:59.918833   73097 retry.go:31] will retry after 248.560075ms: waiting for machine to come up
	I0717 01:56:00.169337   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:00.169877   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:00.169898   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:00.169837   73097 retry.go:31] will retry after 380.159418ms: waiting for machine to come up
	I0717 01:56:00.551472   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:00.552033   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:00.552076   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:00.551987   73097 retry.go:31] will retry after 439.990107ms: waiting for machine to come up
	I0717 01:56:00.993776   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:00.994337   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:00.994351   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:00.994319   73097 retry.go:31] will retry after 415.462036ms: waiting for machine to come up
	I0717 01:56:01.412114   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:01.412508   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:01.412535   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:01.412484   73097 retry.go:31] will retry after 660.852153ms: waiting for machine to come up
	I0717 01:56:02.075095   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:02.075519   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:02.075541   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:02.075498   73097 retry.go:31] will retry after 788.200532ms: waiting for machine to come up
	I0717 01:56:00.249300   71603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:00.749610   71603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:00.823943   71603 api_server.go:72] duration metric: took 1.075254107s to wait for apiserver process to appear ...
	I0717 01:56:00.823980   71603 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:56:00.824006   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:00.825286   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": dial tcp 192.168.61.174:8443: connect: connection refused
	I0717 01:56:01.325032   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:01.281044   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:03.281329   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:05.092637   71929 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.060698331s)
	I0717 01:56:05.092674   71929 crio.go:469] duration metric: took 3.060839356s to extract the tarball
	I0717 01:56:05.092682   71929 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 01:56:05.135461   71929 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:56:05.170789   71929 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 01:56:05.170814   71929 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 01:56:05.170853   71929 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:56:05.170884   71929 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.170908   71929 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.170961   71929 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 01:56:05.171077   71929 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.171126   71929 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.171138   71929 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.171462   71929 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.172182   71929 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 01:56:05.172224   71929 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.172251   71929 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:56:05.172296   71929 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.172362   71929 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.172415   71929 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.172449   71929 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.172251   71929 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.372794   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.415131   71929 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 01:56:05.415181   71929 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.415231   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.419179   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.446530   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 01:56:05.452583   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 01:56:05.485692   71929 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 01:56:05.485734   71929 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 01:56:05.485780   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.486154   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.487346   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.489408   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.490486   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 01:56:05.494929   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.499420   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.593505   71929 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 01:56:05.593587   71929 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.593638   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.632564   71929 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 01:56:05.632615   71929 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.632667   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.657745   71929 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 01:56:05.657792   71929 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.657852   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.657863   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 01:56:05.657908   71929 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 01:56:05.657943   71929 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.657958   71929 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 01:56:05.657976   71929 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.657980   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.658004   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.658037   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.658077   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.671679   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.671708   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.736572   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 01:56:05.736599   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 01:56:05.736671   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.758178   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 01:56:05.758210   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 01:56:05.787948   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 01:56:06.882199   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:56:07.025117   71929 cache_images.go:92] duration metric: took 1.854284265s to LoadCachedImages
	W0717 01:56:07.025227   71929 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0717 01:56:07.025245   71929 kubeadm.go:934] updating node { 192.168.50.44 8443 v1.20.0 crio true true} ...
	I0717 01:56:07.025378   71929 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-901761 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.44
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:56:07.025465   71929 ssh_runner.go:195] Run: crio config
	I0717 01:56:07.081517   71929 cni.go:84] Creating CNI manager for ""
	I0717 01:56:07.081543   71929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:56:07.081560   71929 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:56:07.081584   71929 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.44 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-901761 NodeName:old-k8s-version-901761 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.44"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.44 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 01:56:07.081749   71929 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.44
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-901761"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.44
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.44"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:56:07.081833   71929 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 01:56:07.092233   71929 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:56:07.092335   71929 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:56:07.102086   71929 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0717 01:56:07.121538   71929 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:56:07.139112   71929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0717 01:56:07.157397   71929 ssh_runner.go:195] Run: grep 192.168.50.44	control-plane.minikube.internal$ /etc/hosts
	I0717 01:56:07.161818   71929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.44	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:56:07.174723   71929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:56:07.307484   71929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:56:07.325948   71929 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761 for IP: 192.168.50.44
	I0717 01:56:07.325974   71929 certs.go:194] generating shared ca certs ...
	I0717 01:56:07.326002   71929 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:07.326164   71929 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:56:07.326216   71929 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:56:07.326229   71929 certs.go:256] generating profile certs ...
	I0717 01:56:07.326351   71929 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/client.key
	I0717 01:56:07.326416   71929 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.key.f41162e5
	I0717 01:56:07.326461   71929 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/proxy-client.key
	I0717 01:56:07.326630   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:56:07.326668   71929 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:56:07.326681   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:56:07.326700   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:56:07.326724   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:56:07.326767   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:56:07.326828   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:56:07.327702   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:56:07.377671   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:56:02.864980   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:02.865620   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:02.865656   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:02.865503   73097 retry.go:31] will retry after 1.00461953s: waiting for machine to come up
	I0717 01:56:03.871702   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:03.872187   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:03.872215   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:03.872133   73097 retry.go:31] will retry after 1.15731846s: waiting for machine to come up
	I0717 01:56:05.030767   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:05.031263   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:05.031285   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:05.031209   73097 retry.go:31] will retry after 1.704165162s: waiting for machine to come up
	I0717 01:56:06.737975   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:06.738337   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:06.738386   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:06.738307   73097 retry.go:31] will retry after 2.014062128s: waiting for machine to come up
	I0717 01:56:06.326066   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:06.326112   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:05.780615   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:08.281127   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:07.413171   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:56:07.443671   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:56:07.482883   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 01:56:07.527280   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:56:07.571200   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:56:07.612296   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 01:56:07.638012   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:56:07.662018   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:56:07.688033   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:56:07.721827   71929 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:56:07.741517   71929 ssh_runner.go:195] Run: openssl version
	I0717 01:56:07.747466   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:56:07.758615   71929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:56:07.763382   71929 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:56:07.763439   71929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:56:07.769358   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:56:07.781802   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:56:07.792763   71929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:56:07.797629   71929 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:56:07.797681   71929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:56:07.803879   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:56:07.815479   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:56:07.828292   71929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:07.832769   71929 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:07.832829   71929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:07.838958   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
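	(The openssl x509 -hash -noout runs above compute the subject hash that names the /etc/ssl/certs/<hash>.0 symlinks the system trust store looks up, e.g. 51391683.0, 3ec20f2e.0 and b5213941.0. A hedged sketch of the same step driven from Go; linkCACert is a hypothetical helper, and the paths assume a host with openssl and a writable /etc/ssl/certs.)

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of a PEM certificate and
// creates the <hash>.0 symlink that the system trust store expects.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// equivalent of: ln -fs <pemPath> /etc/ssl/certs/<hash>.0
	return exec.Command("ln", "-fs", pemPath, link).Run()
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("error:", err)
	}
}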
	I0717 01:56:07.850108   71929 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:56:07.854758   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:56:07.860661   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:56:07.866484   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:56:07.872302   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:56:07.878252   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:56:07.884275   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
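	(Each openssl x509 -checkend 86400 call above asks whether a control-plane certificate will still be valid 24 hours from now. The same test can be made without shelling out; a small sketch using Go's x509 parser, with an illustrative certificate path.)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires before now+window,
// mirroring `openssl x509 -noout -in <path> -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("expires within 24h:", expiring)
}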
	I0717 01:56:07.890148   71929 kubeadm.go:392] StartCluster: {Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:56:07.890264   71929 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:56:07.890343   71929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:56:07.930081   71929 cri.go:89] found id: ""
	I0717 01:56:07.930153   71929 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:56:07.941371   71929 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:56:07.941396   71929 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:56:07.941445   71929 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:56:07.955229   71929 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:56:07.957263   71929 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-901761" does not appear in /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:56:07.959002   71929 kubeconfig.go:62] /home/jenkins/minikube-integration/19264-3908/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-901761" cluster setting kubeconfig missing "old-k8s-version-901761" context setting]
	I0717 01:56:07.960384   71929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:07.962748   71929 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:56:07.973815   71929 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.44
	I0717 01:56:07.973851   71929 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:56:07.973864   71929 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:56:07.973933   71929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:56:08.020169   71929 cri.go:89] found id: ""
	I0717 01:56:08.020247   71929 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:56:08.038015   71929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:56:08.049272   71929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:56:08.049294   71929 kubeadm.go:157] found existing configuration files:
	
	I0717 01:56:08.049336   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:56:08.058953   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:56:08.059025   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:56:08.069034   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:56:08.078748   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:56:08.078817   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:56:08.089660   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:56:08.099521   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:56:08.099583   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:56:08.109831   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:56:08.120340   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:56:08.120400   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:56:08.130884   71929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:56:08.141008   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:08.275189   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:09.006841   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:09.255401   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:09.376659   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:09.475840   71929 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:56:09.475937   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:09.976926   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:10.476192   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:10.976705   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:11.476386   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:11.976459   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
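	(The repeated pgrep -xnf kube-apiserver.*minikube.* runs here, and in the later batches of this log, are the wait loop from api_server.go polling roughly every 500 ms until the apiserver process appears. A minimal sketch of that loop; apiserverRunning shells out locally, whereas the real check runs over SSH, and the two-minute deadline is an assumption.)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning is an illustrative stand-in for the pgrep check in the log:
// it returns true once a kube-apiserver process started for this profile exists.
func apiserverRunning() bool {
	return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("apiserver process appeared")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500 ms spacing in the log
	}
	fmt.Println("timed out waiting for apiserver process")
}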
	I0717 01:56:08.753835   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:08.754316   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:08.754347   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:08.754264   73097 retry.go:31] will retry after 2.005810517s: waiting for machine to come up
	I0717 01:56:10.761600   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:10.762022   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:10.762053   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:10.761980   73097 retry.go:31] will retry after 2.631438855s: waiting for machine to come up
	I0717 01:56:11.327297   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:11.327348   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:10.779534   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:13.278417   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:15.279200   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:12.476819   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:12.976633   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:13.476076   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:13.976279   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:14.476885   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:14.976972   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:15.476823   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:15.976917   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:16.476765   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:16.976609   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:13.395592   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:13.395949   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:13.395991   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:13.395905   73097 retry.go:31] will retry after 3.565162998s: waiting for machine to come up
	I0717 01:56:16.964948   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:16.965424   71146 main.go:141] libmachine: (embed-certs-940222) Found IP for machine: 192.168.72.225
	I0717 01:56:16.965455   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has current primary IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:16.965465   71146 main.go:141] libmachine: (embed-certs-940222) Reserving static IP address...
	I0717 01:56:16.966065   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "embed-certs-940222", mac: "52:54:00:78:d5:92", ip: "192.168.72.225"} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:16.966092   71146 main.go:141] libmachine: (embed-certs-940222) DBG | skip adding static IP to network mk-embed-certs-940222 - found existing host DHCP lease matching {name: "embed-certs-940222", mac: "52:54:00:78:d5:92", ip: "192.168.72.225"}
	I0717 01:56:16.966107   71146 main.go:141] libmachine: (embed-certs-940222) Reserved static IP address: 192.168.72.225
	I0717 01:56:16.966122   71146 main.go:141] libmachine: (embed-certs-940222) Waiting for SSH to be available...
	I0717 01:56:16.966150   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Getting to WaitForSSH function...
	I0717 01:56:16.968287   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:16.968642   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:16.968688   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:16.968758   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Using SSH client type: external
	I0717 01:56:16.968782   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa (-rw-------)
	I0717 01:56:16.968842   71146 main.go:141] libmachine: (embed-certs-940222) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:56:16.968872   71146 main.go:141] libmachine: (embed-certs-940222) DBG | About to run SSH command:
	I0717 01:56:16.968888   71146 main.go:141] libmachine: (embed-certs-940222) DBG | exit 0
	I0717 01:56:17.090641   71146 main.go:141] libmachine: (embed-certs-940222) DBG | SSH cmd err, output: <nil>: 
	I0717 01:56:17.091120   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetConfigRaw
	I0717 01:56:17.091720   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetIP
	I0717 01:56:17.094205   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.094541   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.094592   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.094810   71146 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/config.json ...
	I0717 01:56:17.095001   71146 machine.go:94] provisionDockerMachine start ...
	I0717 01:56:17.095022   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:17.095223   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.097395   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.097680   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.097707   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.097848   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.098021   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.098170   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.098311   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.098491   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:17.098683   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:17.098695   71146 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:56:17.203054   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:56:17.203080   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:56:17.203364   71146 buildroot.go:166] provisioning hostname "embed-certs-940222"
	I0717 01:56:17.203402   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:56:17.203575   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.206404   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.206826   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.206868   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.207076   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.207282   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.207471   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.207611   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.207793   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:17.207985   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:17.207997   71146 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-940222 && echo "embed-certs-940222" | sudo tee /etc/hostname
	I0717 01:56:17.326485   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-940222
	
	I0717 01:56:17.326512   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.329226   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.329629   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.329659   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.329834   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.329996   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.330148   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.330265   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.330417   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:17.330619   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:17.330642   71146 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-940222' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-940222/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-940222' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:56:17.439258   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:56:17.439285   71146 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:56:17.439315   71146 buildroot.go:174] setting up certificates
	I0717 01:56:17.439324   71146 provision.go:84] configureAuth start
	I0717 01:56:17.439332   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:56:17.439656   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetIP
	I0717 01:56:17.442348   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.442765   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.442796   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.442976   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.445418   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.445767   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.445803   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.446000   71146 provision.go:143] copyHostCerts
	I0717 01:56:17.446081   71146 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:56:17.446098   71146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:56:17.446171   71146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:56:17.446265   71146 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:56:17.446272   71146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:56:17.446292   71146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:56:17.446346   71146 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:56:17.446353   71146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:56:17.446370   71146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:56:17.446418   71146 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.embed-certs-940222 san=[127.0.0.1 192.168.72.225 embed-certs-940222 localhost minikube]
	I0717 01:56:17.578140   71146 provision.go:177] copyRemoteCerts
	I0717 01:56:17.578195   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:56:17.578221   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.581141   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.581432   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.581457   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.581697   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.581892   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.582038   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.582219   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:17.664867   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:56:17.691053   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 01:56:17.715816   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 01:56:17.742153   71146 provision.go:87] duration metric: took 302.817653ms to configureAuth
	I0717 01:56:17.742180   71146 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:56:17.742405   71146 config.go:182] Loaded profile config "embed-certs-940222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:56:17.742486   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.745102   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.745369   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.745398   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.745608   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.745820   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.746019   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.746209   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.746510   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:17.746738   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:17.746761   71146 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:56:18.017395   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:56:18.017420   71146 machine.go:97] duration metric: took 922.405002ms to provisionDockerMachine
	I0717 01:56:18.017433   71146 start.go:293] postStartSetup for "embed-certs-940222" (driver="kvm2")
	I0717 01:56:18.017449   71146 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:56:18.017469   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.017817   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:56:18.017846   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:18.020599   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.021051   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.021081   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.021228   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:18.021410   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.021556   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:18.021660   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:18.101432   71146 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:56:18.105722   71146 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:56:18.105742   71146 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:56:18.105797   71146 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:56:18.105866   71146 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:56:18.105944   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:56:18.115228   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:56:18.139857   71146 start.go:296] duration metric: took 122.411322ms for postStartSetup
	I0717 01:56:18.139924   71146 fix.go:56] duration metric: took 19.608111597s for fixHost
	I0717 01:56:18.139951   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:18.142466   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.142865   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.142886   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.143098   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:18.143262   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.143444   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.143662   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:18.143852   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:18.144022   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:18.144033   71146 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:56:18.243604   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181378.218663213
	
	I0717 01:56:18.243635   71146 fix.go:216] guest clock: 1721181378.218663213
	I0717 01:56:18.243644   71146 fix.go:229] Guest: 2024-07-17 01:56:18.218663213 +0000 UTC Remote: 2024-07-17 01:56:18.139933424 +0000 UTC m=+355.354069584 (delta=78.729789ms)
	I0717 01:56:18.243662   71146 fix.go:200] guest clock delta is within tolerance: 78.729789ms
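	(fix.go above reads the guest's date +%s.%N, compares it with the host-side timestamp, and accepts the machine when the delta is within tolerance. A sketch of that comparison using the values from the log; the one-second tolerance is an assumption, not minikube's configured limit.)

package main

import (
	"fmt"
	"math"
	"time"
)

// clockDeltaOK reports the guest/host clock delta and whether it is within tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	// Values taken from the log above: guest 1721181378.218663213, host timestamp ~78.7 ms earlier.
	guest := time.Unix(1721181378, 218663213)
	host := guest.Add(-78729789 * time.Nanosecond)
	delta, ok := clockDeltaOK(guest, host, time.Second)
	fmt.Printf("delta=%s within tolerance=%v\n", delta, ok)
}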
	I0717 01:56:18.243667   71146 start.go:83] releasing machines lock for "embed-certs-940222", held for 19.711916707s
	I0717 01:56:18.243684   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.243952   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetIP
	I0717 01:56:18.246454   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.246881   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.246907   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.247135   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.247618   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.247828   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.247919   71146 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:56:18.247958   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:18.248050   71146 ssh_runner.go:195] Run: cat /version.json
	I0717 01:56:18.248074   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:18.250520   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.250914   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.250952   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.250973   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.251222   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:18.251403   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.251463   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.251495   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.251575   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:18.251668   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:18.251747   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:18.251817   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.251975   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:18.252103   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:18.351600   71146 ssh_runner.go:195] Run: systemctl --version
	I0717 01:56:18.357586   71146 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:56:18.503767   71146 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:56:18.511637   71146 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:56:18.511724   71146 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:56:18.530209   71146 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:56:18.530235   71146 start.go:495] detecting cgroup driver to use...
	I0717 01:56:18.530303   71146 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:56:18.551740   71146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:56:18.566975   71146 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:56:18.567044   71146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:56:18.585100   71146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:56:18.601151   71146 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:56:18.735644   71146 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:56:18.895436   71146 docker.go:233] disabling docker service ...
	I0717 01:56:18.895505   71146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:56:18.910354   71146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:56:18.922999   71146 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:56:19.065365   71146 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:56:19.179337   71146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:56:19.194454   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:56:19.213281   71146 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 01:56:19.213339   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.223531   71146 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:56:19.223594   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.233691   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.243695   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.255192   71146 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:56:19.266082   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.276861   71146 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.295903   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
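	(The sed -i commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to cgroupfs, put conmon in the pod cgroup, and open unprivileged ports via default_sysctls. A hedged sketch of applying the two simplest of those edits from Go; setCrioOption is a hypothetical helper, and it shells out locally rather than over SSH as the log does.)

package main

import (
	"fmt"
	"os/exec"
)

const crioConf = "/etc/crio/crio.conf.d/02-crio.conf"

// setCrioOption rewrites a `key = value` line in the CRI-O drop-in config,
// mirroring the `sudo sed -i` calls in the log.
func setCrioOption(key, value string) error {
	expr := fmt.Sprintf(`s|^.*%s = .*$|%s = "%s"|`, key, key, value)
	return exec.Command("sudo", "sed", "-i", expr, crioConf).Run()
}

func main() {
	for key, value := range map[string]string{
		"pause_image":    "registry.k8s.io/pause:3.9",
		"cgroup_manager": "cgroupfs",
	} {
		if err := setCrioOption(key, value); err != nil {
			fmt.Println("failed to set", key, ":", err)
		}
	}
}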
	I0717 01:56:19.306114   71146 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:56:19.316226   71146 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:56:19.316275   71146 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:56:19.329402   71146 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:56:19.340622   71146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:56:19.456624   71146 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:56:19.605945   71146 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:56:19.606051   71146 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:56:19.611067   71146 start.go:563] Will wait 60s for crictl version
	I0717 01:56:19.611116   71146 ssh_runner.go:195] Run: which crictl
	I0717 01:56:19.615065   71146 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:56:19.662925   71146 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:56:19.662989   71146 ssh_runner.go:195] Run: crio --version
	I0717 01:56:19.693240   71146 ssh_runner.go:195] Run: crio --version
	I0717 01:56:19.722332   71146 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 01:56:16.328318   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:16.328371   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:17.780821   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:19.780921   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:17.476562   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:17.976663   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:18.476958   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:18.976722   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:19.476641   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:19.976079   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:20.476899   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:20.976553   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:21.476087   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:21.976659   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:19.723930   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetIP
	I0717 01:56:19.726730   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:19.727084   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:19.727107   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:19.727314   71146 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 01:56:19.731814   71146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:56:19.745514   71146 kubeadm.go:883] updating cluster {Name:embed-certs-940222 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-940222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.225 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:56:19.745622   71146 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:56:19.745677   71146 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:56:19.782922   71146 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 01:56:19.782988   71146 ssh_runner.go:195] Run: which lz4
	I0717 01:56:19.786946   71146 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 01:56:19.791298   71146 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:56:19.791323   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 01:56:21.230910   71146 crio.go:462] duration metric: took 1.443984707s to copy over tarball
	I0717 01:56:21.231003   71146 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
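That crictl check is what decides whether the preload path runs: minikube lists the runtime's images as JSON and, when the expected kube-apiserver tag is missing, copies the lz4 preload tarball over SSH and unpacks it under /var. A sketch of the image check, assuming the usual "crictl images --output json" shape (an images array carrying repoTags):

// hasImage reports whether the CRI runtime already has the given tag,
// by parsing the JSON output of crictl. Sketch only; assumes sudo access.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type criImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs criImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.2")
	fmt.Println(ok, err)
}
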
	I0717 01:56:21.328607   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:21.328654   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:21.345118   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": read tcp 192.168.61.1:36190->192.168.61.174:8443: read: connection reset by peer
	I0717 01:56:21.824753   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:21.825500   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": dial tcp 192.168.61.174:8443: connect: connection refused
	I0717 01:56:22.325079   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:22.280465   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:24.779729   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:22.475994   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:22.976928   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:23.476906   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:23.975980   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:24.476208   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:24.976090   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:25.476425   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:25.976072   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:26.476991   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:26.976180   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:23.517174   71146 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.286133857s)
	I0717 01:56:23.517200   71146 crio.go:469] duration metric: took 2.286263798s to extract the tarball
	I0717 01:56:23.517210   71146 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 01:56:23.554084   71146 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:56:23.603831   71146 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:56:23.603861   71146 cache_images.go:84] Images are preloaded, skipping loading
	I0717 01:56:23.603871   71146 kubeadm.go:934] updating node { 192.168.72.225 8443 v1.30.2 crio true true} ...
	I0717 01:56:23.604004   71146 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-940222 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-940222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:56:23.604087   71146 ssh_runner.go:195] Run: crio config
	I0717 01:56:23.658775   71146 cni.go:84] Creating CNI manager for ""
	I0717 01:56:23.658794   71146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:56:23.658803   71146 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:56:23.658826   71146 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.225 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-940222 NodeName:embed-certs-940222 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:56:23.659007   71146 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-940222"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.225
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.225"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:56:23.659092   71146 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 01:56:23.669971   71146 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:56:23.670042   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:56:23.680949   71146 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0717 01:56:23.698917   71146 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:56:23.716218   71146 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
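The kubeadm config printed above is generated from the kubeadm options struct logged at kubeadm.go:181 and is what gets copied to /var/tmp/minikube/kubeadm.yaml.new (the 2162-byte scp just above), to be diffed later against the live copy. Minikube renders this kind of config from Go templates; the fragment below is a trimmed, hypothetical illustration of that rendering step, not the real template asset:

// Sketch: rendering a kubeadm config fragment from Go data via text/template.
package main

import (
	"os"
	"text/template"
)

const frag = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(frag))
	_ = t.Execute(os.Stdout, map[string]interface{}{
		"AdvertiseAddress": "192.168.72.225",
		"APIServerPort":    8443,
		"CRISocket":        "unix:///var/run/crio/crio.sock",
		"NodeName":         "embed-certs-940222",
		"NodeIP":           "192.168.72.225",
	})
}
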
	I0717 01:56:23.733971   71146 ssh_runner.go:195] Run: grep 192.168.72.225	control-plane.minikube.internal$ /etc/hosts
	I0717 01:56:23.738112   71146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.225	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:56:23.750915   71146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:56:23.894690   71146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:56:23.913418   71146 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222 for IP: 192.168.72.225
	I0717 01:56:23.913440   71146 certs.go:194] generating shared ca certs ...
	I0717 01:56:23.913456   71146 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:23.913630   71146 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:56:23.913703   71146 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:56:23.913729   71146 certs.go:256] generating profile certs ...
	I0717 01:56:23.913856   71146 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/client.key
	I0717 01:56:23.913926   71146 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/apiserver.key.d13a776d
	I0717 01:56:23.913968   71146 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/proxy-client.key
	I0717 01:56:23.914081   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:56:23.914123   71146 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:56:23.914134   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:56:23.914161   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:56:23.914188   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:56:23.914214   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:56:23.914256   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:56:23.914925   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:56:23.961346   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:56:24.006765   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:56:24.036852   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:56:24.064984   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0717 01:56:24.090778   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:56:24.116146   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:56:24.142429   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 01:56:24.168427   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:56:24.193691   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:56:24.218852   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:56:24.242932   71146 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:56:24.261434   71146 ssh_runner.go:195] Run: openssl version
	I0717 01:56:24.267358   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:56:24.280319   71146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:56:24.285286   71146 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:56:24.285358   71146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:56:24.291896   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:56:24.304027   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:56:24.315542   71146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:56:24.320212   71146 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:56:24.320283   71146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:56:24.326123   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:56:24.339982   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:56:24.352301   71146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:24.357023   71146 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:24.357078   71146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:24.363112   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
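The ls/openssl/ln triplets above install minikube's certificates into the guest trust store: each PEM is linked by name into /etc/ssl/certs and then linked a second time as <hash>.0, the file name OpenSSL uses to find trust anchors by subject hash. A small Go sketch of the hash-link step (installCert is a made-up helper; it shells out to openssl and assumes root):

// installCert symlinks certPath into /etc/ssl/certs under its OpenSSL
// subject-hash name (<hash>.0), mirroring the ln -fs step in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // -f behaviour: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
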
	I0717 01:56:24.375910   71146 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:56:24.380986   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:56:24.387276   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:56:24.393718   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:56:24.400367   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:56:24.406600   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:56:24.413161   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
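Each of the openssl x509 -checkend 86400 runs above asks one question: will this certificate still be valid 24 hours from now? The same check can be done in-process with the standard library by parsing the PEM and comparing NotAfter; a minimal sketch:

// expiresWithin reports whether the first certificate in pemPath expires
// within d, the native equivalent of `openssl x509 -checkend`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
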
	I0717 01:56:24.420455   71146 kubeadm.go:392] StartCluster: {Name:embed-certs-940222 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-940222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.225 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:56:24.420578   71146 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:56:24.420643   71146 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:56:24.460702   71146 cri.go:89] found id: ""
	I0717 01:56:24.460792   71146 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:56:24.472047   71146 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:56:24.472064   71146 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:56:24.472105   71146 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:56:24.483092   71146 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:56:24.484146   71146 kubeconfig.go:125] found "embed-certs-940222" server: "https://192.168.72.225:8443"
	I0717 01:56:24.486112   71146 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:56:24.497462   71146 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.225
	I0717 01:56:24.497496   71146 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:56:24.497511   71146 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:56:24.497571   71146 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:56:24.541423   71146 cri.go:89] found id: ""
	I0717 01:56:24.541486   71146 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:56:24.563272   71146 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:56:24.574859   71146 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:56:24.574883   71146 kubeadm.go:157] found existing configuration files:
	
	I0717 01:56:24.574930   71146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:56:24.584960   71146 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:56:24.585022   71146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:56:24.595950   71146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:56:24.605686   71146 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:56:24.605775   71146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:56:24.616191   71146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:56:24.625954   71146 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:56:24.626009   71146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:56:24.636254   71146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:56:24.648853   71146 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:56:24.648961   71146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:56:24.660491   71146 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:56:24.675329   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:24.795437   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:25.895383   71146 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.099913319s)
	I0717 01:56:25.895411   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:26.116274   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:26.286149   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
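The restart path above re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd local) against the generated config rather than performing a full kubeadm init. A sketch of driving that phase sequence with os/exec, using the same binary and config paths seen in the log:

// Runs the kubeadm init phases in the order the restart path uses.
// Sketch only; real code wraps these in sudo and an SSH runner.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	kubeadm := "/var/lib/minikube/binaries/v1.30.2/kubeadm"
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
			fmt.Printf("%v failed: %v\n%s", p, err, out)
			return
		}
	}
	fmt.Println("control plane phases completed")
}
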
	I0717 01:56:26.355208   71146 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:56:26.355296   71146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:26.855578   71146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:27.355880   71146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:27.371616   71146 api_server.go:72] duration metric: took 1.016410291s to wait for apiserver process to appear ...
	I0717 01:56:27.371642   71146 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:56:27.371671   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:27.325875   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:27.325920   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:26.780264   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:29.279376   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:29.836783   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:29.836811   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:29.836823   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:29.883657   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:29.883684   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:29.883695   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:29.895244   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:29.895270   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:30.371799   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:30.375903   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:56:30.375926   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:56:30.872627   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:30.876799   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:56:30.876830   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:56:31.372402   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:31.376723   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 200:
	ok
	I0717 01:56:31.382638   71146 api_server.go:141] control plane version: v1.30.2
	I0717 01:56:31.382663   71146 api_server.go:131] duration metric: took 4.011014381s to wait for apiserver health ...
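The progression above is typical for a restarted apiserver: 403s while the unauthenticated probe is rejected (the bootstrap RBAC rules that allow anonymous access to /healthz are not in place yet), then 500s while the rbac/bootstrap-roles and priority-class poststarthooks finish, then a plain 200 ok. A compact Go sketch of such a poll loop against a self-signed endpoint (waitForHealthz is illustrative, not minikube's api_server.go code):

// waitForHealthz polls an HTTPS /healthz endpoint until it returns 200 or the
// timeout elapses. Skips TLS verification, as an unauthenticated bootstrap
// probe must against a certificate the client does not yet trust.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the plain "ok" case
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.72.225:8443/healthz", 4*time.Minute))
}
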
	I0717 01:56:31.382672   71146 cni.go:84] Creating CNI manager for ""
	I0717 01:56:31.382679   71146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:56:31.384436   71146 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:56:27.476313   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:27.976700   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:28.476585   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:28.976008   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:29.477040   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:29.976892   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:30.476912   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:30.976626   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:31.476786   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:31.976148   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:31.385974   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:56:31.396977   71146 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 01:56:31.415740   71146 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:56:31.425268   71146 system_pods.go:59] 8 kube-system pods found
	I0717 01:56:31.425306   71146 system_pods.go:61] "coredns-7db6d8ff4d-wcw97" [0dd50538-f54d-43f1-bd8a-b9d3131c13f7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:56:31.425313   71146 system_pods.go:61] "etcd-embed-certs-940222" [80a0a87b-4b27-4940-b86b-6fda4a9c5168] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 01:56:31.425320   71146 system_pods.go:61] "kube-apiserver-embed-certs-940222" [566417b4-efef-46b2-8826-e15b8559e35f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 01:56:31.425328   71146 system_pods.go:61] "kube-controller-manager-embed-certs-940222" [0ec7f574-5ec3-401c-85a9-b9a9f6b7b979] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 01:56:31.425332   71146 system_pods.go:61] "kube-proxy-l58xk" [feae4e89-4900-4399-bd06-7d179280667d] Running
	I0717 01:56:31.425337   71146 system_pods.go:61] "kube-scheduler-embed-certs-940222" [d0e73061-6d3b-41fc-b2ed-e8c45e204d6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 01:56:31.425344   71146 system_pods.go:61] "metrics-server-569cc877fc-rhp7b" [07ffb1fa-240e-4c40-9ce4-93a1b51e179b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:56:31.425350   71146 system_pods.go:61] "storage-provisioner" [35aab5a5-6e1b-4572-aabe-a73fb1632252] Running
	I0717 01:56:31.425360   71146 system_pods.go:74] duration metric: took 9.598959ms to wait for pod list to return data ...
	I0717 01:56:31.425368   71146 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:56:31.429053   71146 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:56:31.429075   71146 node_conditions.go:123] node cpu capacity is 2
	I0717 01:56:31.429084   71146 node_conditions.go:105] duration metric: took 3.710466ms to run NodePressure ...
	I0717 01:56:31.429098   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:31.699456   71146 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 01:56:31.703803   71146 kubeadm.go:739] kubelet initialised
	I0717 01:56:31.703825   71146 kubeadm.go:740] duration metric: took 4.345324ms waiting for restarted kubelet to initialise ...
	I0717 01:56:31.703835   71146 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:56:31.708962   71146 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:31.712850   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.712871   71146 pod_ready.go:81] duration metric: took 3.888169ms for pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:31.712879   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.712891   71146 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:31.717134   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "etcd-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.717156   71146 pod_ready.go:81] duration metric: took 4.256764ms for pod "etcd-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:31.717163   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "etcd-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.717169   71146 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:31.721479   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.721498   71146 pod_ready.go:81] duration metric: took 4.321032ms for pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:31.721508   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.721515   71146 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:31.819188   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.819217   71146 pod_ready.go:81] duration metric: took 97.692306ms for pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:31.819226   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.819231   71146 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l58xk" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:32.219730   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "kube-proxy-l58xk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:32.219766   71146 pod_ready.go:81] duration metric: took 400.526796ms for pod "kube-proxy-l58xk" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:32.219775   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "kube-proxy-l58xk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:32.219782   71146 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:32.619930   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:32.619961   71146 pod_ready.go:81] duration metric: took 400.172543ms for pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:32.619971   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:32.619978   71146 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:33.019223   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:33.019252   71146 pod_ready.go:81] duration metric: took 399.266573ms for pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:33.019263   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:33.019271   71146 pod_ready.go:38] duration metric: took 1.315427432s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
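The pod_ready waits above poll each system-critical pod's Ready condition and skip ahead (the "skipping!" lines) while the hosting node is itself NotReady. A rough client-go sketch of a single readiness probe, assuming kubeconfig access to the cluster and a hypothetical isPodReady helper:

// isPodReady fetches a pod and reports whether its Ready condition is True.
// Sketch using client-go; kubeconfig path and pod name are illustrative.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19264-3908/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ready, err := isPodReady(cs, "kube-system", "coredns-7db6d8ff4d-wcw97")
	fmt.Println(ready, err)
}
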
	I0717 01:56:33.019291   71146 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 01:56:33.032094   71146 ops.go:34] apiserver oom_adj: -16
	I0717 01:56:33.032116   71146 kubeadm.go:597] duration metric: took 8.56004698s to restartPrimaryControlPlane
	I0717 01:56:33.032125   71146 kubeadm.go:394] duration metric: took 8.611681052s to StartCluster
	I0717 01:56:33.032140   71146 settings.go:142] acquiring lock: {Name:mk284e98550e98148329d8d8958a44beebf5635c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:33.032204   71146 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:56:33.033963   71146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:33.034198   71146 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.225 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:56:33.034337   71146 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 01:56:33.034405   71146 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-940222"
	I0717 01:56:33.034425   71146 addons.go:69] Setting metrics-server=true in profile "embed-certs-940222"
	I0717 01:56:33.034467   71146 addons.go:234] Setting addon metrics-server=true in "embed-certs-940222"
	W0717 01:56:33.034481   71146 addons.go:243] addon metrics-server should already be in state true
	I0717 01:56:33.034516   71146 host.go:66] Checking if "embed-certs-940222" exists ...
	I0717 01:56:33.034465   71146 addons.go:69] Setting default-storageclass=true in profile "embed-certs-940222"
	I0717 01:56:33.034469   71146 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-940222"
	I0717 01:56:33.034589   71146 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-940222"
	W0717 01:56:33.034632   71146 addons.go:243] addon storage-provisioner should already be in state true
	I0717 01:56:33.034411   71146 config.go:182] Loaded profile config "embed-certs-940222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:56:33.034725   71146 host.go:66] Checking if "embed-certs-940222" exists ...
	I0717 01:56:33.034963   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.034992   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.035052   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.035093   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.035199   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.035237   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.036051   71146 out.go:177] * Verifying Kubernetes components...
	I0717 01:56:33.037606   71146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:56:33.051343   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46137
	I0717 01:56:33.051970   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.052483   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.052516   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.052671   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41003
	I0717 01:56:33.052887   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.053016   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.053397   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.053443   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.053760   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.053775   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35669
	I0717 01:56:33.053779   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.054125   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.054139   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.054336   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:56:33.054625   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.054656   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.054984   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.055524   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.055563   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.057648   71146 addons.go:234] Setting addon default-storageclass=true in "embed-certs-940222"
	W0717 01:56:33.057668   71146 addons.go:243] addon default-storageclass should already be in state true
	I0717 01:56:33.057699   71146 host.go:66] Checking if "embed-certs-940222" exists ...
	I0717 01:56:33.058003   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.058036   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.070476   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38209
	I0717 01:56:33.070717   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I0717 01:56:33.071094   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.071289   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.071648   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.071665   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.071841   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.071863   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.072171   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.072293   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.072357   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:56:33.072581   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:56:33.073298   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46391
	I0717 01:56:33.073745   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.074224   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.074237   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.074585   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.074690   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:33.075032   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.075054   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.075361   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:33.077495   71146 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 01:56:33.077496   71146 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:56:33.079446   71146 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 01:56:33.079460   71146 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 01:56:33.079480   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:33.080373   71146 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:56:33.080386   71146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 01:56:33.080401   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:33.083272   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.083527   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.083623   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:33.083641   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.083899   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:33.084099   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:33.084168   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:33.084184   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.084273   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:33.084331   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:33.084463   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:33.084748   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:33.084890   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:33.085028   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:33.092382   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41853
	I0717 01:56:33.092826   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.093401   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.093418   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.094409   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.094576   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:56:33.096442   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:33.096730   71146 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 01:56:33.096750   71146 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 01:56:33.096768   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:33.099802   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.100290   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:33.100368   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.100472   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:33.100625   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:33.100760   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:33.100849   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:33.229494   71146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:56:33.246459   71146 node_ready.go:35] waiting up to 6m0s for node "embed-certs-940222" to be "Ready" ...
	I0717 01:56:33.400804   71146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 01:56:33.400824   71146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 01:56:33.411866   71146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:56:33.413220   71146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 01:56:33.426485   71146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 01:56:33.426506   71146 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 01:56:33.476707   71146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:56:33.476729   71146 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 01:56:33.539095   71146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:56:34.542027   71146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.130125192s)
	I0717 01:56:34.542089   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.542102   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.542103   71146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.128853338s)
	I0717 01:56:34.542139   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.542151   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.542420   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Closing plugin on server side
	I0717 01:56:34.542442   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.542442   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Closing plugin on server side
	I0717 01:56:34.542447   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.542450   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.542468   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.542474   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.542483   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.542505   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.542517   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.542711   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.542727   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.542715   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Closing plugin on server side
	I0717 01:56:34.542835   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.542847   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.549135   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.549160   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.549405   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.549428   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.616065   71146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.076933862s)
	I0717 01:56:34.616127   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.616142   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.616429   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.616479   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.616489   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Closing plugin on server side
	I0717 01:56:34.616499   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.616541   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.616784   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.616800   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.616810   71146 addons.go:475] Verifying addon metrics-server=true in "embed-certs-940222"
	I0717 01:56:34.619698   71146 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 01:56:32.326261   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:32.326310   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:31.779064   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:33.780671   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:32.475986   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:32.976812   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:33.476601   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:33.976667   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:34.476897   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:34.976610   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:35.476444   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:35.976859   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:36.476092   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:36.976979   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:34.620987   71146 addons.go:510] duration metric: took 1.586659462s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 01:56:35.250360   71146 node_ready.go:53] node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:37.251933   71146 node_ready.go:53] node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:37.326685   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:37.326726   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:39.977828   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:39.977860   71603 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:39.977877   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:40.002499   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:40.002532   71603 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:36.280516   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:38.779351   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:40.324290   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:40.329888   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:56:40.329914   71603 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:56:40.824413   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:40.831375   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:56:40.831407   71603 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:56:41.324677   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:41.333259   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 200:
	ok
	I0717 01:56:41.341378   71603 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 01:56:41.341426   71603 api_server.go:131] duration metric: took 40.517438405s to wait for apiserver health ...
	I0717 01:56:41.341438   71603 cni.go:84] Creating CNI manager for ""
	I0717 01:56:41.341447   71603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:56:41.343489   71603 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:56:37.476813   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:37.976779   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:38.476554   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:38.976791   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:39.476946   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:39.976044   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:40.476526   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:40.976315   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:41.476688   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:41.976203   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:39.750483   71146 node_ready.go:53] node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:40.249907   71146 node_ready.go:49] node "embed-certs-940222" has status "Ready":"True"
	I0717 01:56:40.249934   71146 node_ready.go:38] duration metric: took 7.003442258s for node "embed-certs-940222" to be "Ready" ...
	I0717 01:56:40.249945   71146 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:56:40.255811   71146 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:40.762773   71146 pod_ready.go:92] pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:40.762795   71146 pod_ready.go:81] duration metric: took 506.956885ms for pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:40.762806   71146 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:42.768945   71146 pod_ready.go:102] pod "etcd-embed-certs-940222" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:41.344846   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:56:41.360339   71603 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 01:56:41.385845   71603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:56:41.409812   71603 system_pods.go:59] 8 kube-system pods found
	I0717 01:56:41.409843   71603 system_pods.go:61] "coredns-5cfdc65f69-ztqz8" [7c9caec8-56b6-4faa-9410-0528f108696c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:56:41.409849   71603 system_pods.go:61] "etcd-no-preload-391501" [603f01a1-2b07-4d1d-be14-4da4a9f1e1b2] Running
	I0717 01:56:41.409854   71603 system_pods.go:61] "kube-apiserver-no-preload-391501" [7733c5b6-5e30-472b-920d-3849f2849f7b] Running
	I0717 01:56:41.409860   71603 system_pods.go:61] "kube-controller-manager-no-preload-391501" [c1afab7e-9b46-4940-94ec-e62ebc10f406] Running
	I0717 01:56:41.409865   71603 system_pods.go:61] "kube-proxy-zbqhw" [26056c12-35cd-4a3e-b40a-1eca055bd1e2] Running
	I0717 01:56:41.409869   71603 system_pods.go:61] "kube-scheduler-no-preload-391501" [98f81994-9d2a-45b8-9719-90e181ee5d6f] Running
	I0717 01:56:41.409877   71603 system_pods.go:61] "metrics-server-78fcd8795b-g9x96" [86a6a2c3-ae04-486d-9751-0cc801f9fbfb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:56:41.409887   71603 system_pods.go:61] "storage-provisioner" [8b938905-d8e1-4129-8426-5e31a05d38db] Running
	I0717 01:56:41.409895   71603 system_pods.go:74] duration metric: took 24.018074ms to wait for pod list to return data ...
	I0717 01:56:41.409906   71603 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:56:41.418825   71603 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:56:41.418856   71603 node_conditions.go:123] node cpu capacity is 2
	I0717 01:56:41.418868   71603 node_conditions.go:105] duration metric: took 8.953821ms to run NodePressure ...
	I0717 01:56:41.418892   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:41.713730   71603 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 01:56:41.719162   71603 retry.go:31] will retry after 180.435127ms: kubelet not initialised
	I0717 01:56:41.906299   71603 retry.go:31] will retry after 320.946038ms: kubelet not initialised
	I0717 01:56:42.232875   71603 retry.go:31] will retry after 423.072333ms: kubelet not initialised
	I0717 01:56:42.661412   71603 retry.go:31] will retry after 1.138026932s: kubelet not initialised
	I0717 01:56:43.809525   71603 retry.go:31] will retry after 1.187704503s: kubelet not initialised
	I0717 01:56:45.009815   71603 kubeadm.go:739] kubelet initialised
	I0717 01:56:45.009839   71603 kubeadm.go:740] duration metric: took 3.296082732s waiting for restarted kubelet to initialise ...
	I0717 01:56:45.009850   71603 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:56:45.021149   71603 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:40.780159   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:43.279699   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:45.280407   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:42.476301   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:42.976939   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:43.477021   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:43.976910   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:44.476766   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:44.976415   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:45.476987   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:45.976666   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:46.476735   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:46.976643   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:44.770078   71146 pod_ready.go:102] pod "etcd-embed-certs-940222" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:47.269496   71146 pod_ready.go:92] pod "etcd-embed-certs-940222" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.269524   71146 pod_ready.go:81] duration metric: took 6.506711113s for pod "etcd-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.269538   71146 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.277267   71146 pod_ready.go:92] pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.277294   71146 pod_ready.go:81] duration metric: took 7.747271ms for pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.277309   71146 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.286697   71146 pod_ready.go:92] pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.286715   71146 pod_ready.go:81] duration metric: took 9.397698ms for pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.286723   71146 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l58xk" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.291876   71146 pod_ready.go:92] pod "kube-proxy-l58xk" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.291897   71146 pod_ready.go:81] duration metric: took 5.168432ms for pod "kube-proxy-l58xk" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.291905   71146 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.296201   71146 pod_ready.go:92] pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.296215   71146 pod_ready.go:81] duration metric: took 4.304055ms for pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.296222   71146 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.027495   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:49.028127   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:47.779497   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:50.279065   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:47.476576   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:47.976502   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:48.476634   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:48.976299   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:49.476069   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:49.976086   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:50.476859   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:50.976441   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:51.476217   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:51.976585   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:49.303729   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:51.802778   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:51.029194   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:53.528363   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:52.778915   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:54.780173   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:52.476652   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:52.976136   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:53.476991   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:53.976168   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:54.477049   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:54.976279   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:55.476176   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:55.976049   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:56.476464   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:56.976802   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:54.308491   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:56.802797   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:55.528547   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:57.533612   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:00.030406   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:57.278908   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:59.279393   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:57.476661   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:57.976021   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:58.477049   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:58.976940   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:59.476773   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:59.976397   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:00.476591   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:00.976189   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:01.476917   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:01.976263   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:58.806045   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:00.807112   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:02.529203   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:05.028677   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:01.779903   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:03.780163   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:02.476048   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:02.976019   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:03.476604   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:03.976602   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:04.477004   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:04.976726   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:05.476934   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:05.975985   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:06.476331   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:06.976185   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:03.302031   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:05.303601   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:07.803763   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:07.528021   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:09.528499   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:05.780204   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:08.279630   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:07.476887   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:07.975972   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:08.476034   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:08.976678   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:09.476927   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:09.477010   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:09.513328   71929 cri.go:89] found id: ""
	I0717 01:57:09.513352   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.513361   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:09.513368   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:09.513418   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:09.551203   71929 cri.go:89] found id: ""
	I0717 01:57:09.551228   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.551237   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:09.551244   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:09.551308   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:09.585321   71929 cri.go:89] found id: ""
	I0717 01:57:09.585352   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.585363   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:09.585370   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:09.585427   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:09.623977   71929 cri.go:89] found id: ""
	I0717 01:57:09.624004   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.624012   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:09.624019   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:09.624078   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:09.663338   71929 cri.go:89] found id: ""
	I0717 01:57:09.663367   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.663374   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:09.663380   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:09.663425   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:09.696381   71929 cri.go:89] found id: ""
	I0717 01:57:09.696412   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.696423   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:09.696436   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:09.696482   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:09.735892   71929 cri.go:89] found id: ""
	I0717 01:57:09.735922   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.735932   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:09.735944   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:09.736006   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:09.775878   71929 cri.go:89] found id: ""
	I0717 01:57:09.775909   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.775919   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:09.775929   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:09.775942   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:09.830021   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:09.830057   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:09.844753   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:09.844783   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:09.985140   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:09.985165   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:09.985179   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:10.049946   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:10.049984   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:10.310038   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:12.805565   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:11.529122   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:14.028939   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:10.779935   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:13.278388   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:15.280027   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:12.592959   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:12.608385   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:12.608467   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:12.649900   71929 cri.go:89] found id: ""
	I0717 01:57:12.649931   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.649942   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:12.649950   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:12.650021   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:12.684915   71929 cri.go:89] found id: ""
	I0717 01:57:12.684941   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.684948   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:12.684956   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:12.685010   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:12.727718   71929 cri.go:89] found id: ""
	I0717 01:57:12.727758   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.727766   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:12.727788   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:12.727864   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:12.767212   71929 cri.go:89] found id: ""
	I0717 01:57:12.767236   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.767244   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:12.767249   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:12.767295   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:12.806301   71929 cri.go:89] found id: ""
	I0717 01:57:12.806320   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.806327   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:12.806332   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:12.806405   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:12.843118   71929 cri.go:89] found id: ""
	I0717 01:57:12.843151   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.843162   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:12.843170   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:12.843245   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:12.876671   71929 cri.go:89] found id: ""
	I0717 01:57:12.876697   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.876707   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:12.876714   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:12.876790   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:12.916201   71929 cri.go:89] found id: ""
	I0717 01:57:12.916226   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.916232   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:12.916240   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:12.916250   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:12.970346   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:12.970385   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:12.985029   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:12.985053   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:13.068314   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:13.068340   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:13.068352   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:13.147862   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:13.147897   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:15.703130   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:15.717081   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:15.717160   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:15.757513   71929 cri.go:89] found id: ""
	I0717 01:57:15.757538   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.757545   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:15.757552   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:15.757599   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:15.794185   71929 cri.go:89] found id: ""
	I0717 01:57:15.794218   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.794231   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:15.794238   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:15.794300   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:15.830589   71929 cri.go:89] found id: ""
	I0717 01:57:15.830619   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.830628   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:15.830634   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:15.830694   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:15.869673   71929 cri.go:89] found id: ""
	I0717 01:57:15.869702   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.869713   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:15.869720   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:15.869782   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:15.909225   71929 cri.go:89] found id: ""
	I0717 01:57:15.909257   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.909267   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:15.909278   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:15.909343   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:15.944389   71929 cri.go:89] found id: ""
	I0717 01:57:15.944417   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.944424   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:15.944430   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:15.944490   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:15.982871   71929 cri.go:89] found id: ""
	I0717 01:57:15.982898   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.982907   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:15.982915   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:15.982983   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:16.025674   71929 cri.go:89] found id: ""
	I0717 01:57:16.025701   71929 logs.go:276] 0 containers: []
	W0717 01:57:16.025711   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:16.025721   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:16.025736   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:16.111608   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:16.111627   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:16.111638   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:16.184650   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:16.184689   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:16.230647   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:16.230693   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:16.286675   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:16.286710   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:15.303141   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:17.304891   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:16.029794   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:18.529463   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:17.780034   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:20.279882   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:18.802487   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:18.817483   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:18.817562   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:18.861623   71929 cri.go:89] found id: ""
	I0717 01:57:18.861653   71929 logs.go:276] 0 containers: []
	W0717 01:57:18.861664   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:18.861671   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:18.861733   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:18.901335   71929 cri.go:89] found id: ""
	I0717 01:57:18.901359   71929 logs.go:276] 0 containers: []
	W0717 01:57:18.901367   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:18.901372   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:18.901427   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:18.936477   71929 cri.go:89] found id: ""
	I0717 01:57:18.936508   71929 logs.go:276] 0 containers: []
	W0717 01:57:18.936518   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:18.936524   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:18.936581   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:18.971056   71929 cri.go:89] found id: ""
	I0717 01:57:18.971087   71929 logs.go:276] 0 containers: []
	W0717 01:57:18.971098   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:18.971106   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:18.971157   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:19.005399   71929 cri.go:89] found id: ""
	I0717 01:57:19.005431   71929 logs.go:276] 0 containers: []
	W0717 01:57:19.005453   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:19.005460   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:19.005525   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:19.040218   71929 cri.go:89] found id: ""
	I0717 01:57:19.040242   71929 logs.go:276] 0 containers: []
	W0717 01:57:19.040250   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:19.040257   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:19.040317   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:19.073365   71929 cri.go:89] found id: ""
	I0717 01:57:19.073392   71929 logs.go:276] 0 containers: []
	W0717 01:57:19.073402   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:19.073409   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:19.073471   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:19.108670   71929 cri.go:89] found id: ""
	I0717 01:57:19.108701   71929 logs.go:276] 0 containers: []
	W0717 01:57:19.108713   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:19.108725   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:19.108743   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:19.186077   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:19.186111   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:19.232181   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:19.232214   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:19.288713   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:19.288755   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:19.303089   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:19.303115   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:19.386372   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:21.886666   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:21.900905   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:21.900966   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:21.934955   71929 cri.go:89] found id: ""
	I0717 01:57:21.934979   71929 logs.go:276] 0 containers: []
	W0717 01:57:21.934987   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:21.934993   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:21.935036   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:21.972180   71929 cri.go:89] found id: ""
	I0717 01:57:21.972203   71929 logs.go:276] 0 containers: []
	W0717 01:57:21.972211   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:21.972217   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:21.972271   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:22.010452   71929 cri.go:89] found id: ""
	I0717 01:57:22.010479   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.010487   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:22.010493   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:22.010547   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:22.045824   71929 cri.go:89] found id: ""
	I0717 01:57:22.045888   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.045902   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:22.045911   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:22.045984   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:22.084734   71929 cri.go:89] found id: ""
	I0717 01:57:22.084760   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.084769   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:22.084774   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:22.084842   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:22.119808   71929 cri.go:89] found id: ""
	I0717 01:57:22.119838   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.119846   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:22.119852   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:22.119910   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:22.157537   71929 cri.go:89] found id: ""
	I0717 01:57:22.157583   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.157610   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:22.157620   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:22.157687   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:22.196021   71929 cri.go:89] found id: ""
	I0717 01:57:22.196052   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.196062   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:22.196079   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:22.196094   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:22.274350   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:22.274373   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:22.274386   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:22.364363   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:22.364401   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:19.803506   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:22.306698   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:21.028767   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:23.527943   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:24.529027   71603 pod_ready.go:92] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.529064   71603 pod_ready.go:81] duration metric: took 39.50788355s for pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.529078   71603 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.534655   71603 pod_ready.go:92] pod "etcd-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.534680   71603 pod_ready.go:81] duration metric: took 5.594492ms for pod "etcd-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.534691   71603 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.539602   71603 pod_ready.go:92] pod "kube-apiserver-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.539622   71603 pod_ready.go:81] duration metric: took 4.923891ms for pod "kube-apiserver-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.539631   71603 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.544475   71603 pod_ready.go:92] pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.544516   71603 pod_ready.go:81] duration metric: took 4.862078ms for pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.544532   71603 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zbqhw" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.549173   71603 pod_ready.go:92] pod "kube-proxy-zbqhw" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.549193   71603 pod_ready.go:81] duration metric: took 4.653986ms for pod "kube-proxy-zbqhw" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.549203   71603 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.925916   71603 pod_ready.go:92] pod "kube-scheduler-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.925944   71603 pod_ready.go:81] duration metric: took 376.73343ms for pod "kube-scheduler-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.925959   71603 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:22.779802   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:25.280281   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:22.410052   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:22.410092   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:22.462289   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:22.462326   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:24.978560   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:24.992533   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:24.992601   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:25.027708   71929 cri.go:89] found id: ""
	I0717 01:57:25.027746   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.027754   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:25.027760   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:25.027809   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:25.066946   71929 cri.go:89] found id: ""
	I0717 01:57:25.066974   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.066985   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:25.066992   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:25.067051   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:25.107209   71929 cri.go:89] found id: ""
	I0717 01:57:25.107238   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.107248   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:25.107254   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:25.107300   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:25.141548   71929 cri.go:89] found id: ""
	I0717 01:57:25.141577   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.141587   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:25.141594   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:25.141652   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:25.175822   71929 cri.go:89] found id: ""
	I0717 01:57:25.175853   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.175861   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:25.175866   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:25.175917   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:25.215672   71929 cri.go:89] found id: ""
	I0717 01:57:25.215705   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.215718   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:25.215726   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:25.215786   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:25.260392   71929 cri.go:89] found id: ""
	I0717 01:57:25.260422   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.260434   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:25.260442   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:25.260510   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:25.309953   71929 cri.go:89] found id: ""
	I0717 01:57:25.309981   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.309990   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:25.309999   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:25.310013   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:25.414204   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:25.414229   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:25.414244   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:25.501849   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:25.501883   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:25.545129   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:25.545163   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:25.599948   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:25.599984   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:24.803870   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:27.302993   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:26.932319   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:28.932999   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:27.280455   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:29.778817   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:28.115776   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:28.129710   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:28.129776   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:28.165380   71929 cri.go:89] found id: ""
	I0717 01:57:28.165409   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.165419   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:28.165425   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:28.165473   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:28.199225   71929 cri.go:89] found id: ""
	I0717 01:57:28.199251   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.199259   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:28.199264   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:28.199314   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:28.235564   71929 cri.go:89] found id: ""
	I0717 01:57:28.235585   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.235593   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:28.235598   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:28.235649   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:28.270377   71929 cri.go:89] found id: ""
	I0717 01:57:28.270409   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.270427   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:28.270435   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:28.270488   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:28.310132   71929 cri.go:89] found id: ""
	I0717 01:57:28.310156   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.310163   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:28.310168   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:28.310222   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:28.347590   71929 cri.go:89] found id: ""
	I0717 01:57:28.347619   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.347630   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:28.347638   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:28.347696   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:28.387953   71929 cri.go:89] found id: ""
	I0717 01:57:28.387988   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.388001   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:28.388010   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:28.388072   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:28.428788   71929 cri.go:89] found id: ""
	I0717 01:57:28.428811   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.428818   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:28.428826   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:28.428838   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:28.487411   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:28.487465   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:28.501121   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:28.501152   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:28.576296   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:28.576320   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:28.576335   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:28.660246   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:28.660288   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:31.201238   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:31.221132   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:31.221192   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:31.279839   71929 cri.go:89] found id: ""
	I0717 01:57:31.279867   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.279876   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:31.279884   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:31.279943   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:31.359764   71929 cri.go:89] found id: ""
	I0717 01:57:31.359796   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.359807   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:31.359814   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:31.359873   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:31.397045   71929 cri.go:89] found id: ""
	I0717 01:57:31.397077   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.397087   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:31.397094   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:31.397157   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:31.441356   71929 cri.go:89] found id: ""
	I0717 01:57:31.441388   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.441397   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:31.441404   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:31.441459   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:31.484014   71929 cri.go:89] found id: ""
	I0717 01:57:31.484040   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.484053   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:31.484060   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:31.484124   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:31.520686   71929 cri.go:89] found id: ""
	I0717 01:57:31.520714   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.520725   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:31.520733   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:31.520792   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:31.557300   71929 cri.go:89] found id: ""
	I0717 01:57:31.557326   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.557334   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:31.557339   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:31.557387   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:31.597753   71929 cri.go:89] found id: ""
	I0717 01:57:31.597782   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.597792   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:31.597804   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:31.597818   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:31.656796   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:31.656837   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:31.671287   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:31.671311   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:31.742752   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:31.742772   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:31.742784   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:31.828154   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:31.828186   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:29.303279   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:31.303332   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:31.434410   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:33.932319   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:31.778853   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:33.780535   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:34.368947   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:34.384323   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:34.384402   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:34.421138   71929 cri.go:89] found id: ""
	I0717 01:57:34.421171   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.421182   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:34.421190   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:34.421263   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:34.459077   71929 cri.go:89] found id: ""
	I0717 01:57:34.459105   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.459116   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:34.459123   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:34.459180   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:34.492987   71929 cri.go:89] found id: ""
	I0717 01:57:34.493016   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.493027   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:34.493038   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:34.493098   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:34.527801   71929 cri.go:89] found id: ""
	I0717 01:57:34.527827   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.527836   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:34.527841   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:34.527890   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:34.562877   71929 cri.go:89] found id: ""
	I0717 01:57:34.562904   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.562914   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:34.562921   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:34.562981   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:34.599387   71929 cri.go:89] found id: ""
	I0717 01:57:34.599409   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.599417   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:34.599423   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:34.599479   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:34.636087   71929 cri.go:89] found id: ""
	I0717 01:57:34.636118   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.636126   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:34.636132   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:34.636194   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:34.673168   71929 cri.go:89] found id: ""
	I0717 01:57:34.673196   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.673206   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:34.673214   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:34.673226   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:34.712833   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:34.712864   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:34.765926   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:34.765959   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:34.780024   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:34.780049   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:34.863080   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:34.863106   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:34.863122   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:33.803621   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:36.306114   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:35.933050   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:38.432520   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:36.280143   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:38.779168   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:37.446644   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:37.463015   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:37.463090   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:37.499563   71929 cri.go:89] found id: ""
	I0717 01:57:37.499592   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.499601   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:37.499607   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:37.499663   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:37.538516   71929 cri.go:89] found id: ""
	I0717 01:57:37.538543   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.538572   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:37.538579   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:37.538638   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:37.577032   71929 cri.go:89] found id: ""
	I0717 01:57:37.577061   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.577068   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:37.577074   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:37.577129   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:37.613534   71929 cri.go:89] found id: ""
	I0717 01:57:37.613563   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.613574   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:37.613582   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:37.613646   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:37.651346   71929 cri.go:89] found id: ""
	I0717 01:57:37.651370   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.651381   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:37.651389   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:37.651451   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:37.685949   71929 cri.go:89] found id: ""
	I0717 01:57:37.685989   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.686001   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:37.686008   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:37.686068   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:37.721706   71929 cri.go:89] found id: ""
	I0717 01:57:37.721744   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.721752   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:37.721759   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:37.721812   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:37.758948   71929 cri.go:89] found id: ""
	I0717 01:57:37.758976   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.758985   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:37.758994   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:37.759005   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:37.835305   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:37.835334   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:37.835349   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:37.916627   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:37.916660   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:37.956819   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:37.956851   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:38.007596   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:38.007641   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:40.522573   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:40.536850   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:40.536924   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:40.576172   71929 cri.go:89] found id: ""
	I0717 01:57:40.576200   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.576211   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:40.576218   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:40.576277   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:40.611926   71929 cri.go:89] found id: ""
	I0717 01:57:40.611958   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.611969   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:40.611976   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:40.612039   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:40.647225   71929 cri.go:89] found id: ""
	I0717 01:57:40.647251   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.647259   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:40.647265   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:40.647315   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:40.683871   71929 cri.go:89] found id: ""
	I0717 01:57:40.683902   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.683917   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:40.683925   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:40.683999   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:40.720941   71929 cri.go:89] found id: ""
	I0717 01:57:40.720971   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.720982   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:40.720989   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:40.721053   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:40.756695   71929 cri.go:89] found id: ""
	I0717 01:57:40.756728   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.756739   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:40.756746   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:40.756801   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:40.794181   71929 cri.go:89] found id: ""
	I0717 01:57:40.794214   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.794221   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:40.794226   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:40.794281   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:40.830361   71929 cri.go:89] found id: ""
	I0717 01:57:40.830396   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.830407   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:40.830417   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:40.830436   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:40.844827   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:40.844849   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:40.913003   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:40.913021   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:40.913035   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:40.996314   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:40.996348   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:41.041120   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:41.041151   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:38.801850   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:40.802727   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:42.802814   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:40.934130   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:43.432799   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:40.780350   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:43.279200   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:45.279971   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:43.593226   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:43.606395   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:43.606461   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:43.646260   71929 cri.go:89] found id: ""
	I0717 01:57:43.646290   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.646302   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:43.646310   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:43.646368   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:43.681148   71929 cri.go:89] found id: ""
	I0717 01:57:43.681174   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.681182   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:43.681189   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:43.681250   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:43.716568   71929 cri.go:89] found id: ""
	I0717 01:57:43.716595   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.716606   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:43.716613   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:43.716675   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:43.750507   71929 cri.go:89] found id: ""
	I0717 01:57:43.750536   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.750558   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:43.750566   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:43.750627   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:43.787207   71929 cri.go:89] found id: ""
	I0717 01:57:43.787234   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.787244   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:43.787251   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:43.787311   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:43.822997   71929 cri.go:89] found id: ""
	I0717 01:57:43.823034   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.823045   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:43.823052   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:43.823118   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:43.860605   71929 cri.go:89] found id: ""
	I0717 01:57:43.860632   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.860640   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:43.860646   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:43.860702   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:43.897419   71929 cri.go:89] found id: ""
	I0717 01:57:43.897451   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.897463   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:43.897473   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:43.897492   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:43.956361   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:43.956393   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:43.971077   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:43.971104   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:44.045234   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:44.045258   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:44.045275   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:44.122508   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:44.122544   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:46.660516   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:46.675555   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:46.675651   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:46.709264   71929 cri.go:89] found id: ""
	I0717 01:57:46.709291   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.709300   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:46.709306   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:46.709362   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:46.744865   71929 cri.go:89] found id: ""
	I0717 01:57:46.744898   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.744908   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:46.744915   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:46.744971   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:46.785837   71929 cri.go:89] found id: ""
	I0717 01:57:46.785860   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.785870   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:46.785878   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:46.785932   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:46.828801   71929 cri.go:89] found id: ""
	I0717 01:57:46.828832   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.828842   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:46.828849   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:46.828907   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:46.863122   71929 cri.go:89] found id: ""
	I0717 01:57:46.863151   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.863162   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:46.863175   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:46.863232   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:46.900705   71929 cri.go:89] found id: ""
	I0717 01:57:46.900731   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.900739   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:46.900744   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:46.900790   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:46.935774   71929 cri.go:89] found id: ""
	I0717 01:57:46.935816   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.935829   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:46.935840   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:46.935895   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:46.969274   71929 cri.go:89] found id: ""
	I0717 01:57:46.969304   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.969315   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:46.969325   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:46.969339   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:47.040318   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:47.040343   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:47.040358   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:47.119920   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:47.119954   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:47.168818   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:47.168847   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:47.221983   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:47.222034   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:45.303812   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:47.304051   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:45.433020   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:47.932755   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:49.936075   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:47.780328   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:49.781850   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:49.736564   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:49.749966   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:49.750025   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:49.788294   71929 cri.go:89] found id: ""
	I0717 01:57:49.788321   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.788332   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:49.788339   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:49.788396   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:49.826406   71929 cri.go:89] found id: ""
	I0717 01:57:49.826431   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.826440   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:49.826445   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:49.826491   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:49.864978   71929 cri.go:89] found id: ""
	I0717 01:57:49.865005   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.865015   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:49.865020   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:49.865074   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:49.901238   71929 cri.go:89] found id: ""
	I0717 01:57:49.901270   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.901281   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:49.901300   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:49.901366   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:49.937035   71929 cri.go:89] found id: ""
	I0717 01:57:49.937058   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.937065   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:49.937070   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:49.937207   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:49.977793   71929 cri.go:89] found id: ""
	I0717 01:57:49.977816   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.977823   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:49.977828   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:49.977873   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:50.012915   71929 cri.go:89] found id: ""
	I0717 01:57:50.012942   71929 logs.go:276] 0 containers: []
	W0717 01:57:50.012952   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:50.012959   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:50.013025   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:50.049085   71929 cri.go:89] found id: ""
	I0717 01:57:50.049115   71929 logs.go:276] 0 containers: []
	W0717 01:57:50.049127   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:50.049138   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:50.049156   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:50.087521   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:50.087549   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:50.140934   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:50.140978   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:50.156001   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:50.156033   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:50.231780   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:50.231811   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:50.231835   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:49.802916   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:51.803036   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:52.432307   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:54.432384   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:52.278585   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:54.279641   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:52.810064   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:52.823442   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:52.823508   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:52.860753   71929 cri.go:89] found id: ""
	I0717 01:57:52.860778   71929 logs.go:276] 0 containers: []
	W0717 01:57:52.860789   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:52.860797   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:52.860852   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:52.896264   71929 cri.go:89] found id: ""
	I0717 01:57:52.896289   71929 logs.go:276] 0 containers: []
	W0717 01:57:52.896297   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:52.896303   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:52.896349   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:52.932613   71929 cri.go:89] found id: ""
	I0717 01:57:52.932640   71929 logs.go:276] 0 containers: []
	W0717 01:57:52.932649   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:52.932657   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:52.932722   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:52.969691   71929 cri.go:89] found id: ""
	I0717 01:57:52.969720   71929 logs.go:276] 0 containers: []
	W0717 01:57:52.969728   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:52.969734   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:52.969788   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:53.007039   71929 cri.go:89] found id: ""
	I0717 01:57:53.007067   71929 logs.go:276] 0 containers: []
	W0717 01:57:53.007075   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:53.007081   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:53.007135   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:53.047736   71929 cri.go:89] found id: ""
	I0717 01:57:53.047762   71929 logs.go:276] 0 containers: []
	W0717 01:57:53.047772   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:53.047778   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:53.047838   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:53.083192   71929 cri.go:89] found id: ""
	I0717 01:57:53.083216   71929 logs.go:276] 0 containers: []
	W0717 01:57:53.083225   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:53.083230   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:53.083276   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:53.118509   71929 cri.go:89] found id: ""
	I0717 01:57:53.118536   71929 logs.go:276] 0 containers: []
	W0717 01:57:53.118545   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:53.118564   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:53.118589   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:53.203003   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:53.203039   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:53.244602   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:53.244627   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:53.295180   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:53.295216   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:53.310777   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:53.310805   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:53.389412   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:55.890450   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:55.903768   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:55.903843   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:55.944148   71929 cri.go:89] found id: ""
	I0717 01:57:55.944171   71929 logs.go:276] 0 containers: []
	W0717 01:57:55.944179   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:55.944185   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:55.944231   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:55.979945   71929 cri.go:89] found id: ""
	I0717 01:57:55.979970   71929 logs.go:276] 0 containers: []
	W0717 01:57:55.979980   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:55.979987   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:55.980045   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:56.019057   71929 cri.go:89] found id: ""
	I0717 01:57:56.019089   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.019100   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:56.019107   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:56.019162   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:56.054343   71929 cri.go:89] found id: ""
	I0717 01:57:56.054369   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.054378   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:56.054383   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:56.054434   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:56.091150   71929 cri.go:89] found id: ""
	I0717 01:57:56.091179   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.091189   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:56.091197   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:56.091256   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:56.127502   71929 cri.go:89] found id: ""
	I0717 01:57:56.127528   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.127538   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:56.127547   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:56.127602   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:56.167935   71929 cri.go:89] found id: ""
	I0717 01:57:56.167961   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.167972   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:56.167979   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:56.168048   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:56.209501   71929 cri.go:89] found id: ""
	I0717 01:57:56.209527   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.209537   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:56.209547   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:56.209561   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:56.257989   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:56.258023   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:56.272491   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:56.272519   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:56.361622   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:56.361653   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:56.361668   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:56.442953   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:56.442992   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:54.302376   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:56.303297   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:56.933123   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:58.933242   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:56.280399   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:58.779285   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:58.983914   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:58.997215   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:58.997292   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:59.032937   71929 cri.go:89] found id: ""
	I0717 01:57:59.032964   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.032980   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:59.032996   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:59.033057   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:59.067790   71929 cri.go:89] found id: ""
	I0717 01:57:59.067811   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.067819   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:59.067825   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:59.067881   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:59.107659   71929 cri.go:89] found id: ""
	I0717 01:57:59.107689   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.107699   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:59.107705   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:59.107754   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:59.150134   71929 cri.go:89] found id: ""
	I0717 01:57:59.150158   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.150168   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:59.150175   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:59.150235   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:59.192351   71929 cri.go:89] found id: ""
	I0717 01:57:59.192381   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.192391   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:59.192398   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:59.192460   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:59.228177   71929 cri.go:89] found id: ""
	I0717 01:57:59.228202   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.228209   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:59.228215   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:59.228261   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:59.267016   71929 cri.go:89] found id: ""
	I0717 01:57:59.267043   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.267052   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:59.267058   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:59.267109   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:59.302235   71929 cri.go:89] found id: ""
	I0717 01:57:59.302257   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.302263   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:59.302273   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:59.302285   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:59.368453   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:59.368492   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:59.383375   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:59.383399   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:59.454946   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:59.454975   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:59.454992   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:59.539576   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:59.539609   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:02.085516   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:02.099848   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:02.099909   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:02.136835   71929 cri.go:89] found id: ""
	I0717 01:58:02.136859   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.136867   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:02.136872   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:02.136928   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:02.175304   71929 cri.go:89] found id: ""
	I0717 01:58:02.175331   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.175338   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:02.175344   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:02.175389   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:02.210922   71929 cri.go:89] found id: ""
	I0717 01:58:02.210947   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.210955   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:02.210961   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:02.211018   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:02.246952   71929 cri.go:89] found id: ""
	I0717 01:58:02.246983   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.246992   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:02.246999   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:02.247053   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:02.284857   71929 cri.go:89] found id: ""
	I0717 01:58:02.284883   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.284892   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:02.284897   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:02.284944   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:02.322941   71929 cri.go:89] found id: ""
	I0717 01:58:02.322978   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.322999   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:02.323007   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:02.323065   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:02.357904   71929 cri.go:89] found id: ""
	I0717 01:58:02.357932   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.357943   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:02.357950   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:02.358012   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:02.392291   71929 cri.go:89] found id: ""
	I0717 01:58:02.392315   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.392322   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:02.392331   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:02.392346   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:58.802622   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:01.303663   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:01.433212   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:03.433962   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:00.779479   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:02.779619   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:05.279590   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:02.447670   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:02.447704   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:02.462259   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:02.462284   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:02.534304   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:02.534332   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:02.534347   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:02.612757   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:02.612799   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:05.153573   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:05.166702   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:05.166775   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:05.205213   71929 cri.go:89] found id: ""
	I0717 01:58:05.205238   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.205247   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:05.205252   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:05.205305   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:05.242021   71929 cri.go:89] found id: ""
	I0717 01:58:05.242048   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.242057   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:05.242063   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:05.242118   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:05.281862   71929 cri.go:89] found id: ""
	I0717 01:58:05.281889   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.281900   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:05.281908   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:05.281967   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:05.318125   71929 cri.go:89] found id: ""
	I0717 01:58:05.318157   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.318169   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:05.318177   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:05.318244   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:05.352470   71929 cri.go:89] found id: ""
	I0717 01:58:05.352504   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.352516   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:05.352524   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:05.352595   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:05.386692   71929 cri.go:89] found id: ""
	I0717 01:58:05.386722   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.386733   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:05.386741   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:05.386803   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:05.426676   71929 cri.go:89] found id: ""
	I0717 01:58:05.426731   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.426744   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:05.426751   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:05.426811   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:05.467974   71929 cri.go:89] found id: ""
	I0717 01:58:05.468000   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.468010   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:05.468020   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:05.468036   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:05.506769   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:05.506797   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:05.561745   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:05.561782   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:05.576743   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:05.576775   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:05.652856   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:05.652887   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:05.652903   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:03.304109   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:05.803632   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:05.434411   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:07.931796   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:09.932902   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:07.779196   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:09.779591   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:08.244185   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:08.257343   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:08.257420   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:08.297136   71929 cri.go:89] found id: ""
	I0717 01:58:08.297163   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.297174   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:08.297181   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:08.297237   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:08.336099   71929 cri.go:89] found id: ""
	I0717 01:58:08.336121   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.336129   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:08.336135   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:08.336185   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:08.369668   71929 cri.go:89] found id: ""
	I0717 01:58:08.369690   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.369698   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:08.369706   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:08.369756   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:08.405140   71929 cri.go:89] found id: ""
	I0717 01:58:08.405171   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.405179   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:08.405186   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:08.405249   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:08.446296   71929 cri.go:89] found id: ""
	I0717 01:58:08.446319   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.446326   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:08.446331   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:08.446377   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:08.483004   71929 cri.go:89] found id: ""
	I0717 01:58:08.483042   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.483062   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:08.483070   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:08.483139   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:08.520668   71929 cri.go:89] found id: ""
	I0717 01:58:08.520699   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.520710   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:08.520717   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:08.520776   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:08.554711   71929 cri.go:89] found id: ""
	I0717 01:58:08.554734   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.554744   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:08.554752   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:08.554763   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:08.606972   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:08.607004   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:08.621102   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:08.621134   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:08.690424   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:08.690443   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:08.690454   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:08.775151   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:08.775193   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:11.318471   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:11.331875   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:11.331954   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:11.375766   71929 cri.go:89] found id: ""
	I0717 01:58:11.375787   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.375795   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:11.375801   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:11.375863   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:11.417043   71929 cri.go:89] found id: ""
	I0717 01:58:11.417080   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.417103   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:11.417111   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:11.417169   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:11.459462   71929 cri.go:89] found id: ""
	I0717 01:58:11.459487   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.459495   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:11.459500   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:11.459551   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:11.516500   71929 cri.go:89] found id: ""
	I0717 01:58:11.516525   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.516533   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:11.516539   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:11.516590   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:11.573916   71929 cri.go:89] found id: ""
	I0717 01:58:11.573961   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.575159   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:11.575201   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:11.575275   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:11.619446   71929 cri.go:89] found id: ""
	I0717 01:58:11.619477   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.619489   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:11.619497   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:11.619558   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:11.654766   71929 cri.go:89] found id: ""
	I0717 01:58:11.654793   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.654802   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:11.654807   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:11.654859   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:11.690306   71929 cri.go:89] found id: ""
	I0717 01:58:11.690335   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.690346   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:11.690354   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:11.690366   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:11.744470   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:11.744516   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:11.758824   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:11.758856   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:11.841028   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:11.841058   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:11.841076   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:11.923299   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:11.923351   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:08.303010   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:10.303678   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:12.803090   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:11.933148   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:14.433109   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:12.280292   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:14.281580   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:14.466666   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:14.479676   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:14.479740   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:14.517890   71929 cri.go:89] found id: ""
	I0717 01:58:14.517919   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.517931   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:14.517938   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:14.517998   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:14.552891   71929 cri.go:89] found id: ""
	I0717 01:58:14.552918   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.552926   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:14.552931   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:14.552992   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:14.593571   71929 cri.go:89] found id: ""
	I0717 01:58:14.593596   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.593604   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:14.593609   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:14.593662   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:14.628869   71929 cri.go:89] found id: ""
	I0717 01:58:14.628897   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.628907   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:14.628913   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:14.628972   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:14.663558   71929 cri.go:89] found id: ""
	I0717 01:58:14.663586   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.663593   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:14.663599   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:14.663644   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:14.700788   71929 cri.go:89] found id: ""
	I0717 01:58:14.700824   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.700834   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:14.700843   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:14.700903   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:14.737975   71929 cri.go:89] found id: ""
	I0717 01:58:14.738014   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.738025   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:14.738032   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:14.738091   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:14.775419   71929 cri.go:89] found id: ""
	I0717 01:58:14.775443   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.775453   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:14.775465   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:14.775479   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:14.817635   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:14.817661   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:14.870667   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:14.870705   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:14.885208   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:14.885235   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:14.962286   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:14.962318   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:14.962334   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:14.803624   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:17.303944   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:16.434108   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:18.934577   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:16.779538   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:18.780694   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:17.537546   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:17.550258   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:17.550322   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:17.586251   71929 cri.go:89] found id: ""
	I0717 01:58:17.586278   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.586286   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:17.586292   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:17.586348   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:17.620903   71929 cri.go:89] found id: ""
	I0717 01:58:17.620927   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.620935   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:17.620941   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:17.620992   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:17.659292   71929 cri.go:89] found id: ""
	I0717 01:58:17.659319   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.659328   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:17.659334   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:17.659384   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:17.695603   71929 cri.go:89] found id: ""
	I0717 01:58:17.695632   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.695642   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:17.695650   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:17.695711   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:17.731943   71929 cri.go:89] found id: ""
	I0717 01:58:17.731970   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.731978   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:17.731984   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:17.732041   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:17.767257   71929 cri.go:89] found id: ""
	I0717 01:58:17.767284   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.767293   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:17.767299   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:17.767357   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:17.802455   71929 cri.go:89] found id: ""
	I0717 01:58:17.802495   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.802508   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:17.802516   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:17.802602   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:17.839321   71929 cri.go:89] found id: ""
	I0717 01:58:17.839351   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.839362   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:17.839374   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:17.839391   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:17.912269   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:17.912295   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:17.912311   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:17.990005   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:17.990038   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:18.029933   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:18.029960   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:18.081941   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:18.081977   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:20.597325   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:20.611835   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:20.611901   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:20.647899   71929 cri.go:89] found id: ""
	I0717 01:58:20.647922   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.647931   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:20.647936   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:20.647984   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:20.683783   71929 cri.go:89] found id: ""
	I0717 01:58:20.683816   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.683827   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:20.683834   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:20.683892   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:20.721803   71929 cri.go:89] found id: ""
	I0717 01:58:20.721833   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.721844   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:20.721851   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:20.721910   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:20.756148   71929 cri.go:89] found id: ""
	I0717 01:58:20.756177   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.756189   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:20.756196   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:20.756259   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:20.795976   71929 cri.go:89] found id: ""
	I0717 01:58:20.796014   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.796028   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:20.796036   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:20.796095   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:20.833775   71929 cri.go:89] found id: ""
	I0717 01:58:20.833805   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.833816   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:20.833824   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:20.833891   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:20.869138   71929 cri.go:89] found id: ""
	I0717 01:58:20.869163   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.869173   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:20.869180   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:20.869237   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:20.904865   71929 cri.go:89] found id: ""
	I0717 01:58:20.904893   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.904901   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:20.904910   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:20.904920   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:20.947268   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:20.947294   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:20.998541   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:20.998582   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:21.013797   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:21.013828   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:21.085101   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:21.085127   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:21.085141   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:19.804949   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:22.304273   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:21.436176   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:23.933548   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:21.279177   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:23.279599   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:25.279899   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:23.667361   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:23.681768   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:23.681828   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:23.717721   71929 cri.go:89] found id: ""
	I0717 01:58:23.717748   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.717757   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:23.717763   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:23.717827   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:23.752699   71929 cri.go:89] found id: ""
	I0717 01:58:23.752728   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.752738   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:23.752745   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:23.752809   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:23.790914   71929 cri.go:89] found id: ""
	I0717 01:58:23.790944   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.790955   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:23.790962   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:23.791021   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:23.827253   71929 cri.go:89] found id: ""
	I0717 01:58:23.827276   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.827285   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:23.827338   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:23.827392   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:23.864466   71929 cri.go:89] found id: ""
	I0717 01:58:23.864510   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.864520   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:23.864527   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:23.864577   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:23.900734   71929 cri.go:89] found id: ""
	I0717 01:58:23.900775   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.900786   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:23.900794   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:23.900855   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:23.937212   71929 cri.go:89] found id: ""
	I0717 01:58:23.937236   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.937243   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:23.937249   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:23.937304   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:23.973730   71929 cri.go:89] found id: ""
	I0717 01:58:23.973755   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.973764   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:23.973774   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:23.973786   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:24.026122   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:24.026163   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:24.040755   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:24.040784   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:24.112224   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:24.112254   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:24.112277   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:24.195247   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:24.195281   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:26.738042   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:26.751545   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:26.751602   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:26.786778   71929 cri.go:89] found id: ""
	I0717 01:58:26.786813   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.786824   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:26.786831   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:26.786889   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:26.828776   71929 cri.go:89] found id: ""
	I0717 01:58:26.828806   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.828818   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:26.828825   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:26.828887   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:26.868439   71929 cri.go:89] found id: ""
	I0717 01:58:26.868468   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.868479   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:26.868486   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:26.868546   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:26.900249   71929 cri.go:89] found id: ""
	I0717 01:58:26.900282   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.900292   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:26.900297   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:26.900344   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:26.933763   71929 cri.go:89] found id: ""
	I0717 01:58:26.933798   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.933808   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:26.933816   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:26.933882   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:26.968681   71929 cri.go:89] found id: ""
	I0717 01:58:26.968712   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.968722   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:26.968729   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:26.968788   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:27.002081   71929 cri.go:89] found id: ""
	I0717 01:58:27.002113   71929 logs.go:276] 0 containers: []
	W0717 01:58:27.002128   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:27.002135   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:27.002196   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:27.035138   71929 cri.go:89] found id: ""
	I0717 01:58:27.035161   71929 logs.go:276] 0 containers: []
	W0717 01:58:27.035170   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:27.035177   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:27.035189   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:27.091207   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:27.091244   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:27.105765   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:27.105793   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:27.175533   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:27.175563   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:27.175580   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:27.260903   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:27.260951   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:24.802002   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:26.803330   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:26.432259   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:28.433226   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:27.280206   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:29.781139   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:29.802451   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:29.816503   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:29.816573   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:29.854887   71929 cri.go:89] found id: ""
	I0717 01:58:29.854921   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.854931   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:29.854936   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:29.854983   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:29.887529   71929 cri.go:89] found id: ""
	I0717 01:58:29.887559   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.887570   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:29.887577   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:29.887638   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:29.924995   71929 cri.go:89] found id: ""
	I0717 01:58:29.925020   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.925028   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:29.925034   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:29.925091   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:29.960064   71929 cri.go:89] found id: ""
	I0717 01:58:29.960092   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.960104   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:29.960111   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:29.960178   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:29.995408   71929 cri.go:89] found id: ""
	I0717 01:58:29.995431   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.995438   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:29.995443   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:29.995494   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:30.028219   71929 cri.go:89] found id: ""
	I0717 01:58:30.028247   71929 logs.go:276] 0 containers: []
	W0717 01:58:30.028254   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:30.028260   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:30.028309   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:30.062529   71929 cri.go:89] found id: ""
	I0717 01:58:30.062576   71929 logs.go:276] 0 containers: []
	W0717 01:58:30.062589   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:30.062597   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:30.062664   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:30.095854   71929 cri.go:89] found id: ""
	I0717 01:58:30.095882   71929 logs.go:276] 0 containers: []
	W0717 01:58:30.095893   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:30.095904   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:30.095919   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:30.148083   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:30.148114   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:30.161861   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:30.161892   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:30.236474   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:30.236503   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:30.236519   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:30.319691   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:30.319720   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:28.804656   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:31.302637   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:30.932659   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:32.934225   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:32.279141   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:34.279312   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:32.867821   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:32.881480   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:32.881541   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:32.918289   71929 cri.go:89] found id: ""
	I0717 01:58:32.918316   71929 logs.go:276] 0 containers: []
	W0717 01:58:32.918327   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:32.918335   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:32.918396   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:32.955383   71929 cri.go:89] found id: ""
	I0717 01:58:32.955417   71929 logs.go:276] 0 containers: []
	W0717 01:58:32.955426   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:32.955433   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:32.955498   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:32.990432   71929 cri.go:89] found id: ""
	I0717 01:58:32.990460   71929 logs.go:276] 0 containers: []
	W0717 01:58:32.990467   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:32.990472   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:32.990531   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:33.034653   71929 cri.go:89] found id: ""
	I0717 01:58:33.034685   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.034697   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:33.034703   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:33.034763   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:33.077875   71929 cri.go:89] found id: ""
	I0717 01:58:33.077911   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.077919   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:33.077926   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:33.077988   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:33.114800   71929 cri.go:89] found id: ""
	I0717 01:58:33.114840   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.114852   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:33.114864   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:33.114946   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:33.151095   71929 cri.go:89] found id: ""
	I0717 01:58:33.151229   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.151242   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:33.151249   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:33.151324   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:33.190100   71929 cri.go:89] found id: ""
	I0717 01:58:33.190128   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.190138   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:33.190149   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:33.190163   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:33.271195   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:33.271231   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:33.317539   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:33.317569   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:33.370188   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:33.370224   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:33.385016   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:33.385045   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:33.460017   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:35.960499   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:35.974504   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:35.974583   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:36.008652   71929 cri.go:89] found id: ""
	I0717 01:58:36.008696   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.008704   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:36.008710   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:36.008770   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:36.044068   71929 cri.go:89] found id: ""
	I0717 01:58:36.044097   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.044106   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:36.044113   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:36.044174   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:36.083572   71929 cri.go:89] found id: ""
	I0717 01:58:36.083602   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.083610   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:36.083616   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:36.083682   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:36.116716   71929 cri.go:89] found id: ""
	I0717 01:58:36.116744   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.116753   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:36.116761   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:36.116820   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:36.156042   71929 cri.go:89] found id: ""
	I0717 01:58:36.156069   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.156080   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:36.156087   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:36.156148   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:36.192005   71929 cri.go:89] found id: ""
	I0717 01:58:36.192033   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.192045   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:36.192055   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:36.192116   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:36.228720   71929 cri.go:89] found id: ""
	I0717 01:58:36.228751   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.228763   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:36.228769   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:36.228817   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:36.263835   71929 cri.go:89] found id: ""
	I0717 01:58:36.263862   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.263872   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:36.263882   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:36.263897   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:36.278545   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:36.278609   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:36.361182   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:36.361208   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:36.361225   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:36.447797   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:36.447832   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:36.492167   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:36.492196   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:33.304750   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:35.803867   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:35.432659   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:37.433360   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:39.433481   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:36.282525   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:38.779592   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:39.045613   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:39.058615   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:39.058688   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:39.094625   71929 cri.go:89] found id: ""
	I0717 01:58:39.094672   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.094684   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:39.094692   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:39.094755   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:39.132856   71929 cri.go:89] found id: ""
	I0717 01:58:39.132887   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.132898   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:39.132905   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:39.132966   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:39.171017   71929 cri.go:89] found id: ""
	I0717 01:58:39.171037   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.171044   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:39.171051   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:39.171112   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:39.210146   71929 cri.go:89] found id: ""
	I0717 01:58:39.210176   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.210186   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:39.210193   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:39.210269   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:39.244307   71929 cri.go:89] found id: ""
	I0717 01:58:39.244332   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.244342   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:39.244349   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:39.244411   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:39.279649   71929 cri.go:89] found id: ""
	I0717 01:58:39.279675   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.279682   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:39.279688   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:39.279755   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:39.317699   71929 cri.go:89] found id: ""
	I0717 01:58:39.317726   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.317735   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:39.317742   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:39.317789   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:39.352319   71929 cri.go:89] found id: ""
	I0717 01:58:39.352351   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.352365   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:39.352377   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:39.352392   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:39.404153   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:39.404188   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:39.419796   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:39.419828   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:39.495463   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:39.495485   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:39.495499   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:39.576742   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:39.576795   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:42.132481   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:42.145588   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:42.145658   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:42.181231   71929 cri.go:89] found id: ""
	I0717 01:58:42.181257   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.181265   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:42.181270   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:42.181321   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:42.216876   71929 cri.go:89] found id: ""
	I0717 01:58:42.216905   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.216917   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:42.216923   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:42.216984   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:42.256918   71929 cri.go:89] found id: ""
	I0717 01:58:42.256948   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.256959   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:42.256967   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:42.257022   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:42.291930   71929 cri.go:89] found id: ""
	I0717 01:58:42.291957   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.291964   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:42.291975   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:42.292035   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:42.329927   71929 cri.go:89] found id: ""
	I0717 01:58:42.329954   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.329964   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:42.329970   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:42.330014   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:42.364041   71929 cri.go:89] found id: ""
	I0717 01:58:42.364072   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.364085   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:42.364093   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:42.364150   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:38.302060   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:40.302711   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:42.303560   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:41.437100   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:43.932845   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:40.780109   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:43.280118   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:42.400751   71929 cri.go:89] found id: ""
	I0717 01:58:42.400775   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.400784   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:42.400790   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:42.400840   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:42.438200   71929 cri.go:89] found id: ""
	I0717 01:58:42.438228   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.438240   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:42.438251   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:42.438265   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:42.455268   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:42.455303   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:42.537344   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:42.537368   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:42.537381   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:42.618487   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:42.618522   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:42.661273   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:42.661299   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:45.212631   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:45.226247   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:45.226330   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:45.263067   71929 cri.go:89] found id: ""
	I0717 01:58:45.263098   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.263110   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:45.263117   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:45.263177   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:45.299025   71929 cri.go:89] found id: ""
	I0717 01:58:45.299056   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.299067   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:45.299074   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:45.299137   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:45.346828   71929 cri.go:89] found id: ""
	I0717 01:58:45.346858   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.346868   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:45.346876   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:45.346938   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:45.390879   71929 cri.go:89] found id: ""
	I0717 01:58:45.390905   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.390913   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:45.390918   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:45.390966   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:45.426794   71929 cri.go:89] found id: ""
	I0717 01:58:45.426823   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.426834   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:45.426841   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:45.426902   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:45.463834   71929 cri.go:89] found id: ""
	I0717 01:58:45.463863   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.463873   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:45.463880   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:45.463942   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:45.500660   71929 cri.go:89] found id: ""
	I0717 01:58:45.500689   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.500701   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:45.500708   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:45.500766   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:45.537332   71929 cri.go:89] found id: ""
	I0717 01:58:45.537356   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.537364   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:45.537373   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:45.537388   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:45.551194   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:45.551222   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:45.623863   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:45.623892   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:45.623906   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:45.699740   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:45.699782   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:45.739580   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:45.739613   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:44.803138   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:47.302471   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:46.434311   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:48.933004   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:45.779778   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:48.279595   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:48.300789   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:48.315608   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:48.315667   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:48.353050   71929 cri.go:89] found id: ""
	I0717 01:58:48.353076   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.353084   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:48.353089   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:48.353133   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:48.394789   71929 cri.go:89] found id: ""
	I0717 01:58:48.394817   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.394829   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:48.394837   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:48.394900   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:48.433430   71929 cri.go:89] found id: ""
	I0717 01:58:48.433457   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.433468   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:48.433475   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:48.433530   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:48.467215   71929 cri.go:89] found id: ""
	I0717 01:58:48.467243   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.467254   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:48.467262   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:48.467318   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:48.501087   71929 cri.go:89] found id: ""
	I0717 01:58:48.501120   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.501131   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:48.501138   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:48.501204   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:48.538648   71929 cri.go:89] found id: ""
	I0717 01:58:48.538683   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.538696   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:48.538706   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:48.538762   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:48.573006   71929 cri.go:89] found id: ""
	I0717 01:58:48.573030   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.573040   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:48.573047   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:48.573106   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:48.608779   71929 cri.go:89] found id: ""
	I0717 01:58:48.608803   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.608813   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:48.608824   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:48.608837   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:48.659250   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:48.659290   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:48.673418   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:48.673449   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:48.748175   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:48.748196   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:48.748207   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:48.824238   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:48.824274   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:51.367155   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:51.382458   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:51.382527   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:51.424005   71929 cri.go:89] found id: ""
	I0717 01:58:51.424040   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.424051   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:51.424059   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:51.424117   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:51.463318   71929 cri.go:89] found id: ""
	I0717 01:58:51.463348   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.463357   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:51.463363   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:51.463414   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:51.502261   71929 cri.go:89] found id: ""
	I0717 01:58:51.502290   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.502301   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:51.502309   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:51.502362   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:51.536277   71929 cri.go:89] found id: ""
	I0717 01:58:51.536308   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.536319   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:51.536327   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:51.536392   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:51.580598   71929 cri.go:89] found id: ""
	I0717 01:58:51.580629   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.580640   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:51.580648   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:51.580726   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:51.618666   71929 cri.go:89] found id: ""
	I0717 01:58:51.618690   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.618697   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:51.618702   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:51.618747   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:51.654742   71929 cri.go:89] found id: ""
	I0717 01:58:51.654777   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.654790   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:51.654799   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:51.654863   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:51.698006   71929 cri.go:89] found id: ""
	I0717 01:58:51.698034   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.698043   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:51.698051   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:51.698062   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:51.754812   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:51.754852   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:51.771887   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:51.771919   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:51.859627   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:51.859657   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:51.859675   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:51.946633   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:51.946673   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:49.302540   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:51.803884   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:51.433981   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:53.933306   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:50.781428   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:53.279780   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:54.494188   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:54.509111   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:54.509190   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:54.546424   71929 cri.go:89] found id: ""
	I0717 01:58:54.546454   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.546464   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:54.546471   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:54.546532   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:54.586811   71929 cri.go:89] found id: ""
	I0717 01:58:54.586841   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.586853   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:54.586860   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:54.586918   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:54.627350   71929 cri.go:89] found id: ""
	I0717 01:58:54.627375   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.627383   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:54.627388   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:54.627438   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:54.665901   71929 cri.go:89] found id: ""
	I0717 01:58:54.665941   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.665954   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:54.665974   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:54.666041   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:54.702921   71929 cri.go:89] found id: ""
	I0717 01:58:54.702948   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.702958   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:54.702965   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:54.703027   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:54.737378   71929 cri.go:89] found id: ""
	I0717 01:58:54.737406   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.737414   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:54.737421   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:54.737469   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:54.771924   71929 cri.go:89] found id: ""
	I0717 01:58:54.771954   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.771964   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:54.771971   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:54.772055   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:54.812939   71929 cri.go:89] found id: ""
	I0717 01:58:54.812972   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.812983   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:54.812995   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:54.813010   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:54.862979   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:54.863013   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:54.877467   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:54.877504   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:54.953924   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:54.953950   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:54.953963   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:55.032019   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:55.032052   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:54.302727   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:56.311656   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:55.933968   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:58.432611   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:55.778263   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:57.781311   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:00.278937   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:57.573130   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:57.591689   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:57.591762   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:57.626444   71929 cri.go:89] found id: ""
	I0717 01:58:57.626469   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.626479   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:57.626486   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:57.626570   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:57.661280   71929 cri.go:89] found id: ""
	I0717 01:58:57.661305   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.661314   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:57.661321   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:57.661376   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:57.695678   71929 cri.go:89] found id: ""
	I0717 01:58:57.695703   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.695711   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:57.695717   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:57.695762   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:57.729705   71929 cri.go:89] found id: ""
	I0717 01:58:57.729734   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.729742   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:57.729748   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:57.729804   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:57.763338   71929 cri.go:89] found id: ""
	I0717 01:58:57.763365   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.763373   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:57.763387   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:57.763433   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:57.800576   71929 cri.go:89] found id: ""
	I0717 01:58:57.800600   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.800608   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:57.800615   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:57.800701   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:57.842401   71929 cri.go:89] found id: ""
	I0717 01:58:57.842428   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.842439   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:57.842446   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:57.842503   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:57.880355   71929 cri.go:89] found id: ""
	I0717 01:58:57.880379   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.880387   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:57.880395   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:57.880412   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:57.938215   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:57.938252   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:57.952835   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:57.952876   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:58.027203   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:58.027231   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:58.027246   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:58.108442   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:58.108483   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:00.648580   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:00.662596   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:00.662667   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:00.696315   71929 cri.go:89] found id: ""
	I0717 01:59:00.696342   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.696351   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:00.696356   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:00.696411   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:00.732117   71929 cri.go:89] found id: ""
	I0717 01:59:00.732147   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.732158   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:00.732164   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:00.732212   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:00.768747   71929 cri.go:89] found id: ""
	I0717 01:59:00.768779   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.768790   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:00.768797   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:00.768856   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:00.807557   71929 cri.go:89] found id: ""
	I0717 01:59:00.807585   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.807592   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:00.807598   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:00.807651   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:00.844127   71929 cri.go:89] found id: ""
	I0717 01:59:00.844152   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.844161   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:00.844166   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:00.844222   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:00.879565   71929 cri.go:89] found id: ""
	I0717 01:59:00.879590   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.879597   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:00.879613   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:00.879684   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:00.917352   71929 cri.go:89] found id: ""
	I0717 01:59:00.917379   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.917387   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:00.917393   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:00.917440   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:00.952603   71929 cri.go:89] found id: ""
	I0717 01:59:00.952630   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.952637   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:00.952647   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:00.952688   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:01.007203   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:01.007242   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:01.021476   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:01.021512   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:01.102283   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:01.102306   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:01.102320   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:01.175736   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:01.175771   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:58.803034   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:00.803718   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:00.932781   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:03.433188   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:02.281269   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:04.779257   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:03.717612   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:03.732446   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:03.732511   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:03.767485   71929 cri.go:89] found id: ""
	I0717 01:59:03.767519   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.767533   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:03.767542   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:03.767607   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:03.803961   71929 cri.go:89] found id: ""
	I0717 01:59:03.803989   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.804000   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:03.804007   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:03.804074   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:03.842734   71929 cri.go:89] found id: ""
	I0717 01:59:03.842768   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.842780   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:03.842788   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:03.842915   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:03.883571   71929 cri.go:89] found id: ""
	I0717 01:59:03.883598   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.883608   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:03.883616   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:03.883682   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:03.922037   71929 cri.go:89] found id: ""
	I0717 01:59:03.922065   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.922076   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:03.922084   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:03.922143   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:03.961135   71929 cri.go:89] found id: ""
	I0717 01:59:03.961165   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.961176   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:03.961183   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:03.961244   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:03.995542   71929 cri.go:89] found id: ""
	I0717 01:59:03.995570   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.995580   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:03.995589   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:03.995647   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:04.030142   71929 cri.go:89] found id: ""
	I0717 01:59:04.030170   71929 logs.go:276] 0 containers: []
	W0717 01:59:04.030178   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:04.030187   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:04.030198   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:04.110329   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:04.110366   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:04.152194   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:04.152224   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:04.204012   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:04.204048   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:04.218261   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:04.218291   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:04.290786   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
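Every "describe nodes" attempt fails the same way: the bundled v1.20.0 kubectl cannot reach an API server on localhost:8443 because no kube-apiserver container has been started. A quick sketch for confirming that on the node; only the port comes from the log, and the commands assume ss and curl are present in the guest image:

    # is anything listening on the apiserver port? (grep filters the streamed output locally)
    minikube ssh -- sudo ss -tlnp | grep 8443
    # probe the endpoint kubectl is trying to use; with no apiserver this is refused, as in the log
    minikube ssh -- curl -sk https://localhost:8443/healthz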
	I0717 01:59:06.791166   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:06.806662   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:06.806722   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:06.841447   71929 cri.go:89] found id: ""
	I0717 01:59:06.841476   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.841486   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:06.841494   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:06.841554   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:06.879920   71929 cri.go:89] found id: ""
	I0717 01:59:06.879956   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.879971   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:06.879976   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:06.880033   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:06.914436   71929 cri.go:89] found id: ""
	I0717 01:59:06.914465   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.914476   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:06.914484   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:06.914566   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:06.952098   71929 cri.go:89] found id: ""
	I0717 01:59:06.952127   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.952135   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:06.952141   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:06.952187   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:06.988054   71929 cri.go:89] found id: ""
	I0717 01:59:06.988085   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.988096   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:06.988103   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:06.988168   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:07.026633   71929 cri.go:89] found id: ""
	I0717 01:59:07.026658   71929 logs.go:276] 0 containers: []
	W0717 01:59:07.026670   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:07.026676   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:07.026732   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:07.064433   71929 cri.go:89] found id: ""
	I0717 01:59:07.064454   71929 logs.go:276] 0 containers: []
	W0717 01:59:07.064463   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:07.064468   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:07.064545   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:07.108352   71929 cri.go:89] found id: ""
	I0717 01:59:07.108385   71929 logs.go:276] 0 containers: []
	W0717 01:59:07.108396   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:07.108410   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:07.108428   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:07.163554   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:07.163593   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:07.177221   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:07.177249   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:07.249712   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:07.249735   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:07.249748   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:07.333011   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:07.333044   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:03.303048   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:05.304001   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:07.314317   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:05.932370   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:07.933031   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:09.933728   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:06.780342   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:09.279683   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:09.873187   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:09.887579   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:09.887658   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:09.923675   71929 cri.go:89] found id: ""
	I0717 01:59:09.923706   71929 logs.go:276] 0 containers: []
	W0717 01:59:09.923716   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:09.923724   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:09.923789   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:09.961248   71929 cri.go:89] found id: ""
	I0717 01:59:09.961278   71929 logs.go:276] 0 containers: []
	W0717 01:59:09.961288   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:09.961296   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:09.961354   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:10.000069   71929 cri.go:89] found id: ""
	I0717 01:59:10.000094   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.000101   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:10.000107   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:10.000157   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:10.036784   71929 cri.go:89] found id: ""
	I0717 01:59:10.036808   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.036815   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:10.036820   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:10.036869   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:10.072746   71929 cri.go:89] found id: ""
	I0717 01:59:10.072778   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.072789   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:10.072796   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:10.072856   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:10.109520   71929 cri.go:89] found id: ""
	I0717 01:59:10.109544   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.109552   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:10.109557   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:10.109608   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:10.142521   71929 cri.go:89] found id: ""
	I0717 01:59:10.142565   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.142576   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:10.142584   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:10.142647   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:10.175772   71929 cri.go:89] found id: ""
	I0717 01:59:10.175800   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.175812   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:10.175823   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:10.175837   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:10.213534   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:10.213561   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:10.266449   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:10.266485   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:10.282204   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:10.282234   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:10.353974   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:10.354004   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:10.354017   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:09.802047   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:11.802200   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:12.433722   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:14.932285   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:11.780394   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:13.781691   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:12.936509   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:12.951547   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:12.951616   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:12.987833   71929 cri.go:89] found id: ""
	I0717 01:59:12.987860   71929 logs.go:276] 0 containers: []
	W0717 01:59:12.987868   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:12.987873   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:12.987922   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:13.026500   71929 cri.go:89] found id: ""
	I0717 01:59:13.026529   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.026539   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:13.026546   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:13.026625   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:13.061631   71929 cri.go:89] found id: ""
	I0717 01:59:13.061664   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.061674   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:13.061682   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:13.061745   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:13.099449   71929 cri.go:89] found id: ""
	I0717 01:59:13.099476   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.099487   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:13.099494   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:13.099565   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:13.137271   71929 cri.go:89] found id: ""
	I0717 01:59:13.137299   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.137309   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:13.137317   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:13.137384   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:13.174432   71929 cri.go:89] found id: ""
	I0717 01:59:13.174462   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.174472   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:13.174478   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:13.174539   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:13.212820   71929 cri.go:89] found id: ""
	I0717 01:59:13.212845   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.212855   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:13.212865   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:13.212930   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:13.245961   71929 cri.go:89] found id: ""
	I0717 01:59:13.245993   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.246004   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:13.246014   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:13.246028   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:13.284801   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:13.284828   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:13.338476   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:13.338511   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:13.352751   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:13.352777   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:13.434001   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:13.434035   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:13.434050   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:16.022525   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:16.036863   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:16.036941   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:16.074370   71929 cri.go:89] found id: ""
	I0717 01:59:16.074398   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.074409   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:16.074416   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:16.074476   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:16.112239   71929 cri.go:89] found id: ""
	I0717 01:59:16.112267   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.112276   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:16.112281   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:16.112329   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:16.147398   71929 cri.go:89] found id: ""
	I0717 01:59:16.147422   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.147429   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:16.147435   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:16.147490   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:16.182112   71929 cri.go:89] found id: ""
	I0717 01:59:16.182141   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.182149   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:16.182155   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:16.182203   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:16.219134   71929 cri.go:89] found id: ""
	I0717 01:59:16.219163   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.219174   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:16.219182   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:16.219238   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:16.255892   71929 cri.go:89] found id: ""
	I0717 01:59:16.255924   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.255934   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:16.255943   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:16.256003   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:16.291202   71929 cri.go:89] found id: ""
	I0717 01:59:16.291228   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.291238   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:16.291245   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:16.291304   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:16.330748   71929 cri.go:89] found id: ""
	I0717 01:59:16.330779   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.330790   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:16.330801   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:16.330815   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:16.344628   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:16.344668   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:16.415735   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:16.415761   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:16.415775   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:16.499411   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:16.499449   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:16.541244   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:16.541270   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:13.802477   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:16.311229   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:16.933493   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:18.934299   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:16.279421   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:18.778998   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:19.095060   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:19.107920   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:19.107976   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:19.143446   71929 cri.go:89] found id: ""
	I0717 01:59:19.143476   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.143485   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:19.143490   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:19.143550   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:19.179216   71929 cri.go:89] found id: ""
	I0717 01:59:19.179247   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.179259   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:19.179266   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:19.179317   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:19.212468   71929 cri.go:89] found id: ""
	I0717 01:59:19.212498   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.212508   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:19.212516   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:19.212574   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:19.245019   71929 cri.go:89] found id: ""
	I0717 01:59:19.245047   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.245058   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:19.245065   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:19.245123   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:19.278430   71929 cri.go:89] found id: ""
	I0717 01:59:19.278457   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.278467   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:19.278474   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:19.278530   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:19.317685   71929 cri.go:89] found id: ""
	I0717 01:59:19.317714   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.317722   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:19.317729   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:19.317783   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:19.352938   71929 cri.go:89] found id: ""
	I0717 01:59:19.352974   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.352986   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:19.353000   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:19.353052   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:19.387238   71929 cri.go:89] found id: ""
	I0717 01:59:19.387272   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.387283   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:19.387295   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:19.387314   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:19.440138   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:19.440171   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:19.456372   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:19.456402   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:19.527881   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:19.527906   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:19.527921   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:19.611903   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:19.611937   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:22.160422   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:22.172802   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:22.172862   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:22.209283   71929 cri.go:89] found id: ""
	I0717 01:59:22.209315   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.209327   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:22.209335   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:22.209396   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:22.243927   71929 cri.go:89] found id: ""
	I0717 01:59:22.243955   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.243965   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:22.243972   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:22.244022   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:22.276730   71929 cri.go:89] found id: ""
	I0717 01:59:22.276754   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.276761   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:22.276767   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:22.276814   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:22.319378   71929 cri.go:89] found id: ""
	I0717 01:59:22.319407   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.319418   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:22.319425   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:22.319482   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:22.358272   71929 cri.go:89] found id: ""
	I0717 01:59:22.358298   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.358307   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:22.358312   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:22.358362   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:22.395358   71929 cri.go:89] found id: ""
	I0717 01:59:22.395393   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.395405   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:22.395414   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:22.395477   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:18.802881   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:21.303532   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:21.433636   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:23.932345   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:21.279596   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:23.279700   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:25.280649   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:22.435158   71929 cri.go:89] found id: ""
	I0717 01:59:22.435184   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.435194   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:22.435201   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:22.435248   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:22.471553   71929 cri.go:89] found id: ""
	I0717 01:59:22.471588   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.471595   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:22.471604   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:22.471616   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:22.523133   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:22.523169   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:22.539212   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:22.539246   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:22.615707   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:22.615729   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:22.615744   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:22.696758   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:22.696795   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:25.238496   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:25.252882   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:25.252946   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:25.290173   71929 cri.go:89] found id: ""
	I0717 01:59:25.290197   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.290205   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:25.290210   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:25.290263   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:25.325926   71929 cri.go:89] found id: ""
	I0717 01:59:25.325968   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.325979   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:25.325985   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:25.326032   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:25.358310   71929 cri.go:89] found id: ""
	I0717 01:59:25.358362   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.358371   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:25.358377   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:25.358426   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:25.393575   71929 cri.go:89] found id: ""
	I0717 01:59:25.393605   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.393615   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:25.393622   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:25.393684   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:25.429357   71929 cri.go:89] found id: ""
	I0717 01:59:25.429448   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.429466   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:25.429474   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:25.429546   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:25.466992   71929 cri.go:89] found id: ""
	I0717 01:59:25.467020   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.467028   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:25.467034   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:25.467080   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:25.503545   71929 cri.go:89] found id: ""
	I0717 01:59:25.503575   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.503587   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:25.503594   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:25.503643   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:25.542957   71929 cri.go:89] found id: ""
	I0717 01:59:25.542983   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.542993   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:25.543003   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:25.543015   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:25.598813   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:25.598852   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:25.618060   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:25.618098   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:25.690079   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:25.690105   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:25.690119   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:25.765956   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:25.765994   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:23.803366   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:25.804525   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:25.932447   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:27.933276   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:29.933461   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:27.286160   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:29.781318   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:28.311715   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:28.325493   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:28.325554   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:28.365783   71929 cri.go:89] found id: ""
	I0717 01:59:28.365810   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.365821   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:28.365829   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:28.365885   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:28.401847   71929 cri.go:89] found id: ""
	I0717 01:59:28.401875   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.401883   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:28.401895   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:28.401954   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:28.442236   71929 cri.go:89] found id: ""
	I0717 01:59:28.442261   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.442272   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:28.442278   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:28.442340   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:28.476832   71929 cri.go:89] found id: ""
	I0717 01:59:28.476857   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.476866   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:28.476873   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:28.476928   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:28.512040   71929 cri.go:89] found id: ""
	I0717 01:59:28.512068   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.512075   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:28.512081   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:28.512136   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:28.547516   71929 cri.go:89] found id: ""
	I0717 01:59:28.547547   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.547558   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:28.547566   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:28.547625   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:28.580380   71929 cri.go:89] found id: ""
	I0717 01:59:28.580406   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.580417   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:28.580427   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:28.580485   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:28.616029   71929 cri.go:89] found id: ""
	I0717 01:59:28.616059   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.616069   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:28.616080   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:28.616095   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:28.670188   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:28.670230   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:28.687315   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:28.687355   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:28.763591   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:28.763612   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:28.763627   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:28.848925   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:28.848959   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:31.388294   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:31.404748   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:31.404814   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:31.446437   71929 cri.go:89] found id: ""
	I0717 01:59:31.446468   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.446478   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:31.446484   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:31.446531   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:31.487797   71929 cri.go:89] found id: ""
	I0717 01:59:31.487828   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.487840   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:31.487847   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:31.487895   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:31.525327   71929 cri.go:89] found id: ""
	I0717 01:59:31.525354   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.525368   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:31.525375   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:31.525436   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:31.564106   71929 cri.go:89] found id: ""
	I0717 01:59:31.564154   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.564166   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:31.564173   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:31.564234   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:31.603345   71929 cri.go:89] found id: ""
	I0717 01:59:31.603374   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.603385   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:31.603393   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:31.603456   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:31.641727   71929 cri.go:89] found id: ""
	I0717 01:59:31.641753   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.641769   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:31.641776   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:31.641837   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:31.680825   71929 cri.go:89] found id: ""
	I0717 01:59:31.680856   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.680866   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:31.680873   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:31.680930   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:31.714325   71929 cri.go:89] found id: ""
	I0717 01:59:31.714348   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.714355   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:31.714363   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:31.714374   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:31.765899   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:31.765934   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:31.781417   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:31.781447   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:31.857586   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:31.857607   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:31.857622   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:31.937171   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:31.937197   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:28.304014   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:30.802684   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:32.803604   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:31.933945   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:34.435259   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:31.785641   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:34.279814   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:34.478176   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:34.492153   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:34.492223   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:34.526959   71929 cri.go:89] found id: ""
	I0717 01:59:34.526984   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.526998   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:34.527006   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:34.527064   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:34.564485   71929 cri.go:89] found id: ""
	I0717 01:59:34.564534   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.564546   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:34.564591   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:34.564706   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:34.604611   71929 cri.go:89] found id: ""
	I0717 01:59:34.604637   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.604649   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:34.604657   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:34.604718   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:34.640851   71929 cri.go:89] found id: ""
	I0717 01:59:34.640882   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.640892   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:34.640897   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:34.640956   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:34.675828   71929 cri.go:89] found id: ""
	I0717 01:59:34.675856   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.675863   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:34.675869   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:34.675918   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:34.710468   71929 cri.go:89] found id: ""
	I0717 01:59:34.710496   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.710506   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:34.710514   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:34.710595   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:34.749218   71929 cri.go:89] found id: ""
	I0717 01:59:34.749249   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.749260   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:34.749267   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:34.749328   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:34.784934   71929 cri.go:89] found id: ""
	I0717 01:59:34.784969   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.784979   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:34.784990   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:34.785006   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:34.799836   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:34.799870   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:34.870218   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:34.870239   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:34.870254   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:34.948782   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:34.948817   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:34.992295   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:34.992324   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:34.803649   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:37.304530   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:36.933199   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:39.432504   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:36.280185   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:38.280499   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:37.545759   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:37.559648   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:37.559724   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:37.596642   71929 cri.go:89] found id: ""
	I0717 01:59:37.596696   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.596707   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:37.596715   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:37.596770   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:37.637251   71929 cri.go:89] found id: ""
	I0717 01:59:37.637283   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.637312   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:37.637318   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:37.637372   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:37.672811   71929 cri.go:89] found id: ""
	I0717 01:59:37.672839   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.672847   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:37.672852   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:37.672909   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:37.706864   71929 cri.go:89] found id: ""
	I0717 01:59:37.706904   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.706916   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:37.706923   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:37.706975   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:37.747539   71929 cri.go:89] found id: ""
	I0717 01:59:37.747567   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.747576   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:37.747581   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:37.747630   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:37.785229   71929 cri.go:89] found id: ""
	I0717 01:59:37.785251   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.785260   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:37.785268   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:37.785333   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:37.840428   71929 cri.go:89] found id: ""
	I0717 01:59:37.840460   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.840471   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:37.840477   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:37.840533   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:37.876888   71929 cri.go:89] found id: ""
	I0717 01:59:37.876916   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.876924   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:37.876932   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:37.876942   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:37.926161   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:37.926197   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:37.940857   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:37.940885   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:38.019210   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:38.019232   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:38.019245   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:38.112428   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:38.112471   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:40.657215   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:40.670824   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:40.670900   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:40.704008   71929 cri.go:89] found id: ""
	I0717 01:59:40.704030   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.704040   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:40.704048   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:40.704102   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:40.739544   71929 cri.go:89] found id: ""
	I0717 01:59:40.739576   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.739587   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:40.739595   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:40.739664   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:40.773132   71929 cri.go:89] found id: ""
	I0717 01:59:40.773159   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.773169   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:40.773177   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:40.773239   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:40.810162   71929 cri.go:89] found id: ""
	I0717 01:59:40.810183   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.810193   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:40.810200   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:40.810256   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:40.844797   71929 cri.go:89] found id: ""
	I0717 01:59:40.844829   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.844840   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:40.844847   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:40.844918   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:40.884444   71929 cri.go:89] found id: ""
	I0717 01:59:40.884468   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.884476   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:40.884482   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:40.884544   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:40.919413   71929 cri.go:89] found id: ""
	I0717 01:59:40.919437   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.919445   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:40.919451   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:40.919505   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:40.961870   71929 cri.go:89] found id: ""
	I0717 01:59:40.961894   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.961902   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:40.961910   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:40.961921   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:41.010600   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:41.010638   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:41.025557   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:41.025589   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:41.100100   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:41.100123   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:41.100135   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:41.185809   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:41.185850   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:39.802297   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:41.802803   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:41.432998   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:43.433412   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:40.779796   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:42.781981   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:43.279014   71522 pod_ready.go:81] duration metric: took 4m0.006085275s for pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace to be "Ready" ...
	E0717 01:59:43.279043   71522 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 01:59:43.279053   71522 pod_ready.go:38] duration metric: took 4m2.008175999s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:59:43.279073   71522 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:59:43.279105   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:43.279162   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:43.327674   71522 cri.go:89] found id: "3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:43.327725   71522 cri.go:89] found id: ""
	I0717 01:59:43.327734   71522 logs.go:276] 1 containers: [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82]
	I0717 01:59:43.327801   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.332247   71522 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:43.332303   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:43.371598   71522 cri.go:89] found id: "5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:43.371627   71522 cri.go:89] found id: ""
	I0717 01:59:43.371635   71522 logs.go:276] 1 containers: [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9]
	I0717 01:59:43.371683   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.377203   71522 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:43.377265   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:43.416351   71522 cri.go:89] found id: "92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:43.416374   71522 cri.go:89] found id: "4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:43.416380   71522 cri.go:89] found id: ""
	I0717 01:59:43.416389   71522 logs.go:276] 2 containers: [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013]
	I0717 01:59:43.416448   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.420909   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.425228   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:43.425278   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:43.472117   71522 cri.go:89] found id: "1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:43.472139   71522 cri.go:89] found id: ""
	I0717 01:59:43.472147   71522 logs.go:276] 1 containers: [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8]
	I0717 01:59:43.472194   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.476632   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:43.476698   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:43.517337   71522 cri.go:89] found id: "6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:43.517360   71522 cri.go:89] found id: ""
	I0717 01:59:43.517369   71522 logs.go:276] 1 containers: [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a]
	I0717 01:59:43.517430   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.522437   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:43.522519   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:43.564511   71522 cri.go:89] found id: "e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:43.564530   71522 cri.go:89] found id: ""
	I0717 01:59:43.564537   71522 logs.go:276] 1 containers: [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3]
	I0717 01:59:43.564595   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.570357   71522 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:43.570440   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:43.615389   71522 cri.go:89] found id: ""
	I0717 01:59:43.615418   71522 logs.go:276] 0 containers: []
	W0717 01:59:43.615427   71522 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:43.615433   71522 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:59:43.615543   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:59:43.652739   71522 cri.go:89] found id: "e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:43.652764   71522 cri.go:89] found id: "abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:43.652769   71522 cri.go:89] found id: ""
	I0717 01:59:43.652777   71522 logs.go:276] 2 containers: [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299]
	I0717 01:59:43.652835   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.657323   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.661682   71522 logs.go:123] Gathering logs for kube-controller-manager [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3] ...
	I0717 01:59:43.661702   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:43.714396   71522 logs.go:123] Gathering logs for container status ...
	I0717 01:59:43.714434   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:43.761072   71522 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:43.761110   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:43.825934   71522 logs.go:123] Gathering logs for coredns [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7] ...
	I0717 01:59:43.825963   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:43.871287   71522 logs.go:123] Gathering logs for coredns [4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013] ...
	I0717 01:59:43.871316   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:43.907488   71522 logs.go:123] Gathering logs for kube-scheduler [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8] ...
	I0717 01:59:43.907517   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:43.949876   71522 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:43.949903   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:59:44.093084   71522 logs.go:123] Gathering logs for kube-apiserver [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82] ...
	I0717 01:59:44.093116   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:44.153161   71522 logs.go:123] Gathering logs for kube-proxy [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a] ...
	I0717 01:59:44.153206   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:44.197219   71522 logs.go:123] Gathering logs for storage-provisioner [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77] ...
	I0717 01:59:44.197249   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:44.242441   71522 logs.go:123] Gathering logs for storage-provisioner [abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299] ...
	I0717 01:59:44.242478   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:44.288622   71522 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:44.288646   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:44.839680   71522 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:44.839712   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:44.854119   71522 logs.go:123] Gathering logs for etcd [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9] ...
	I0717 01:59:44.854145   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:43.725542   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:43.739304   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:43.739379   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:43.776754   71929 cri.go:89] found id: ""
	I0717 01:59:43.776783   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.776795   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:43.776802   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:43.776863   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:43.819729   71929 cri.go:89] found id: ""
	I0717 01:59:43.819756   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.819767   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:43.819774   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:43.819828   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:43.860283   71929 cri.go:89] found id: ""
	I0717 01:59:43.860311   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.860322   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:43.860329   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:43.860391   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:43.898684   71929 cri.go:89] found id: ""
	I0717 01:59:43.898712   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.898722   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:43.898729   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:43.898788   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:43.942996   71929 cri.go:89] found id: ""
	I0717 01:59:43.943019   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.943026   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:43.943031   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:43.943077   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:43.981799   71929 cri.go:89] found id: ""
	I0717 01:59:43.981828   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.981839   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:43.981846   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:43.981903   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:44.018222   71929 cri.go:89] found id: ""
	I0717 01:59:44.018252   71929 logs.go:276] 0 containers: []
	W0717 01:59:44.018262   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:44.018267   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:44.018326   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:44.056264   71929 cri.go:89] found id: ""
	I0717 01:59:44.056293   71929 logs.go:276] 0 containers: []
	W0717 01:59:44.056304   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:44.056315   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:44.056334   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:44.172061   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:44.172108   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:44.219597   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:44.219627   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:44.272299   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:44.272334   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:44.287811   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:44.287848   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:44.379183   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:46.879529   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:46.893142   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:46.893207   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:46.929073   71929 cri.go:89] found id: ""
	I0717 01:59:46.929101   71929 logs.go:276] 0 containers: []
	W0717 01:59:46.929113   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:46.929121   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:46.929173   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:46.963697   71929 cri.go:89] found id: ""
	I0717 01:59:46.963725   71929 logs.go:276] 0 containers: []
	W0717 01:59:46.963733   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:46.963739   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:46.963798   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:47.000697   71929 cri.go:89] found id: ""
	I0717 01:59:47.000730   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.000747   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:47.000752   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:47.000804   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:47.037270   71929 cri.go:89] found id: ""
	I0717 01:59:47.037304   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.037316   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:47.037323   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:47.037382   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:47.072210   71929 cri.go:89] found id: ""
	I0717 01:59:47.072238   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.072249   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:47.072256   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:47.072321   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:47.108404   71929 cri.go:89] found id: ""
	I0717 01:59:47.108432   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.108443   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:47.108451   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:47.108535   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:47.146122   71929 cri.go:89] found id: ""
	I0717 01:59:47.146151   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.146162   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:47.146169   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:47.146225   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:47.187418   71929 cri.go:89] found id: ""
	I0717 01:59:47.187446   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.187455   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:47.187466   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:47.187481   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:47.201023   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:47.201053   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:47.269851   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:47.269878   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:47.269894   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:47.356417   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:47.356456   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:43.803326   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:46.302939   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:45.433688   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:47.933271   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:49.934222   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:47.403005   71522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:47.420984   71522 api_server.go:72] duration metric: took 4m13.369710312s to wait for apiserver process to appear ...
	I0717 01:59:47.421011   71522 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:59:47.421065   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:47.421128   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:47.465800   71522 cri.go:89] found id: "3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:47.465830   71522 cri.go:89] found id: ""
	I0717 01:59:47.465838   71522 logs.go:276] 1 containers: [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82]
	I0717 01:59:47.465890   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.470561   71522 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:47.470628   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:47.513302   71522 cri.go:89] found id: "5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:47.513321   71522 cri.go:89] found id: ""
	I0717 01:59:47.513328   71522 logs.go:276] 1 containers: [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9]
	I0717 01:59:47.513373   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.517668   71522 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:47.517720   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:47.563466   71522 cri.go:89] found id: "92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:47.563491   71522 cri.go:89] found id: "4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:47.563495   71522 cri.go:89] found id: ""
	I0717 01:59:47.563502   71522 logs.go:276] 2 containers: [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013]
	I0717 01:59:47.563563   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.568058   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.572381   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:47.572432   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:47.618919   71522 cri.go:89] found id: "1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:47.618944   71522 cri.go:89] found id: ""
	I0717 01:59:47.618953   71522 logs.go:276] 1 containers: [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8]
	I0717 01:59:47.619014   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.623475   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:47.623525   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:47.662294   71522 cri.go:89] found id: "6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:47.662321   71522 cri.go:89] found id: ""
	I0717 01:59:47.662329   71522 logs.go:276] 1 containers: [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a]
	I0717 01:59:47.662384   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.666740   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:47.666806   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:47.708962   71522 cri.go:89] found id: "e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:47.708990   71522 cri.go:89] found id: ""
	I0717 01:59:47.708999   71522 logs.go:276] 1 containers: [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3]
	I0717 01:59:47.709058   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.713551   71522 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:47.713628   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:47.750766   71522 cri.go:89] found id: ""
	I0717 01:59:47.750797   71522 logs.go:276] 0 containers: []
	W0717 01:59:47.750807   71522 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:47.750814   71522 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:59:47.750878   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:59:47.786664   71522 cri.go:89] found id: "e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:47.786687   71522 cri.go:89] found id: "abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:47.786692   71522 cri.go:89] found id: ""
	I0717 01:59:47.786699   71522 logs.go:276] 2 containers: [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299]
	I0717 01:59:47.786761   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.791460   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.795553   71522 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:47.795576   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:48.298229   71522 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:48.298271   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:48.313542   71522 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:48.313573   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:59:48.429625   71522 logs.go:123] Gathering logs for coredns [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7] ...
	I0717 01:59:48.429663   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:48.475651   71522 logs.go:123] Gathering logs for kube-proxy [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a] ...
	I0717 01:59:48.475677   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:48.514075   71522 logs.go:123] Gathering logs for coredns [4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013] ...
	I0717 01:59:48.514101   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:48.550152   71522 logs.go:123] Gathering logs for kube-scheduler [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8] ...
	I0717 01:59:48.550182   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:48.592743   71522 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:48.592771   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:48.652433   71522 logs.go:123] Gathering logs for storage-provisioner [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77] ...
	I0717 01:59:48.652464   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:48.699763   71522 logs.go:123] Gathering logs for storage-provisioner [abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299] ...
	I0717 01:59:48.699796   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:48.737467   71522 logs.go:123] Gathering logs for kube-apiserver [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82] ...
	I0717 01:59:48.737504   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:48.788389   71522 logs.go:123] Gathering logs for etcd [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9] ...
	I0717 01:59:48.788425   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:48.842323   71522 logs.go:123] Gathering logs for kube-controller-manager [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3] ...
	I0717 01:59:48.842357   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:48.900716   71522 logs.go:123] Gathering logs for container status ...
	I0717 01:59:48.900746   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:47.397763   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:47.397791   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:49.954670   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:49.968840   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:49.968898   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:50.003598   71929 cri.go:89] found id: ""
	I0717 01:59:50.003635   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.003646   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:50.003654   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:50.003714   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:50.040494   71929 cri.go:89] found id: ""
	I0717 01:59:50.040546   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.040558   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:50.040564   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:50.040624   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:50.074921   71929 cri.go:89] found id: ""
	I0717 01:59:50.074950   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.074959   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:50.074965   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:50.075015   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:50.117002   71929 cri.go:89] found id: ""
	I0717 01:59:50.117030   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.117041   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:50.117049   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:50.117106   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:50.163026   71929 cri.go:89] found id: ""
	I0717 01:59:50.163052   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.163063   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:50.163071   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:50.163129   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:50.197709   71929 cri.go:89] found id: ""
	I0717 01:59:50.197738   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.197749   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:50.197757   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:50.197838   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:50.237776   71929 cri.go:89] found id: ""
	I0717 01:59:50.237808   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.237819   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:50.237827   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:50.237886   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:50.275147   71929 cri.go:89] found id: ""
	I0717 01:59:50.275179   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.275189   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:50.275201   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:50.275215   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:50.329025   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:50.329057   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:50.342745   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:50.342777   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:50.417792   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:50.417817   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:50.417829   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:50.495288   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:50.495322   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:48.306102   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:50.804255   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:52.433248   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:54.931595   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:51.447495   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:59:51.452186   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 200:
	ok
	I0717 01:59:51.453112   71522 api_server.go:141] control plane version: v1.30.2
	I0717 01:59:51.453137   71522 api_server.go:131] duration metric: took 4.032118004s to wait for apiserver health ...
	I0717 01:59:51.453146   71522 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:59:51.453170   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:51.453215   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:51.491272   71522 cri.go:89] found id: "3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:51.491297   71522 cri.go:89] found id: ""
	I0717 01:59:51.491305   71522 logs.go:276] 1 containers: [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82]
	I0717 01:59:51.491365   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.495747   71522 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:51.495795   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:51.538807   71522 cri.go:89] found id: "5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:51.538830   71522 cri.go:89] found id: ""
	I0717 01:59:51.538838   71522 logs.go:276] 1 containers: [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9]
	I0717 01:59:51.538891   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.543454   71522 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:51.543512   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:51.586258   71522 cri.go:89] found id: "92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:51.586292   71522 cri.go:89] found id: "4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:51.586296   71522 cri.go:89] found id: ""
	I0717 01:59:51.586306   71522 logs.go:276] 2 containers: [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013]
	I0717 01:59:51.586360   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.590446   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.594867   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:51.594936   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:51.636079   71522 cri.go:89] found id: "1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:51.636101   71522 cri.go:89] found id: ""
	I0717 01:59:51.636108   71522 logs.go:276] 1 containers: [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8]
	I0717 01:59:51.636159   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.640225   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:51.640283   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:51.676395   71522 cri.go:89] found id: "6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:51.676422   71522 cri.go:89] found id: ""
	I0717 01:59:51.676432   71522 logs.go:276] 1 containers: [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a]
	I0717 01:59:51.676496   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.680974   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:51.681043   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:51.720449   71522 cri.go:89] found id: "e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:51.720476   71522 cri.go:89] found id: ""
	I0717 01:59:51.720483   71522 logs.go:276] 1 containers: [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3]
	I0717 01:59:51.720527   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.724704   71522 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:51.724779   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:51.762892   71522 cri.go:89] found id: ""
	I0717 01:59:51.762923   71522 logs.go:276] 0 containers: []
	W0717 01:59:51.762932   71522 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:51.762939   71522 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:59:51.762986   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:59:51.803675   71522 cri.go:89] found id: "e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:51.803702   71522 cri.go:89] found id: "abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:51.803707   71522 cri.go:89] found id: ""
	I0717 01:59:51.803714   71522 logs.go:276] 2 containers: [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299]
	I0717 01:59:51.803807   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.808188   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.812046   71522 logs.go:123] Gathering logs for container status ...
	I0717 01:59:51.812065   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:51.855800   71522 logs.go:123] Gathering logs for kube-controller-manager [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3] ...
	I0717 01:59:51.855832   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:51.917804   71522 logs.go:123] Gathering logs for storage-provisioner [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77] ...
	I0717 01:59:51.917833   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:51.958797   71522 logs.go:123] Gathering logs for storage-provisioner [abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299] ...
	I0717 01:59:51.958827   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:51.997003   71522 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:51.997034   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:59:52.118345   71522 logs.go:123] Gathering logs for kube-apiserver [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82] ...
	I0717 01:59:52.118381   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:52.174308   71522 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:52.174344   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:52.578823   71522 logs.go:123] Gathering logs for coredns [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7] ...
	I0717 01:59:52.578857   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:52.619962   71522 logs.go:123] Gathering logs for coredns [4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013] ...
	I0717 01:59:52.619994   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:52.667564   71522 logs.go:123] Gathering logs for kube-proxy [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a] ...
	I0717 01:59:52.667593   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:52.714716   71522 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:52.714747   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:52.774123   71522 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:52.774171   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:52.788399   71522 logs.go:123] Gathering logs for etcd [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9] ...
	I0717 01:59:52.788432   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:52.839796   71522 logs.go:123] Gathering logs for kube-scheduler [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8] ...
	I0717 01:59:52.839828   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:55.388404   71522 system_pods.go:59] 9 kube-system pods found
	I0717 01:59:55.388441   71522 system_pods.go:61] "coredns-7db6d8ff4d-9w26c" [530f4d52-5fdc-47c4-8919-44430bf71e05] Running
	I0717 01:59:55.388448   71522 system_pods.go:61] "coredns-7db6d8ff4d-js7sn" [fe3951c5-d98d-4221-b71c-fc4f548b31d8] Running
	I0717 01:59:55.388453   71522 system_pods.go:61] "etcd-default-k8s-diff-port-738184" [a08737dd-a140-4c63-bf0f-9d3527d49de0] Running
	I0717 01:59:55.388458   71522 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-738184" [b24a4ca2-48f9-4603-b6e4-e3fb1ca58e40] Running
	I0717 01:59:55.388465   71522 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-738184" [dd62e27b-a3f1-4da6-b968-6be40743a8fd] Running
	I0717 01:59:55.388469   71522 system_pods.go:61] "kube-proxy-c4n94" [97eee4e8-4f36-412f-9064-57515ab6e932] Running
	I0717 01:59:55.388473   71522 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-738184" [eb87eec9-7533-4704-bd5a-7075d9d8c2f5] Running
	I0717 01:59:55.388484   71522 system_pods.go:61] "metrics-server-569cc877fc-gcjkt" [1859140e-a901-43c2-8c04-b4f8eb63e774] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:59:55.388491   71522 system_pods.go:61] "storage-provisioner" [b36904ec-ef3f-4aee-9276-fe1285e10876] Running
	I0717 01:59:55.388499   71522 system_pods.go:74] duration metric: took 3.93534618s to wait for pod list to return data ...
	I0717 01:59:55.388509   71522 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:59:55.390798   71522 default_sa.go:45] found service account: "default"
	I0717 01:59:55.390829   71522 default_sa.go:55] duration metric: took 2.313714ms for default service account to be created ...
	I0717 01:59:55.390840   71522 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:59:55.399028   71522 system_pods.go:86] 9 kube-system pods found
	I0717 01:59:55.399049   71522 system_pods.go:89] "coredns-7db6d8ff4d-9w26c" [530f4d52-5fdc-47c4-8919-44430bf71e05] Running
	I0717 01:59:55.399054   71522 system_pods.go:89] "coredns-7db6d8ff4d-js7sn" [fe3951c5-d98d-4221-b71c-fc4f548b31d8] Running
	I0717 01:59:55.399059   71522 system_pods.go:89] "etcd-default-k8s-diff-port-738184" [a08737dd-a140-4c63-bf0f-9d3527d49de0] Running
	I0717 01:59:55.399063   71522 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-738184" [b24a4ca2-48f9-4603-b6e4-e3fb1ca58e40] Running
	I0717 01:59:55.399068   71522 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-738184" [dd62e27b-a3f1-4da6-b968-6be40743a8fd] Running
	I0717 01:59:55.399072   71522 system_pods.go:89] "kube-proxy-c4n94" [97eee4e8-4f36-412f-9064-57515ab6e932] Running
	I0717 01:59:55.399076   71522 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-738184" [eb87eec9-7533-4704-bd5a-7075d9d8c2f5] Running
	I0717 01:59:55.399083   71522 system_pods.go:89] "metrics-server-569cc877fc-gcjkt" [1859140e-a901-43c2-8c04-b4f8eb63e774] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:59:55.399090   71522 system_pods.go:89] "storage-provisioner" [b36904ec-ef3f-4aee-9276-fe1285e10876] Running
	I0717 01:59:55.399099   71522 system_pods.go:126] duration metric: took 8.253468ms to wait for k8s-apps to be running ...
	I0717 01:59:55.399108   71522 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:59:55.399152   71522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:59:55.417081   71522 system_svc.go:56] duration metric: took 17.965716ms WaitForService to wait for kubelet
	I0717 01:59:55.417109   71522 kubeadm.go:582] duration metric: took 4m21.36584166s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:59:55.417130   71522 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:59:55.420078   71522 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:59:55.420099   71522 node_conditions.go:123] node cpu capacity is 2
	I0717 01:59:55.420109   71522 node_conditions.go:105] duration metric: took 2.974324ms to run NodePressure ...
	I0717 01:59:55.420119   71522 start.go:241] waiting for startup goroutines ...
	I0717 01:59:55.420126   71522 start.go:246] waiting for cluster config update ...
	I0717 01:59:55.420136   71522 start.go:255] writing updated cluster config ...
	I0717 01:59:55.420406   71522 ssh_runner.go:195] Run: rm -f paused
	I0717 01:59:55.470793   71522 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 01:59:55.472960   71522 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-738184" cluster and "default" namespace by default
	I0717 01:59:53.036151   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:53.049820   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:53.049879   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:53.087144   71929 cri.go:89] found id: ""
	I0717 01:59:53.087175   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.087189   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:53.087195   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:53.087253   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:53.123135   71929 cri.go:89] found id: ""
	I0717 01:59:53.123164   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.123175   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:53.123191   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:53.123254   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:53.157887   71929 cri.go:89] found id: ""
	I0717 01:59:53.157912   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.157922   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:53.157927   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:53.158004   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:53.201002   71929 cri.go:89] found id: ""
	I0717 01:59:53.201033   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.201045   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:53.201054   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:53.201115   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:53.236159   71929 cri.go:89] found id: ""
	I0717 01:59:53.236188   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.236198   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:53.236204   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:53.236258   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:53.277585   71929 cri.go:89] found id: ""
	I0717 01:59:53.277616   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.277627   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:53.277634   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:53.277694   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:53.322722   71929 cri.go:89] found id: ""
	I0717 01:59:53.322747   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.322758   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:53.322765   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:53.322824   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:53.364112   71929 cri.go:89] found id: ""
	I0717 01:59:53.364138   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.364149   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:53.364159   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:53.364172   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:53.418701   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:53.418739   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:53.435004   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:53.435030   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:53.511254   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:53.511274   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:53.511287   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:53.587967   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:53.588003   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:56.130773   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:56.144742   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:56.144811   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:56.180267   71929 cri.go:89] found id: ""
	I0717 01:59:56.180295   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.180306   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:56.180313   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:56.180373   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:56.217223   71929 cri.go:89] found id: ""
	I0717 01:59:56.217252   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.217263   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:56.217269   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:56.217334   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:56.251714   71929 cri.go:89] found id: ""
	I0717 01:59:56.251738   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.251745   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:56.251752   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:56.251805   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:56.292557   71929 cri.go:89] found id: ""
	I0717 01:59:56.292589   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.292597   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:56.292603   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:56.292653   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:56.332463   71929 cri.go:89] found id: ""
	I0717 01:59:56.332491   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.332501   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:56.332508   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:56.332562   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:56.372155   71929 cri.go:89] found id: ""
	I0717 01:59:56.372180   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.372189   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:56.372197   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:56.372255   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:56.415768   71929 cri.go:89] found id: ""
	I0717 01:59:56.415794   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.415806   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:56.415813   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:56.415871   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:56.456920   71929 cri.go:89] found id: ""
	I0717 01:59:56.456951   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.456959   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:56.456968   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:56.456978   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:56.508932   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:56.508965   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:56.522496   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:56.522531   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:56.596839   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:56.596857   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:56.596870   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:56.679237   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:56.679271   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:53.303565   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:55.803725   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:57.806129   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:56.933245   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:59.432536   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:59.220084   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:59.233108   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:59.233182   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:59.266796   71929 cri.go:89] found id: ""
	I0717 01:59:59.266827   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.266838   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:59.266845   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:59.266909   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:59.297992   71929 cri.go:89] found id: ""
	I0717 01:59:59.298017   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.298026   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:59.298032   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:59.298087   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:59.331953   71929 cri.go:89] found id: ""
	I0717 01:59:59.331982   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.331993   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:59.331999   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:59.332069   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:59.368912   71929 cri.go:89] found id: ""
	I0717 01:59:59.368939   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.368948   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:59.368954   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:59.369002   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:59.402886   71929 cri.go:89] found id: ""
	I0717 01:59:59.402911   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.402920   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:59.402926   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:59.402982   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:59.441227   71929 cri.go:89] found id: ""
	I0717 01:59:59.441249   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.441257   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:59.441263   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:59.441322   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:59.479154   71929 cri.go:89] found id: ""
	I0717 01:59:59.479191   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.479213   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:59.479222   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:59.479286   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:59.516259   71929 cri.go:89] found id: ""
	I0717 01:59:59.516299   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.516309   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:59.516319   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:59.516332   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:59.596352   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:59.596385   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:59.639712   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:59.639744   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:59.691399   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:59.691444   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:59.706618   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:59.706648   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:59.778875   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:00:02.279246   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:02.293212   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:02.293284   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:02.330759   71929 cri.go:89] found id: ""
	I0717 02:00:02.330786   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.330795   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 02:00:02.330800   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:02.330848   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:02.366257   71929 cri.go:89] found id: ""
	I0717 02:00:02.366287   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.366298   71929 logs.go:278] No container was found matching "etcd"
	I0717 02:00:02.366305   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:02.366368   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:00.303868   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:02.311063   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:01.432671   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:03.433059   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:02.404321   71929 cri.go:89] found id: ""
	I0717 02:00:02.404348   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.404358   71929 logs.go:278] No container was found matching "coredns"
	I0717 02:00:02.404364   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:02.404432   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:02.444297   71929 cri.go:89] found id: ""
	I0717 02:00:02.444326   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.444342   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 02:00:02.444349   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:02.444406   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:02.478433   71929 cri.go:89] found id: ""
	I0717 02:00:02.478466   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.478477   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 02:00:02.478483   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:02.478530   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:02.515519   71929 cri.go:89] found id: ""
	I0717 02:00:02.515551   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.515560   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 02:00:02.515566   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:02.515618   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:02.551006   71929 cri.go:89] found id: ""
	I0717 02:00:02.551030   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.551038   71929 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:02.551044   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 02:00:02.551110   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 02:00:02.588312   71929 cri.go:89] found id: ""
	I0717 02:00:02.588345   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.588356   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 02:00:02.588367   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:02.588381   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:02.641900   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:02.641932   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:02.656851   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:02.656896   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 02:00:02.728286   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:00:02.728315   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:02.728327   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:02.806807   71929 logs.go:123] Gathering logs for container status ...
	I0717 02:00:02.806847   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:05.355196   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:05.369148   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:05.369231   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:05.405012   71929 cri.go:89] found id: ""
	I0717 02:00:05.405045   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.405057   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 02:00:05.405068   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:05.405132   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:05.450524   71929 cri.go:89] found id: ""
	I0717 02:00:05.450564   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.450575   71929 logs.go:278] No container was found matching "etcd"
	I0717 02:00:05.450582   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:05.450637   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:05.487503   71929 cri.go:89] found id: ""
	I0717 02:00:05.487533   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.487544   71929 logs.go:278] No container was found matching "coredns"
	I0717 02:00:05.487553   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:05.487634   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:05.522607   71929 cri.go:89] found id: ""
	I0717 02:00:05.522635   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.522650   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 02:00:05.522656   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:05.522703   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:05.558091   71929 cri.go:89] found id: ""
	I0717 02:00:05.558120   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.558131   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 02:00:05.558138   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:05.558192   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:05.594540   71929 cri.go:89] found id: ""
	I0717 02:00:05.594587   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.594598   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 02:00:05.594605   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:05.594668   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:05.631783   71929 cri.go:89] found id: ""
	I0717 02:00:05.631807   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.631818   71929 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:05.631825   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 02:00:05.631886   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 02:00:05.667494   71929 cri.go:89] found id: ""
	I0717 02:00:05.667523   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.667532   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 02:00:05.667543   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:05.667559   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:05.681348   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:05.681373   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 02:00:05.747143   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:00:05.747165   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:05.747176   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:05.829639   71929 logs.go:123] Gathering logs for container status ...
	I0717 02:00:05.829674   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:05.881984   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:05.882013   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:04.803913   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:07.302068   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:05.434869   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:07.435174   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:09.931879   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:08.435873   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:08.449840   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:08.449901   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:08.489613   71929 cri.go:89] found id: ""
	I0717 02:00:08.489663   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.489675   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 02:00:08.489684   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:08.489751   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:08.526604   71929 cri.go:89] found id: ""
	I0717 02:00:08.526635   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.526645   71929 logs.go:278] No container was found matching "etcd"
	I0717 02:00:08.526660   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:08.526717   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:08.563202   71929 cri.go:89] found id: ""
	I0717 02:00:08.563227   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.563234   71929 logs.go:278] No container was found matching "coredns"
	I0717 02:00:08.563240   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:08.563299   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:08.598336   71929 cri.go:89] found id: ""
	I0717 02:00:08.598365   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.598376   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 02:00:08.598383   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:08.598441   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:08.632626   71929 cri.go:89] found id: ""
	I0717 02:00:08.632660   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.632671   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 02:00:08.632678   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:08.632739   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:08.667951   71929 cri.go:89] found id: ""
	I0717 02:00:08.667977   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.667993   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 02:00:08.668001   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:08.668059   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:08.702106   71929 cri.go:89] found id: ""
	I0717 02:00:08.702135   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.702146   71929 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:08.702153   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 02:00:08.702212   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 02:00:08.733469   71929 cri.go:89] found id: ""
	I0717 02:00:08.733491   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.733499   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 02:00:08.733508   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:08.733518   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:08.787930   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:08.787966   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:08.802761   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:08.802795   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 02:00:08.878115   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:00:08.878138   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:08.878149   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:08.962509   71929 logs.go:123] Gathering logs for container status ...
	I0717 02:00:08.962543   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:11.503151   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:11.518019   71929 kubeadm.go:597] duration metric: took 4m3.576613508s to restartPrimaryControlPlane
	W0717 02:00:11.518087   71929 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 02:00:11.518113   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 02:00:11.970514   71929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:00:11.986794   71929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 02:00:11.997382   71929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 02:00:12.006789   71929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 02:00:12.006816   71929 kubeadm.go:157] found existing configuration files:
	
	I0717 02:00:12.006867   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 02:00:12.015864   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 02:00:12.015921   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 02:00:12.025239   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 02:00:12.034315   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 02:00:12.034373   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 02:00:12.043533   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 02:00:12.052344   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 02:00:12.052393   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 02:00:12.061290   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 02:00:12.070311   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 02:00:12.070375   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 02:00:12.080404   71929 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 02:00:12.318084   71929 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 02:00:09.303502   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:11.803893   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:11.933539   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:14.433949   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:13.804007   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:16.303079   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:16.932416   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:18.932721   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:18.303306   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:20.306811   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:22.803374   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:21.433157   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:23.433283   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:24.805822   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:27.301985   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:25.931740   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:27.934346   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:29.302199   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:31.302607   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:30.433033   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:32.434743   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:34.933166   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:33.802140   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:35.803338   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:36.933672   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:39.432879   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:38.302050   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:40.803322   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:41.932491   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:44.436201   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:43.302028   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:45.801979   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:47.303644   71146 pod_ready.go:81] duration metric: took 4m0.007411484s for pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace to be "Ready" ...
	E0717 02:00:47.303668   71146 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 02:00:47.303678   71146 pod_ready.go:38] duration metric: took 4m7.053721739s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 02:00:47.303694   71146 api_server.go:52] waiting for apiserver process to appear ...
	I0717 02:00:47.303725   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:47.303791   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:47.365247   71146 cri.go:89] found id: "ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:47.365272   71146 cri.go:89] found id: ""
	I0717 02:00:47.365279   71146 logs.go:276] 1 containers: [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509]
	I0717 02:00:47.365339   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.370201   71146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:47.370268   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:47.416627   71146 cri.go:89] found id: "b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:47.416652   71146 cri.go:89] found id: ""
	I0717 02:00:47.416663   71146 logs.go:276] 1 containers: [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787]
	I0717 02:00:47.416731   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.421295   71146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:47.421454   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:47.463532   71146 cri.go:89] found id: "110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:47.463556   71146 cri.go:89] found id: ""
	I0717 02:00:47.463564   71146 logs.go:276] 1 containers: [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783]
	I0717 02:00:47.463626   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.468291   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:47.468414   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:47.504328   71146 cri.go:89] found id: "211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:47.504354   71146 cri.go:89] found id: ""
	I0717 02:00:47.504362   71146 logs.go:276] 1 containers: [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745]
	I0717 02:00:47.504445   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.508821   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:47.508880   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:47.550970   71146 cri.go:89] found id: "0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:47.550996   71146 cri.go:89] found id: ""
	I0717 02:00:47.551006   71146 logs.go:276] 1 containers: [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de]
	I0717 02:00:47.551069   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.555974   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:47.556045   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:47.609884   71146 cri.go:89] found id: "5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:47.609903   71146 cri.go:89] found id: ""
	I0717 02:00:47.609910   71146 logs.go:276] 1 containers: [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060]
	I0717 02:00:47.609968   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.615544   71146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:47.615603   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:47.653071   71146 cri.go:89] found id: ""
	I0717 02:00:47.653099   71146 logs.go:276] 0 containers: []
	W0717 02:00:47.653110   71146 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:47.653117   71146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 02:00:47.653163   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 02:00:47.690462   71146 cri.go:89] found id: "7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:47.690485   71146 cri.go:89] found id: "51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:47.690490   71146 cri.go:89] found id: ""
	I0717 02:00:47.690498   71146 logs.go:276] 2 containers: [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20]
	I0717 02:00:47.690545   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.695196   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.699099   71146 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:47.699117   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 02:00:47.816750   71146 logs.go:123] Gathering logs for kube-apiserver [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509] ...
	I0717 02:00:47.816782   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:46.932764   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:49.432402   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:47.869306   71146 logs.go:123] Gathering logs for coredns [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783] ...
	I0717 02:00:47.869341   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:47.906717   71146 logs.go:123] Gathering logs for kube-proxy [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de] ...
	I0717 02:00:47.906755   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:47.944125   71146 logs.go:123] Gathering logs for storage-provisioner [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465] ...
	I0717 02:00:47.944152   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:47.978632   71146 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:47.978664   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:48.482628   71146 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:48.482660   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:48.538252   71146 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:48.538300   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:48.553011   71146 logs.go:123] Gathering logs for kube-controller-manager [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060] ...
	I0717 02:00:48.553038   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:48.607632   71146 logs.go:123] Gathering logs for storage-provisioner [51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20] ...
	I0717 02:00:48.607666   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:48.646122   71146 logs.go:123] Gathering logs for container status ...
	I0717 02:00:48.646151   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:48.689948   71146 logs.go:123] Gathering logs for etcd [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787] ...
	I0717 02:00:48.689980   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:48.738285   71146 logs.go:123] Gathering logs for kube-scheduler [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745] ...
	I0717 02:00:48.738334   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:51.290996   71146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:51.308850   71146 api_server.go:72] duration metric: took 4m18.27461618s to wait for apiserver process to appear ...
	I0717 02:00:51.308873   71146 api_server.go:88] waiting for apiserver healthz status ...
	I0717 02:00:51.308907   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:51.308967   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:51.350827   71146 cri.go:89] found id: "ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:51.350857   71146 cri.go:89] found id: ""
	I0717 02:00:51.350866   71146 logs.go:276] 1 containers: [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509]
	I0717 02:00:51.350930   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.355308   71146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:51.355369   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:51.393804   71146 cri.go:89] found id: "b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:51.393831   71146 cri.go:89] found id: ""
	I0717 02:00:51.393840   71146 logs.go:276] 1 containers: [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787]
	I0717 02:00:51.393897   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.398144   71146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:51.398201   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:51.437974   71146 cri.go:89] found id: "110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:51.437991   71146 cri.go:89] found id: ""
	I0717 02:00:51.437998   71146 logs.go:276] 1 containers: [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783]
	I0717 02:00:51.438044   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.442318   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:51.442382   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:51.478462   71146 cri.go:89] found id: "211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:51.478481   71146 cri.go:89] found id: ""
	I0717 02:00:51.478489   71146 logs.go:276] 1 containers: [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745]
	I0717 02:00:51.478534   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.482624   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:51.482672   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:51.526089   71146 cri.go:89] found id: "0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:51.526114   71146 cri.go:89] found id: ""
	I0717 02:00:51.526123   71146 logs.go:276] 1 containers: [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de]
	I0717 02:00:51.526170   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.530855   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:51.530923   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:51.568875   71146 cri.go:89] found id: "5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:51.568899   71146 cri.go:89] found id: ""
	I0717 02:00:51.568908   71146 logs.go:276] 1 containers: [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060]
	I0717 02:00:51.568972   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.573300   71146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:51.573369   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:51.615775   71146 cri.go:89] found id: ""
	I0717 02:00:51.615800   71146 logs.go:276] 0 containers: []
	W0717 02:00:51.615809   71146 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:51.615815   71146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 02:00:51.615876   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 02:00:51.658100   71146 cri.go:89] found id: "7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:51.658124   71146 cri.go:89] found id: "51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:51.658130   71146 cri.go:89] found id: ""
	I0717 02:00:51.658138   71146 logs.go:276] 2 containers: [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20]
	I0717 02:00:51.658183   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.663030   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.667348   71146 logs.go:123] Gathering logs for kube-apiserver [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509] ...
	I0717 02:00:51.667372   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:51.715502   71146 logs.go:123] Gathering logs for etcd [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787] ...
	I0717 02:00:51.715534   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:51.763431   71146 logs.go:123] Gathering logs for kube-proxy [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de] ...
	I0717 02:00:51.763457   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:51.805523   71146 logs.go:123] Gathering logs for kube-controller-manager [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060] ...
	I0717 02:00:51.805553   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:51.859660   71146 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:51.859692   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 02:00:51.963831   71146 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:51.963858   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:51.978152   71146 logs.go:123] Gathering logs for coredns [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783] ...
	I0717 02:00:51.978179   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:52.023897   71146 logs.go:123] Gathering logs for kube-scheduler [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745] ...
	I0717 02:00:52.023926   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:52.062193   71146 logs.go:123] Gathering logs for storage-provisioner [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465] ...
	I0717 02:00:52.062218   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:52.098487   71146 logs.go:123] Gathering logs for storage-provisioner [51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20] ...
	I0717 02:00:52.098518   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:52.135733   71146 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:52.135758   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:52.562245   71146 logs.go:123] Gathering logs for container status ...
	I0717 02:00:52.562279   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:52.624258   71146 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:52.624288   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:51.434060   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:53.933730   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:55.176270   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 02:00:55.180760   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 200:
	ok
	I0717 02:00:55.181928   71146 api_server.go:141] control plane version: v1.30.2
	I0717 02:00:55.181947   71146 api_server.go:131] duration metric: took 3.873068874s to wait for apiserver health ...
	I0717 02:00:55.181955   71146 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 02:00:55.181975   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:55.182017   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:55.218028   71146 cri.go:89] found id: "ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:55.218059   71146 cri.go:89] found id: ""
	I0717 02:00:55.218068   71146 logs.go:276] 1 containers: [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509]
	I0717 02:00:55.218125   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.222841   71146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:55.222911   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:55.265613   71146 cri.go:89] found id: "b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:55.265638   71146 cri.go:89] found id: ""
	I0717 02:00:55.265647   71146 logs.go:276] 1 containers: [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787]
	I0717 02:00:55.265699   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.269866   71146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:55.269923   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:55.306363   71146 cri.go:89] found id: "110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:55.306390   71146 cri.go:89] found id: ""
	I0717 02:00:55.306400   71146 logs.go:276] 1 containers: [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783]
	I0717 02:00:55.306457   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.310843   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:55.310901   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:55.354417   71146 cri.go:89] found id: "211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:55.354439   71146 cri.go:89] found id: ""
	I0717 02:00:55.354449   71146 logs.go:276] 1 containers: [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745]
	I0717 02:00:55.354503   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.358988   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:55.359038   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:55.396457   71146 cri.go:89] found id: "0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:55.396480   71146 cri.go:89] found id: ""
	I0717 02:00:55.396488   71146 logs.go:276] 1 containers: [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de]
	I0717 02:00:55.396532   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.401185   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:55.401244   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:55.438249   71146 cri.go:89] found id: "5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:55.438276   71146 cri.go:89] found id: ""
	I0717 02:00:55.438286   71146 logs.go:276] 1 containers: [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060]
	I0717 02:00:55.438344   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.442967   71146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:55.443048   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:55.484173   71146 cri.go:89] found id: ""
	I0717 02:00:55.484197   71146 logs.go:276] 0 containers: []
	W0717 02:00:55.484205   71146 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:55.484210   71146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 02:00:55.484288   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 02:00:55.525757   71146 cri.go:89] found id: "7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:55.525780   71146 cri.go:89] found id: "51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:55.525784   71146 cri.go:89] found id: ""
	I0717 02:00:55.525790   71146 logs.go:276] 2 containers: [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20]
	I0717 02:00:55.525842   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.530253   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.534253   71146 logs.go:123] Gathering logs for etcd [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787] ...
	I0717 02:00:55.534275   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:55.578993   71146 logs.go:123] Gathering logs for kube-proxy [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de] ...
	I0717 02:00:55.579018   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:55.622746   71146 logs.go:123] Gathering logs for storage-provisioner [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465] ...
	I0717 02:00:55.622771   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:55.660900   71146 logs.go:123] Gathering logs for storage-provisioner [51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20] ...
	I0717 02:00:55.660931   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:55.709803   71146 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:55.709833   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:56.092339   71146 logs.go:123] Gathering logs for kube-scheduler [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745] ...
	I0717 02:00:56.092390   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:56.130951   71146 logs.go:123] Gathering logs for kube-controller-manager [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060] ...
	I0717 02:00:56.130976   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:56.186113   71146 logs.go:123] Gathering logs for container status ...
	I0717 02:00:56.186152   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:56.229794   71146 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:56.229839   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:56.285798   71146 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:56.285845   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:56.300391   71146 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:56.300421   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 02:00:56.425621   71146 logs.go:123] Gathering logs for kube-apiserver [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509] ...
	I0717 02:00:56.425653   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:56.478853   71146 logs.go:123] Gathering logs for coredns [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783] ...
	I0717 02:00:56.478882   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:59.026000   71146 system_pods.go:59] 8 kube-system pods found
	I0717 02:00:59.026028   71146 system_pods.go:61] "coredns-7db6d8ff4d-wcw97" [0dd50538-f54d-43f1-bd8a-b9d3131c13f7] Running
	I0717 02:00:59.026033   71146 system_pods.go:61] "etcd-embed-certs-940222" [80a0a87b-4b27-4940-b86b-6fda4a9c5168] Running
	I0717 02:00:59.026036   71146 system_pods.go:61] "kube-apiserver-embed-certs-940222" [566417b4-efef-46b2-8826-e15b8559e35f] Running
	I0717 02:00:59.026039   71146 system_pods.go:61] "kube-controller-manager-embed-certs-940222" [0ec7f574-5ec3-401c-85a9-b9a9f6b7b979] Running
	I0717 02:00:59.026042   71146 system_pods.go:61] "kube-proxy-l58xk" [feae4e89-4900-4399-bd06-7d179280667d] Running
	I0717 02:00:59.026045   71146 system_pods.go:61] "kube-scheduler-embed-certs-940222" [d0e73061-6d3b-41fc-b2ed-e8c45e204d6a] Running
	I0717 02:00:59.026051   71146 system_pods.go:61] "metrics-server-569cc877fc-rhp7b" [07ffb1fa-240e-4c40-9ce4-93a1b51e179b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 02:00:59.026054   71146 system_pods.go:61] "storage-provisioner" [35aab5a5-6e1b-4572-aabe-a73fb1632252] Running
	I0717 02:00:59.026062   71146 system_pods.go:74] duration metric: took 3.844102201s to wait for pod list to return data ...
	I0717 02:00:59.026069   71146 default_sa.go:34] waiting for default service account to be created ...
	I0717 02:00:59.028810   71146 default_sa.go:45] found service account: "default"
	I0717 02:00:59.028831   71146 default_sa.go:55] duration metric: took 2.756364ms for default service account to be created ...
	I0717 02:00:59.028838   71146 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 02:00:59.036427   71146 system_pods.go:86] 8 kube-system pods found
	I0717 02:00:59.036457   71146 system_pods.go:89] "coredns-7db6d8ff4d-wcw97" [0dd50538-f54d-43f1-bd8a-b9d3131c13f7] Running
	I0717 02:00:59.036466   71146 system_pods.go:89] "etcd-embed-certs-940222" [80a0a87b-4b27-4940-b86b-6fda4a9c5168] Running
	I0717 02:00:59.036474   71146 system_pods.go:89] "kube-apiserver-embed-certs-940222" [566417b4-efef-46b2-8826-e15b8559e35f] Running
	I0717 02:00:59.036482   71146 system_pods.go:89] "kube-controller-manager-embed-certs-940222" [0ec7f574-5ec3-401c-85a9-b9a9f6b7b979] Running
	I0717 02:00:59.036489   71146 system_pods.go:89] "kube-proxy-l58xk" [feae4e89-4900-4399-bd06-7d179280667d] Running
	I0717 02:00:59.036499   71146 system_pods.go:89] "kube-scheduler-embed-certs-940222" [d0e73061-6d3b-41fc-b2ed-e8c45e204d6a] Running
	I0717 02:00:59.036509   71146 system_pods.go:89] "metrics-server-569cc877fc-rhp7b" [07ffb1fa-240e-4c40-9ce4-93a1b51e179b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 02:00:59.036519   71146 system_pods.go:89] "storage-provisioner" [35aab5a5-6e1b-4572-aabe-a73fb1632252] Running
	I0717 02:00:59.036532   71146 system_pods.go:126] duration metric: took 7.688074ms to wait for k8s-apps to be running ...
	I0717 02:00:59.036542   71146 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 02:00:59.036594   71146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:00:59.052023   71146 system_svc.go:56] duration metric: took 15.474441ms WaitForService to wait for kubelet
	I0717 02:00:59.052049   71146 kubeadm.go:582] duration metric: took 4m26.017816269s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 02:00:59.052073   71146 node_conditions.go:102] verifying NodePressure condition ...
	I0717 02:00:59.054763   71146 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 02:00:59.054784   71146 node_conditions.go:123] node cpu capacity is 2
	I0717 02:00:59.054795   71146 node_conditions.go:105] duration metric: took 2.714349ms to run NodePressure ...
	I0717 02:00:59.054805   71146 start.go:241] waiting for startup goroutines ...
	I0717 02:00:59.054811   71146 start.go:246] waiting for cluster config update ...
	I0717 02:00:59.054824   71146 start.go:255] writing updated cluster config ...
	I0717 02:00:59.055069   71146 ssh_runner.go:195] Run: rm -f paused
	I0717 02:00:59.101243   71146 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 02:00:59.103341   71146 out.go:177] * Done! kubectl is now configured to use "embed-certs-940222" cluster and "default" namespace by default
	I0717 02:00:56.432853   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:58.433589   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:00.932978   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:02.933289   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:05.433003   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:07.433470   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:09.433795   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:11.933112   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:14.433274   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:16.932102   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:18.932904   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:20.933023   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:23.433153   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:24.926132   71603 pod_ready.go:81] duration metric: took 4m0.000155151s for pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace to be "Ready" ...
	E0717 02:01:24.926165   71603 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 02:01:24.926185   71603 pod_ready.go:38] duration metric: took 4m39.916322674s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 02:01:24.926214   71603 kubeadm.go:597] duration metric: took 5m27.432375382s to restartPrimaryControlPlane
	W0717 02:01:24.926303   71603 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 02:01:24.926339   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 02:01:51.790820   71603 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.86445583s)
	I0717 02:01:51.790901   71603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:01:51.812968   71603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 02:01:51.835689   71603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 02:01:51.848832   71603 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 02:01:51.848859   71603 kubeadm.go:157] found existing configuration files:
	
	I0717 02:01:51.848911   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 02:01:51.876554   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 02:01:51.876620   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 02:01:51.891580   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 02:01:51.901279   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 02:01:51.901328   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 02:01:51.910994   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 02:01:51.920959   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 02:01:51.921020   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 02:01:51.931039   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 02:01:51.940496   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 02:01:51.940549   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 02:01:51.950455   71603 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 02:01:51.999712   71603 kubeadm.go:310] W0717 02:01:51.966911    3034 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 02:01:52.000573   71603 kubeadm.go:310] W0717 02:01:51.967749    3034 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 02:01:52.132406   71603 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 02:02:01.065590   71603 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0717 02:02:01.065670   71603 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 02:02:01.065761   71603 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 02:02:01.065909   71603 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 02:02:01.066049   71603 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0717 02:02:01.066124   71603 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 02:02:01.067867   71603 out.go:204]   - Generating certificates and keys ...
	I0717 02:02:01.067966   71603 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 02:02:01.068043   71603 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 02:02:01.068139   71603 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 02:02:01.068210   71603 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 02:02:01.068310   71603 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 02:02:01.068391   71603 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 02:02:01.068471   71603 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 02:02:01.068523   71603 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 02:02:01.068585   71603 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 02:02:01.068650   71603 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 02:02:01.068683   71603 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 02:02:01.068752   71603 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 02:02:01.068822   71603 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 02:02:01.068906   71603 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 02:02:01.068970   71603 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 02:02:01.069057   71603 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 02:02:01.069157   71603 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 02:02:01.069271   71603 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 02:02:01.069369   71603 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 02:02:01.070772   71603 out.go:204]   - Booting up control plane ...
	I0717 02:02:01.070883   71603 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 02:02:01.070981   71603 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 02:02:01.071088   71603 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 02:02:01.071206   71603 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 02:02:01.071311   71603 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 02:02:01.071365   71603 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 02:02:01.071497   71603 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 02:02:01.071557   71603 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 02:02:01.071608   71603 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.044041ms
	I0717 02:02:01.071663   71603 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 02:02:01.071725   71603 kubeadm.go:310] [api-check] The API server is healthy after 5.501034024s
	I0717 02:02:01.071823   71603 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 02:02:01.071926   71603 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 02:02:01.071975   71603 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 02:02:01.072168   71603 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-391501 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 02:02:01.072238   71603 kubeadm.go:310] [bootstrap-token] Using token: jhnlja.0tmcz1ce1lkie6op
	I0717 02:02:01.073965   71603 out.go:204]   - Configuring RBAC rules ...
	I0717 02:02:01.074091   71603 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 02:02:01.074223   71603 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 02:02:01.074390   71603 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 02:02:01.074597   71603 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 02:02:01.074766   71603 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 02:02:01.074887   71603 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 02:02:01.075058   71603 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 02:02:01.075126   71603 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 02:02:01.075195   71603 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 02:02:01.075204   71603 kubeadm.go:310] 
	I0717 02:02:01.075255   71603 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 02:02:01.075262   71603 kubeadm.go:310] 
	I0717 02:02:01.075372   71603 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 02:02:01.075386   71603 kubeadm.go:310] 
	I0717 02:02:01.075419   71603 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 02:02:01.075498   71603 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 02:02:01.075582   71603 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 02:02:01.075604   71603 kubeadm.go:310] 
	I0717 02:02:01.075687   71603 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 02:02:01.075697   71603 kubeadm.go:310] 
	I0717 02:02:01.075759   71603 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 02:02:01.075771   71603 kubeadm.go:310] 
	I0717 02:02:01.075834   71603 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 02:02:01.075936   71603 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 02:02:01.076034   71603 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 02:02:01.076043   71603 kubeadm.go:310] 
	I0717 02:02:01.076142   71603 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 02:02:01.076248   71603 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 02:02:01.076256   71603 kubeadm.go:310] 
	I0717 02:02:01.076379   71603 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jhnlja.0tmcz1ce1lkie6op \
	I0717 02:02:01.076541   71603 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c106fa53fe39228066a4d6d0a3d1523262a277fcc4b4de2aef480ed92843f134 \
	I0717 02:02:01.076582   71603 kubeadm.go:310] 	--control-plane 
	I0717 02:02:01.076600   71603 kubeadm.go:310] 
	I0717 02:02:01.076708   71603 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 02:02:01.076719   71603 kubeadm.go:310] 
	I0717 02:02:01.076819   71603 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jhnlja.0tmcz1ce1lkie6op \
	I0717 02:02:01.076955   71603 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c106fa53fe39228066a4d6d0a3d1523262a277fcc4b4de2aef480ed92843f134 
	I0717 02:02:01.076972   71603 cni.go:84] Creating CNI manager for ""
	I0717 02:02:01.076981   71603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 02:02:01.078801   71603 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 02:02:01.080151   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 02:02:01.093210   71603 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 02:02:01.116656   71603 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 02:02:01.116712   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:01.116756   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-391501 minikube.k8s.io/updated_at=2024_07_17T02_02_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185 minikube.k8s.io/name=no-preload-391501 minikube.k8s.io/primary=true
	I0717 02:02:01.314407   71603 ops.go:34] apiserver oom_adj: -16
	I0717 02:02:01.314467   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:01.814693   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:02.315439   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:02.814676   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:03.314734   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:03.814702   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:04.315450   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:04.815112   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:05.315144   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:05.814712   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:05.921356   71603 kubeadm.go:1113] duration metric: took 4.80469441s to wait for elevateKubeSystemPrivileges
	I0717 02:02:05.921398   71603 kubeadm.go:394] duration metric: took 6m8.48278775s to StartCluster
	I0717 02:02:05.921420   71603 settings.go:142] acquiring lock: {Name:mk284e98550e98148329d8d8958a44beebf5635c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 02:02:05.921508   71603 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 02:02:05.923844   71603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 02:02:05.924156   71603 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 02:02:05.924254   71603 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 02:02:05.924328   71603 addons.go:69] Setting storage-provisioner=true in profile "no-preload-391501"
	I0717 02:02:05.924357   71603 addons.go:234] Setting addon storage-provisioner=true in "no-preload-391501"
	I0717 02:02:05.924355   71603 addons.go:69] Setting default-storageclass=true in profile "no-preload-391501"
	I0717 02:02:05.924364   71603 addons.go:69] Setting metrics-server=true in profile "no-preload-391501"
	I0717 02:02:05.924391   71603 config.go:182] Loaded profile config "no-preload-391501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 02:02:05.924398   71603 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-391501"
	I0717 02:02:05.924404   71603 addons.go:234] Setting addon metrics-server=true in "no-preload-391501"
	W0717 02:02:05.924414   71603 addons.go:243] addon metrics-server should already be in state true
	W0717 02:02:05.924368   71603 addons.go:243] addon storage-provisioner should already be in state true
	I0717 02:02:05.924447   71603 host.go:66] Checking if "no-preload-391501" exists ...
	I0717 02:02:05.924460   71603 host.go:66] Checking if "no-preload-391501" exists ...
	I0717 02:02:05.924801   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.924827   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.924834   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.924850   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.924874   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.924912   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.926034   71603 out.go:177] * Verifying Kubernetes components...
	I0717 02:02:05.927316   71603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 02:02:05.941502   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43181
	I0717 02:02:05.941716   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37257
	I0717 02:02:05.941969   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.942299   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.942492   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.942509   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.942873   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.942902   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.942933   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.943175   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 02:02:05.943250   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.943555   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43291
	I0717 02:02:05.943829   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.943862   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.943996   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.944648   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.944672   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.945037   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.945577   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.945613   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.947058   71603 addons.go:234] Setting addon default-storageclass=true in "no-preload-391501"
	W0717 02:02:05.947076   71603 addons.go:243] addon default-storageclass should already be in state true
	I0717 02:02:05.947103   71603 host.go:66] Checking if "no-preload-391501" exists ...
	I0717 02:02:05.947419   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.947447   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.960183   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44589
	I0717 02:02:05.960662   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.961220   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.961249   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.961532   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.961777   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 02:02:05.962531   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40785
	I0717 02:02:05.963063   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.964115   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 02:02:05.964120   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.964146   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.965195   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.965777   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42381
	I0717 02:02:05.965802   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.965845   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.966114   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.966615   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.966635   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.966706   71603 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 02:02:05.967037   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.967228   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 02:02:05.968069   71603 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 02:02:05.968101   71603 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 02:02:05.968121   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 02:02:05.969421   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 02:02:05.971055   71603 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 02:02:05.972019   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.972494   71603 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 02:02:05.972515   71603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 02:02:05.972533   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 02:02:05.972622   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 02:02:05.972646   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.973122   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 02:02:05.973289   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 02:02:05.973415   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 02:02:05.973638   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 02:02:05.975702   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.976091   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 02:02:05.976110   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.976376   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 02:02:05.976553   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 02:02:05.976717   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 02:02:05.976866   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 02:02:05.983061   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44967
	I0717 02:02:05.983397   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.983851   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.983867   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.984150   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.984319   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 02:02:05.985757   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 02:02:05.985973   71603 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 02:02:05.985985   71603 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 02:02:05.986000   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 02:02:05.989238   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.989627   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 02:02:05.989647   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.989890   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 02:02:05.990056   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 02:02:05.990212   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 02:02:05.990412   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 02:02:06.238449   71603 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 02:02:06.272217   71603 node_ready.go:35] waiting up to 6m0s for node "no-preload-391501" to be "Ready" ...
	I0717 02:02:06.281012   71603 node_ready.go:49] node "no-preload-391501" has status "Ready":"True"
	I0717 02:02:06.281031   71603 node_ready.go:38] duration metric: took 8.787329ms for node "no-preload-391501" to be "Ready" ...
	I0717 02:02:06.281040   71603 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 02:02:06.297250   71603 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:06.386971   71603 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 02:02:06.386995   71603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 02:02:06.439822   71603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 02:02:06.460362   71603 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 02:02:06.460391   71603 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 02:02:06.468640   71603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 02:02:06.551454   71603 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 02:02:06.551482   71603 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 02:02:06.727518   71603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 02:02:07.338701   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.338778   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.338874   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.338900   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.339119   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.339217   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.339230   71603 main.go:141] libmachine: (no-preload-391501) DBG | Closing plugin on server side
	I0717 02:02:07.339273   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.339291   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.339301   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.339314   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.339240   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.339386   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.339575   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.339592   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.339648   71603 main.go:141] libmachine: (no-preload-391501) DBG | Closing plugin on server side
	I0717 02:02:07.339711   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.339736   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.357948   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.357966   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.358197   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.358212   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.694612   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.694690   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.695028   71603 main.go:141] libmachine: (no-preload-391501) DBG | Closing plugin on server side
	I0717 02:02:07.695109   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.695122   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.695148   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.695160   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.695404   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.695421   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.695432   71603 addons.go:475] Verifying addon metrics-server=true in "no-preload-391501"
	I0717 02:02:07.698298   71603 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
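	(The storage-provisioner, default-storageclass and metrics-server addons are now enabled on "no-preload-391501". Since several of the failing tests above wait on the metrics-server pod becoming Ready, an illustrative way to watch that condition outside the test harness, assuming the addon's usual k8s-app=metrics-server label, would be:
	
	    kubectl --context no-preload-391501 -n kube-system get deploy metrics-server
	    kubectl --context no-preload-391501 -n kube-system get pods -l k8s-app=metrics-server
	
	The interleaved lines from pid 71929 below belong to the old-k8s-version (v1.20.0) cluster, whose kubeadm init is failing in parallel.)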
	I0717 02:02:08.622411   71929 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 02:02:08.622531   71929 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 02:02:08.624111   71929 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 02:02:08.624168   71929 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 02:02:08.624265   71929 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 02:02:08.624391   71929 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 02:02:08.624526   71929 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 02:02:08.624604   71929 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 02:02:08.626394   71929 out.go:204]   - Generating certificates and keys ...
	I0717 02:02:08.626478   71929 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 02:02:08.626574   71929 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 02:02:08.626657   71929 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 02:02:08.626735   71929 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 02:02:08.626830   71929 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 02:02:08.626909   71929 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 02:02:08.627001   71929 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 02:02:08.627095   71929 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 02:02:08.627203   71929 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 02:02:08.627325   71929 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 02:02:08.627392   71929 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 02:02:08.627469   71929 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 02:02:08.627573   71929 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 02:02:08.627663   71929 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 02:02:08.627753   71929 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 02:02:08.627836   71929 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 02:02:08.627997   71929 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 02:02:08.628107   71929 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 02:02:08.628179   71929 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 02:02:08.628272   71929 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 02:02:08.630262   71929 out.go:204]   - Booting up control plane ...
	I0717 02:02:08.630372   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 02:02:08.630477   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 02:02:08.630594   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 02:02:08.630729   71929 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 02:02:08.630960   71929 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 02:02:08.631020   71929 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 02:02:08.631099   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.631293   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.631394   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.631648   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.631748   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.631925   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.632050   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.632253   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.632327   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.632528   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.632546   71929 kubeadm.go:310] 
	I0717 02:02:08.632611   71929 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 02:02:08.632671   71929 kubeadm.go:310] 		timed out waiting for the condition
	I0717 02:02:08.632689   71929 kubeadm.go:310] 
	I0717 02:02:08.632729   71929 kubeadm.go:310] 	This error is likely caused by:
	I0717 02:02:08.632772   71929 kubeadm.go:310] 		- The kubelet is not running
	I0717 02:02:08.632902   71929 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 02:02:08.632914   71929 kubeadm.go:310] 
	I0717 02:02:08.633001   71929 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 02:02:08.633030   71929 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 02:02:08.633075   71929 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 02:02:08.633092   71929 kubeadm.go:310] 
	I0717 02:02:08.633204   71929 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 02:02:08.633281   71929 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 02:02:08.633306   71929 kubeadm.go:310] 
	I0717 02:02:08.633450   71929 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 02:02:08.633535   71929 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 02:02:08.633597   71929 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 02:02:08.633668   71929 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 02:02:08.633697   71929 kubeadm.go:310] 
	W0717 02:02:08.633780   71929 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0717 02:02:08.633821   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 02:02:09.101394   71929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:02:09.119918   71929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 02:02:09.130974   71929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 02:02:09.131002   71929 kubeadm.go:157] found existing configuration files:
	
	I0717 02:02:09.131046   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 02:02:09.142720   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 02:02:09.142790   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 02:02:09.154990   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 02:02:09.166317   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 02:02:09.166379   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 02:02:09.176756   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 02:02:09.186639   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 02:02:09.186697   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 02:02:09.196778   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 02:02:09.206420   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 02:02:09.206469   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 02:02:09.216325   71929 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 02:02:09.293311   71929 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 02:02:09.293457   71929 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 02:02:09.442386   71929 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 02:02:09.442594   71929 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 02:02:09.442736   71929 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 02:02:09.618387   71929 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 02:02:07.699645   71603 addons.go:510] duration metric: took 1.775390854s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 02:02:08.305410   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace has status "Ready":"False"
	I0717 02:02:09.620394   71929 out.go:204]   - Generating certificates and keys ...
	I0717 02:02:09.620496   71929 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 02:02:09.620593   71929 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 02:02:09.620691   71929 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 02:02:09.620791   71929 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 02:02:09.620909   71929 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 02:02:09.621004   71929 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 02:02:09.621117   71929 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 02:02:09.621364   71929 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 02:02:09.621778   71929 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 02:02:09.622072   71929 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 02:02:09.622135   71929 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 02:02:09.622225   71929 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 02:02:09.990964   71929 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 02:02:10.434990   71929 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 02:02:10.579785   71929 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 02:02:10.723319   71929 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 02:02:10.746923   71929 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 02:02:10.748370   71929 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 02:02:10.748460   71929 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 02:02:10.888855   71929 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 02:02:10.890727   71929 out.go:204]   - Booting up control plane ...
	I0717 02:02:10.890860   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 02:02:10.893530   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 02:02:10.894934   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 02:02:10.896825   71929 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 02:02:10.899127   71929 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 02:02:10.806868   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace has status "Ready":"False"
	I0717 02:02:12.804727   71603 pod_ready.go:92] pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:12.804754   71603 pod_ready.go:81] duration metric: took 6.507471417s for pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:12.804763   71603 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-tn5jv" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:12.812383   71603 pod_ready.go:92] pod "coredns-5cfdc65f69-tn5jv" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:12.812408   71603 pod_ready.go:81] duration metric: took 7.638012ms for pod "coredns-5cfdc65f69-tn5jv" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:12.812420   71603 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.320241   71603 pod_ready.go:92] pod "etcd-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:13.320263   71603 pod_ready.go:81] duration metric: took 507.836128ms for pod "etcd-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.320285   71603 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.326308   71603 pod_ready.go:92] pod "kube-apiserver-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:13.326332   71603 pod_ready.go:81] duration metric: took 6.041207ms for pod "kube-apiserver-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.326341   71603 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.331310   71603 pod_ready.go:92] pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:13.331338   71603 pod_ready.go:81] duration metric: took 4.988207ms for pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.331360   71603 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gl7th" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.602634   71603 pod_ready.go:92] pod "kube-proxy-gl7th" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:13.602677   71603 pod_ready.go:81] duration metric: took 271.310877ms for pod "kube-proxy-gl7th" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.602687   71603 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:14.002256   71603 pod_ready.go:92] pod "kube-scheduler-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:14.002282   71603 pod_ready.go:81] duration metric: took 399.588324ms for pod "kube-scheduler-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:14.002290   71603 pod_ready.go:38] duration metric: took 7.721240931s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 02:02:14.002306   71603 api_server.go:52] waiting for apiserver process to appear ...
	I0717 02:02:14.002355   71603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:02:14.016981   71603 api_server.go:72] duration metric: took 8.092789001s to wait for apiserver process to appear ...
	I0717 02:02:14.017007   71603 api_server.go:88] waiting for apiserver healthz status ...
	I0717 02:02:14.017026   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 02:02:14.022008   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 200:
	ok
	I0717 02:02:14.022992   71603 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 02:02:14.023010   71603 api_server.go:131] duration metric: took 5.997297ms to wait for apiserver health ...
	I0717 02:02:14.023016   71603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 02:02:14.204777   71603 system_pods.go:59] 9 kube-system pods found
	I0717 02:02:14.204806   71603 system_pods.go:61] "coredns-5cfdc65f69-5lstd" [71b74210-7395-4a48-8e1b-b49fb2faea43] Running
	I0717 02:02:14.204811   71603 system_pods.go:61] "coredns-5cfdc65f69-tn5jv" [482276d3-bfe2-4538-9dfe-a2a87a02182c] Running
	I0717 02:02:14.204816   71603 system_pods.go:61] "etcd-no-preload-391501" [c13d6752-3152-45e7-b2b9-a5275a4b42c5] Running
	I0717 02:02:14.204819   71603 system_pods.go:61] "kube-apiserver-no-preload-391501" [ba1d9920-dcaa-48d2-887b-f476d874d9ea] Running
	I0717 02:02:14.204823   71603 system_pods.go:61] "kube-controller-manager-no-preload-391501" [5e1e6aec-31b9-4b7c-a59b-f39a73b2e9a3] Running
	I0717 02:02:14.204826   71603 system_pods.go:61] "kube-proxy-gl7th" [320d9fae-f5b8-47bd-afc0-88e07e23157a] Running
	I0717 02:02:14.204829   71603 system_pods.go:61] "kube-scheduler-no-preload-391501" [a091b866-df88-4b9b-8893-bc6022704680] Running
	I0717 02:02:14.204836   71603 system_pods.go:61] "metrics-server-78fcd8795b-tnrht" [af70d47e-8e45-4e5d-bceb-e01a6c1851ff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 02:02:14.204839   71603 system_pods.go:61] "storage-provisioner" [742baa9b-d48e-4be9-8c33-64d42e1ff169] Running
	I0717 02:02:14.204847   71603 system_pods.go:74] duration metric: took 181.825073ms to wait for pod list to return data ...
	I0717 02:02:14.204854   71603 default_sa.go:34] waiting for default service account to be created ...
	I0717 02:02:14.402964   71603 default_sa.go:45] found service account: "default"
	I0717 02:02:14.402992   71603 default_sa.go:55] duration metric: took 198.131224ms for default service account to be created ...
	I0717 02:02:14.403005   71603 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 02:02:14.606371   71603 system_pods.go:86] 9 kube-system pods found
	I0717 02:02:14.606408   71603 system_pods.go:89] "coredns-5cfdc65f69-5lstd" [71b74210-7395-4a48-8e1b-b49fb2faea43] Running
	I0717 02:02:14.606418   71603 system_pods.go:89] "coredns-5cfdc65f69-tn5jv" [482276d3-bfe2-4538-9dfe-a2a87a02182c] Running
	I0717 02:02:14.606424   71603 system_pods.go:89] "etcd-no-preload-391501" [c13d6752-3152-45e7-b2b9-a5275a4b42c5] Running
	I0717 02:02:14.606430   71603 system_pods.go:89] "kube-apiserver-no-preload-391501" [ba1d9920-dcaa-48d2-887b-f476d874d9ea] Running
	I0717 02:02:14.606438   71603 system_pods.go:89] "kube-controller-manager-no-preload-391501" [5e1e6aec-31b9-4b7c-a59b-f39a73b2e9a3] Running
	I0717 02:02:14.606444   71603 system_pods.go:89] "kube-proxy-gl7th" [320d9fae-f5b8-47bd-afc0-88e07e23157a] Running
	I0717 02:02:14.606450   71603 system_pods.go:89] "kube-scheduler-no-preload-391501" [a091b866-df88-4b9b-8893-bc6022704680] Running
	I0717 02:02:14.606461   71603 system_pods.go:89] "metrics-server-78fcd8795b-tnrht" [af70d47e-8e45-4e5d-bceb-e01a6c1851ff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 02:02:14.606474   71603 system_pods.go:89] "storage-provisioner" [742baa9b-d48e-4be9-8c33-64d42e1ff169] Running
	I0717 02:02:14.606486   71603 system_pods.go:126] duration metric: took 203.473728ms to wait for k8s-apps to be running ...
	I0717 02:02:14.606497   71603 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 02:02:14.606568   71603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:02:14.622178   71603 system_svc.go:56] duration metric: took 15.671962ms WaitForService to wait for kubelet
	I0717 02:02:14.622211   71603 kubeadm.go:582] duration metric: took 8.698021688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 02:02:14.622234   71603 node_conditions.go:102] verifying NodePressure condition ...
	I0717 02:02:14.802282   71603 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 02:02:14.802309   71603 node_conditions.go:123] node cpu capacity is 2
	I0717 02:02:14.802319   71603 node_conditions.go:105] duration metric: took 180.080727ms to run NodePressure ...
	I0717 02:02:14.802330   71603 start.go:241] waiting for startup goroutines ...
	I0717 02:02:14.802337   71603 start.go:246] waiting for cluster config update ...
	I0717 02:02:14.802345   71603 start.go:255] writing updated cluster config ...
	I0717 02:02:14.802613   71603 ssh_runner.go:195] Run: rm -f paused
	I0717 02:02:14.848725   71603 start.go:600] kubectl: 1.30.2, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0717 02:02:14.850965   71603 out.go:177] * Done! kubectl is now configured to use "no-preload-391501" cluster and "default" namespace by default
	I0717 02:02:50.900829   71929 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 02:02:50.901350   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:50.901626   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:55.902558   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:55.902805   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:03:05.903753   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:03:05.904033   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:03:25.905383   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:03:25.905597   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:04:05.906576   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:04:05.906960   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:04:05.906992   71929 kubeadm.go:310] 
	I0717 02:04:05.907049   71929 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 02:04:05.907133   71929 kubeadm.go:310] 		timed out waiting for the condition
	I0717 02:04:05.907182   71929 kubeadm.go:310] 
	I0717 02:04:05.907252   71929 kubeadm.go:310] 	This error is likely caused by:
	I0717 02:04:05.907339   71929 kubeadm.go:310] 		- The kubelet is not running
	I0717 02:04:05.907516   71929 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 02:04:05.907529   71929 kubeadm.go:310] 
	I0717 02:04:05.907661   71929 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 02:04:05.907699   71929 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 02:04:05.907743   71929 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 02:04:05.907751   71929 kubeadm.go:310] 
	I0717 02:04:05.907907   71929 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 02:04:05.908043   71929 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 02:04:05.908053   71929 kubeadm.go:310] 
	I0717 02:04:05.908221   71929 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 02:04:05.908435   71929 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 02:04:05.908619   71929 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 02:04:05.908738   71929 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 02:04:05.908788   71929 kubeadm.go:310] 
	I0717 02:04:05.909079   71929 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 02:04:05.909286   71929 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 02:04:05.909452   71929 kubeadm.go:394] duration metric: took 7m58.01930975s to StartCluster
	I0717 02:04:05.909455   71929 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 02:04:05.909494   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:04:05.909552   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:04:05.952911   71929 cri.go:89] found id: ""
	I0717 02:04:05.952937   71929 logs.go:276] 0 containers: []
	W0717 02:04:05.952949   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 02:04:05.952957   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:04:05.953026   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:04:05.988490   71929 cri.go:89] found id: ""
	I0717 02:04:05.988518   71929 logs.go:276] 0 containers: []
	W0717 02:04:05.988529   71929 logs.go:278] No container was found matching "etcd"
	I0717 02:04:05.988537   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:04:05.988593   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:04:06.025228   71929 cri.go:89] found id: ""
	I0717 02:04:06.025259   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.025269   71929 logs.go:278] No container was found matching "coredns"
	I0717 02:04:06.025277   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:04:06.025342   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:04:06.060563   71929 cri.go:89] found id: ""
	I0717 02:04:06.060589   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.060599   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 02:04:06.060604   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:04:06.060660   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:04:06.095051   71929 cri.go:89] found id: ""
	I0717 02:04:06.095079   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.095091   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 02:04:06.095099   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:04:06.095150   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:04:06.131892   71929 cri.go:89] found id: ""
	I0717 02:04:06.131914   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.131921   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 02:04:06.131927   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:04:06.131973   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:04:06.168893   71929 cri.go:89] found id: ""
	I0717 02:04:06.168919   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.168930   71929 logs.go:278] No container was found matching "kindnet"
	I0717 02:04:06.168937   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 02:04:06.168995   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 02:04:06.206635   71929 cri.go:89] found id: ""
	I0717 02:04:06.206658   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.206668   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 02:04:06.206679   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:04:06.206693   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 02:04:06.308601   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:04:06.308624   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:04:06.308637   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:04:06.422081   71929 logs.go:123] Gathering logs for container status ...
	I0717 02:04:06.422116   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:04:06.467466   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 02:04:06.467496   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:04:06.521420   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 02:04:06.521457   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0717 02:04:06.535167   71929 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 02:04:06.535211   71929 out.go:239] * 
	W0717 02:04:06.535263   71929 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 02:04:06.535292   71929 out.go:239] * 
	W0717 02:04:06.536098   71929 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 02:04:06.539314   71929 out.go:177] 
	W0717 02:04:06.540504   71929 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 02:04:06.540557   71929 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 02:04:06.540579   71929 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 02:04:06.541888   71929 out.go:177] 
	
	
	==> CRI-O <==
	Jul 17 02:04:08 old-k8s-version-901761 crio[644]: time="2024-07-17 02:04:08.393189736Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721181848393162317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a01985f7-5092-4176-b2a7-bb90055fba6d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:04:08 old-k8s-version-901761 crio[644]: time="2024-07-17 02:04:08.393832146Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3e9463b8-79c4-4e68-b887-6794f14aa553 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:04:08 old-k8s-version-901761 crio[644]: time="2024-07-17 02:04:08.393879334Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3e9463b8-79c4-4e68-b887-6794f14aa553 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:04:08 old-k8s-version-901761 crio[644]: time="2024-07-17 02:04:08.393913776Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3e9463b8-79c4-4e68-b887-6794f14aa553 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:04:08 old-k8s-version-901761 crio[644]: time="2024-07-17 02:04:08.431659387Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=322620c0-1337-4d20-a16c-a63e33a1cd7e name=/runtime.v1.RuntimeService/Version
	Jul 17 02:04:08 old-k8s-version-901761 crio[644]: time="2024-07-17 02:04:08.431808673Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=322620c0-1337-4d20-a16c-a63e33a1cd7e name=/runtime.v1.RuntimeService/Version
	Jul 17 02:04:08 old-k8s-version-901761 crio[644]: time="2024-07-17 02:04:08.434453598Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=73b555bb-4140-4011-be3d-601097840148 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:04:08 old-k8s-version-901761 crio[644]: time="2024-07-17 02:04:08.434920445Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721181848434878497,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=73b555bb-4140-4011-be3d-601097840148 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:04:08 old-k8s-version-901761 crio[644]: time="2024-07-17 02:04:08.436470810Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ddfcf6f1-fb1f-40ed-8b67-e7f9d30c0f51 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:04:08 old-k8s-version-901761 crio[644]: time="2024-07-17 02:04:08.436525493Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ddfcf6f1-fb1f-40ed-8b67-e7f9d30c0f51 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:04:08 old-k8s-version-901761 crio[644]: time="2024-07-17 02:04:08.436564336Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ddfcf6f1-fb1f-40ed-8b67-e7f9d30c0f51 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:04:08 old-k8s-version-901761 crio[644]: time="2024-07-17 02:04:08.470430518Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a8bcbaa8-bb4b-46fb-9721-ee3e9f45c80e name=/runtime.v1.RuntimeService/Version
	Jul 17 02:04:08 old-k8s-version-901761 crio[644]: time="2024-07-17 02:04:08.470523620Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a8bcbaa8-bb4b-46fb-9721-ee3e9f45c80e name=/runtime.v1.RuntimeService/Version
	Jul 17 02:04:08 old-k8s-version-901761 crio[644]: time="2024-07-17 02:04:08.471763686Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9d3f89f9-4542-4a2b-9da7-cf180ba78d8b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:04:08 old-k8s-version-901761 crio[644]: time="2024-07-17 02:04:08.472118777Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721181848472098487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9d3f89f9-4542-4a2b-9da7-cf180ba78d8b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:04:08 old-k8s-version-901761 crio[644]: time="2024-07-17 02:04:08.472593470Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f425099-2c52-4d75-b676-3442cc5d36e5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:04:08 old-k8s-version-901761 crio[644]: time="2024-07-17 02:04:08.472641058Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f425099-2c52-4d75-b676-3442cc5d36e5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:04:08 old-k8s-version-901761 crio[644]: time="2024-07-17 02:04:08.472673997Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0f425099-2c52-4d75-b676-3442cc5d36e5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:04:08 old-k8s-version-901761 crio[644]: time="2024-07-17 02:04:08.504471690Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d45f7399-3e3c-4d7c-8fed-a0b4ccc1a223 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:04:08 old-k8s-version-901761 crio[644]: time="2024-07-17 02:04:08.504544401Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d45f7399-3e3c-4d7c-8fed-a0b4ccc1a223 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:04:08 old-k8s-version-901761 crio[644]: time="2024-07-17 02:04:08.505795920Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8b6911ba-3ac3-4af1-b86f-65cbc60aa603 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:04:08 old-k8s-version-901761 crio[644]: time="2024-07-17 02:04:08.506144843Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721181848506126278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8b6911ba-3ac3-4af1-b86f-65cbc60aa603 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:04:08 old-k8s-version-901761 crio[644]: time="2024-07-17 02:04:08.506743080Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f4e4415-3d81-4765-9c37-7d2af543c5a4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:04:08 old-k8s-version-901761 crio[644]: time="2024-07-17 02:04:08.506797257Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f4e4415-3d81-4765-9c37-7d2af543c5a4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:04:08 old-k8s-version-901761 crio[644]: time="2024-07-17 02:04:08.506830220Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3f4e4415-3d81-4765-9c37-7d2af543c5a4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul17 01:55] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.060095] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.053379] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.696762] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.451232] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.600989] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.496189] systemd-fstab-generator[565]: Ignoring "noauto" option for root device
	[  +0.063928] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058024] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.198095] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.159661] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.276256] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[Jul17 01:56] systemd-fstab-generator[830]: Ignoring "noauto" option for root device
	[  +0.060021] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.876226] systemd-fstab-generator[954]: Ignoring "noauto" option for root device
	[ +12.568707] kauditd_printk_skb: 46 callbacks suppressed
	[Jul17 02:00] systemd-fstab-generator[5018]: Ignoring "noauto" option for root device
	[Jul17 02:02] systemd-fstab-generator[5295]: Ignoring "noauto" option for root device
	[  +0.065589] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 02:04:08 up 8 min,  0 users,  load average: 0.01, 0.04, 0.01
	Linux old-k8s-version-901761 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 17 02:04:05 old-k8s-version-901761 kubelet[5477]:         /usr/local/go/src/net/lookup.go:299 +0x685
	Jul 17 02:04:05 old-k8s-version-901761 kubelet[5477]: net.(*Resolver).internetAddrList(0x70c5740, 0x4f7fe40, 0xc000bfe240, 0x48ab5d6, 0x3, 0xc000bab9b0, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 17 02:04:05 old-k8s-version-901761 kubelet[5477]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Jul 17 02:04:05 old-k8s-version-901761 kubelet[5477]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000bfe240, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000bab9b0, 0x24, 0x0, ...)
	Jul 17 02:04:05 old-k8s-version-901761 kubelet[5477]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Jul 17 02:04:05 old-k8s-version-901761 kubelet[5477]: net.(*Dialer).DialContext(0xc00003bbc0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000bab9b0, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 17 02:04:05 old-k8s-version-901761 kubelet[5477]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Jul 17 02:04:05 old-k8s-version-901761 kubelet[5477]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000ad8a80, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000bab9b0, 0x24, 0x60, 0x7f9ffa354288, 0x118, ...)
	Jul 17 02:04:05 old-k8s-version-901761 kubelet[5477]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Jul 17 02:04:05 old-k8s-version-901761 kubelet[5477]: net/http.(*Transport).dial(0xc000754140, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000bab9b0, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 17 02:04:05 old-k8s-version-901761 kubelet[5477]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Jul 17 02:04:05 old-k8s-version-901761 kubelet[5477]: net/http.(*Transport).dialConn(0xc000754140, 0x4f7fe00, 0xc000120018, 0x0, 0xc000beac00, 0x5, 0xc000bab9b0, 0x24, 0x0, 0xc0006d57a0, ...)
	Jul 17 02:04:05 old-k8s-version-901761 kubelet[5477]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Jul 17 02:04:05 old-k8s-version-901761 kubelet[5477]: net/http.(*Transport).dialConnFor(0xc000754140, 0xc0008ed1e0)
	Jul 17 02:04:05 old-k8s-version-901761 kubelet[5477]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Jul 17 02:04:05 old-k8s-version-901761 kubelet[5477]: created by net/http.(*Transport).queueForDial
	Jul 17 02:04:05 old-k8s-version-901761 kubelet[5477]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Jul 17 02:04:06 old-k8s-version-901761 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jul 17 02:04:06 old-k8s-version-901761 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 17 02:04:06 old-k8s-version-901761 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 17 02:04:06 old-k8s-version-901761 kubelet[5526]: I0717 02:04:06.338389    5526 server.go:416] Version: v1.20.0
	Jul 17 02:04:06 old-k8s-version-901761 kubelet[5526]: I0717 02:04:06.338650    5526 server.go:837] Client rotation is on, will bootstrap in background
	Jul 17 02:04:06 old-k8s-version-901761 kubelet[5526]: I0717 02:04:06.340522    5526 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 17 02:04:06 old-k8s-version-901761 kubelet[5526]: W0717 02:04:06.341374    5526 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 17 02:04:06 old-k8s-version-901761 kubelet[5526]: I0717 02:04:06.341767    5526 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-901761 -n old-k8s-version-901761
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-901761 -n old-k8s-version-901761: exit status 2 (219.851653ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-901761" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (742.71s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0717 02:00:03.264677   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/custom-flannel-894370/client.crt: no such file or directory
E0717 02:00:14.901336   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/enable-default-cni-894370/client.crt: no such file or directory
E0717 02:00:17.179816   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-738184 -n default-k8s-diff-port-738184
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-17 02:08:55.981810863 +0000 UTC m=+6435.029691971
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
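The wait above polls the kubernetes-dashboard namespace for pods carrying the k8s-app=kubernetes-dashboard label. A minimal manual check equivalent to that wait, assuming the profile's kubeconfig context is named after the profile, would be:

    kubectl --context default-k8s-diff-port-738184 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    kubectl --context default-k8s-diff-port-738184 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s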
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-738184 -n default-k8s-diff-port-738184
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-738184 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-738184 logs -n 25: (2.080625361s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-894370 sudo cat                              | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo                                  | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo                                  | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo                                  | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo find                             | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo crio                             | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-894370                                       | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	| delete  | -p                                                     | disable-driver-mounts-255698 | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | disable-driver-mounts-255698                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:48 UTC |
	|         | default-k8s-diff-port-738184                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-940222            | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-940222                                  | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-738184  | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC | 17 Jul 24 01:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC |                     |
	|         | default-k8s-diff-port-738184                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-391501             | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC | 17 Jul 24 01:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-391501                                   | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-940222                 | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-901761        | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-940222                                  | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC | 17 Jul 24 02:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-738184       | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-391501                  | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:59 UTC |
	|         | default-k8s-diff-port-738184                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p no-preload-391501 --memory=2200                     | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 02:02 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-901761                              | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-901761             | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-901761                              | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 01:51:47
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 01:51:47.395737   71929 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:51:47.396000   71929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:51:47.396010   71929 out.go:304] Setting ErrFile to fd 2...
	I0717 01:51:47.396016   71929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:51:47.396184   71929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 01:51:47.396684   71929 out.go:298] Setting JSON to false
	I0717 01:51:47.397549   71929 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5649,"bootTime":1721175458,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:51:47.397606   71929 start.go:139] virtualization: kvm guest
	I0717 01:51:47.399758   71929 out.go:177] * [old-k8s-version-901761] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:51:47.400960   71929 notify.go:220] Checking for updates...
	I0717 01:51:47.400966   71929 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 01:51:47.402266   71929 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:51:47.403356   71929 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:51:47.404532   71929 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 01:51:47.405524   71929 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:51:47.406572   71929 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:51:47.407935   71929 config.go:182] Loaded profile config "old-k8s-version-901761": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 01:51:47.408358   71929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:51:47.408427   71929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:51:47.422931   71929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46821
	I0717 01:51:47.423315   71929 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:51:47.423809   71929 main.go:141] libmachine: Using API Version  1
	I0717 01:51:47.423831   71929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:51:47.424123   71929 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:51:47.424259   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:51:47.426227   71929 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0717 01:51:47.427500   71929 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:51:47.427770   71929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:51:47.427801   71929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:51:47.442080   71929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36301
	I0717 01:51:47.442438   71929 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:51:47.442901   71929 main.go:141] libmachine: Using API Version  1
	I0717 01:51:47.442924   71929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:51:47.443208   71929 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:51:47.443382   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:51:47.476327   71929 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 01:51:47.477607   71929 start.go:297] selected driver: kvm2
	I0717 01:51:47.477620   71929 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:51:47.477762   71929 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:51:47.478432   71929 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:51:47.478541   71929 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19264-3908/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:51:47.493611   71929 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:51:47.493967   71929 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:51:47.494039   71929 cni.go:84] Creating CNI manager for ""
	I0717 01:51:47.494056   71929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:51:47.494147   71929 start.go:340] cluster config:
	{Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:51:47.494271   71929 iso.go:125] acquiring lock: {Name:mk1e382ea32906eee794c9b90164432d2562b456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:51:47.496056   71929 out.go:177] * Starting "old-k8s-version-901761" primary control-plane node in "old-k8s-version-901761" cluster
	I0717 01:51:45.178864   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:51:47.497229   71929 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 01:51:47.497266   71929 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 01:51:47.497279   71929 cache.go:56] Caching tarball of preloaded images
	I0717 01:51:47.497368   71929 preload.go:172] Found /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 01:51:47.497379   71929 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0717 01:51:47.497484   71929 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/config.json ...
	I0717 01:51:47.497671   71929 start.go:360] acquireMachinesLock for old-k8s-version-901761: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:51:51.258826   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:51:54.330879   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:00.410811   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:03.482811   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:09.562828   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:12.634828   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:18.714910   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:21.786892   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:27.866863   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:30.938805   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:37.022827   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:40.090853   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:46.170839   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:49.242854   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:55.322824   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:58.394792   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:04.474811   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:07.546855   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:13.626861   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:16.698832   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:22.778828   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:25.850864   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:31.930814   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:35.002842   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:41.082839   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:44.154796   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:50.234823   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:53.306914   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:59.386835   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:02.458751   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:08.538853   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:11.610833   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:17.690816   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:20.762793   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:26.842837   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:29.914866   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:35.994838   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:39.066806   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:45.146846   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:48.218841   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:54.298823   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:57.370838   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:55:00.375050   71522 start.go:364] duration metric: took 3m54.700923144s to acquireMachinesLock for "default-k8s-diff-port-738184"
	I0717 01:55:00.375103   71522 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:55:00.375110   71522 fix.go:54] fixHost starting: 
	I0717 01:55:00.375500   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:00.375532   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:00.390583   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39651
	I0717 01:55:00.390957   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:00.391392   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:00.391412   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:00.391704   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:00.391927   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:00.392069   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:00.393467   71522 fix.go:112] recreateIfNeeded on default-k8s-diff-port-738184: state=Stopped err=<nil>
	I0717 01:55:00.393508   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	W0717 01:55:00.393658   71522 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:55:00.395826   71522 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-738184" ...
	I0717 01:55:00.397256   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Start
	I0717 01:55:00.397401   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Ensuring networks are active...
	I0717 01:55:00.398079   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Ensuring network default is active
	I0717 01:55:00.398390   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Ensuring network mk-default-k8s-diff-port-738184 is active
	I0717 01:55:00.398710   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Getting domain xml...
	I0717 01:55:00.399275   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Creating domain...
	I0717 01:55:00.372573   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:55:00.372621   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:55:00.372933   71146 buildroot.go:166] provisioning hostname "embed-certs-940222"
	I0717 01:55:00.372957   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:55:00.373131   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:55:00.374934   71146 machine.go:97] duration metric: took 4m37.428393808s to provisionDockerMachine
	I0717 01:55:00.374969   71146 fix.go:56] duration metric: took 4m37.449104762s for fixHost
	I0717 01:55:00.374974   71146 start.go:83] releasing machines lock for "embed-certs-940222", held for 4m37.449121677s
	W0717 01:55:00.374996   71146 start.go:714] error starting host: provision: host is not running
	W0717 01:55:00.375080   71146 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 01:55:00.375088   71146 start.go:729] Will try again in 5 seconds ...
	I0717 01:55:01.590292   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting to get IP...
	I0717 01:55:01.591187   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:01.591589   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:01.591657   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:01.591578   72583 retry.go:31] will retry after 266.165899ms: waiting for machine to come up
	I0717 01:55:01.859307   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:01.859724   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:01.859751   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:01.859695   72583 retry.go:31] will retry after 282.941451ms: waiting for machine to come up
	I0717 01:55:02.144389   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:02.144756   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:02.144787   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:02.144701   72583 retry.go:31] will retry after 327.203414ms: waiting for machine to come up
	I0717 01:55:02.473217   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:02.473681   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:02.473705   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:02.473606   72583 retry.go:31] will retry after 553.917043ms: waiting for machine to come up
	I0717 01:55:03.029379   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:03.029762   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:03.029783   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:03.029738   72583 retry.go:31] will retry after 617.312209ms: waiting for machine to come up
	I0717 01:55:03.648372   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:03.648701   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:03.648733   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:03.648670   72583 retry.go:31] will retry after 641.28503ms: waiting for machine to come up
	I0717 01:55:04.291493   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:04.291986   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:04.292019   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:04.291870   72583 retry.go:31] will retry after 1.133455116s: waiting for machine to come up
	I0717 01:55:05.426672   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:05.426943   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:05.426972   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:05.426892   72583 retry.go:31] will retry after 1.00384113s: waiting for machine to come up
	I0717 01:55:05.376907   71146 start.go:360] acquireMachinesLock for embed-certs-940222: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:55:06.432146   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:06.432502   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:06.432525   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:06.432477   72583 retry.go:31] will retry after 1.472142907s: waiting for machine to come up
	I0717 01:55:07.906974   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:07.907407   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:07.907437   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:07.907336   72583 retry.go:31] will retry after 1.775986179s: waiting for machine to come up
	I0717 01:55:09.685396   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:09.685792   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:09.685822   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:09.685756   72583 retry.go:31] will retry after 2.663700716s: waiting for machine to come up
	I0717 01:55:12.351616   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:12.351985   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:12.352017   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:12.351921   72583 retry.go:31] will retry after 2.409004894s: waiting for machine to come up
	I0717 01:55:14.763493   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:14.763859   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:14.763876   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:14.763828   72583 retry.go:31] will retry after 3.049843419s: waiting for machine to come up
	I0717 01:55:19.031713   71603 start.go:364] duration metric: took 4m8.751453112s to acquireMachinesLock for "no-preload-391501"
	I0717 01:55:19.031779   71603 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:55:19.031787   71603 fix.go:54] fixHost starting: 
	I0717 01:55:19.032306   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:19.032352   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:19.049376   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41829
	I0717 01:55:19.049877   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:19.050387   71603 main.go:141] libmachine: Using API Version  1
	I0717 01:55:19.050409   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:19.050752   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:19.050935   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:19.051104   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 01:55:19.052805   71603 fix.go:112] recreateIfNeeded on no-preload-391501: state=Stopped err=<nil>
	I0717 01:55:19.052832   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	W0717 01:55:19.052989   71603 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:55:19.056667   71603 out.go:177] * Restarting existing kvm2 VM for "no-preload-391501" ...
	I0717 01:55:19.058078   71603 main.go:141] libmachine: (no-preload-391501) Calling .Start
	I0717 01:55:19.058314   71603 main.go:141] libmachine: (no-preload-391501) Ensuring networks are active...
	I0717 01:55:19.059126   71603 main.go:141] libmachine: (no-preload-391501) Ensuring network default is active
	I0717 01:55:19.059466   71603 main.go:141] libmachine: (no-preload-391501) Ensuring network mk-no-preload-391501 is active
	I0717 01:55:19.059958   71603 main.go:141] libmachine: (no-preload-391501) Getting domain xml...
	I0717 01:55:19.060746   71603 main.go:141] libmachine: (no-preload-391501) Creating domain...
	I0717 01:55:17.816307   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.816746   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Found IP for machine: 192.168.39.170
	I0717 01:55:17.816765   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Reserving static IP address...
	I0717 01:55:17.816776   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has current primary IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.817337   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Reserved static IP address: 192.168.39.170
	I0717 01:55:17.817366   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for SSH to be available...
	I0717 01:55:17.817389   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-738184", mac: "52:54:00:e6:fe:fe", ip: "192.168.39.170"} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:17.817420   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | skip adding static IP to network mk-default-k8s-diff-port-738184 - found existing host DHCP lease matching {name: "default-k8s-diff-port-738184", mac: "52:54:00:e6:fe:fe", ip: "192.168.39.170"}
	I0717 01:55:17.817443   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Getting to WaitForSSH function...
	I0717 01:55:17.819693   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.820022   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:17.820056   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.820171   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Using SSH client type: external
	I0717 01:55:17.820203   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa (-rw-------)
	I0717 01:55:17.820245   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.170 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:55:17.820259   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | About to run SSH command:
	I0717 01:55:17.820280   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | exit 0
	I0717 01:55:17.942987   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | SSH cmd err, output: <nil>: 
	I0717 01:55:17.943370   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetConfigRaw
	I0717 01:55:17.943945   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetIP
	I0717 01:55:17.946638   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.946993   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:17.947021   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.947268   71522 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/config.json ...
	I0717 01:55:17.947479   71522 machine.go:94] provisionDockerMachine start ...
	I0717 01:55:17.947497   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:17.947732   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:17.950032   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.950367   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:17.950397   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.950489   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:17.950664   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:17.950827   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:17.950959   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:17.951108   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:17.951300   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:17.951311   71522 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:55:18.051147   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:55:18.051180   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetMachineName
	I0717 01:55:18.051421   71522 buildroot.go:166] provisioning hostname "default-k8s-diff-port-738184"
	I0717 01:55:18.051456   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetMachineName
	I0717 01:55:18.051655   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.054480   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.055024   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.055053   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.055262   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.055473   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.055643   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.055783   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.055928   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:18.056077   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:18.056089   71522 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-738184 && echo "default-k8s-diff-port-738184" | sudo tee /etc/hostname
	I0717 01:55:18.170268   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-738184
	
	I0717 01:55:18.170299   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.173037   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.173337   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.173369   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.173485   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.173673   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.173851   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.173957   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.174110   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:18.174322   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:18.174349   71522 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-738184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-738184/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-738184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:55:18.279963   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:55:18.279997   71522 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:55:18.280030   71522 buildroot.go:174] setting up certificates
	I0717 01:55:18.280042   71522 provision.go:84] configureAuth start
	I0717 01:55:18.280054   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetMachineName
	I0717 01:55:18.280393   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetIP
	I0717 01:55:18.282887   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.283201   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.283231   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.283370   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.285399   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.285662   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.285691   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.285795   71522 provision.go:143] copyHostCerts
	I0717 01:55:18.285865   71522 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:55:18.285884   71522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:55:18.285971   71522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:55:18.286084   71522 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:55:18.286095   71522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:55:18.286129   71522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:55:18.286205   71522 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:55:18.286214   71522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:55:18.286247   71522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:55:18.286313   71522 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-738184 san=[127.0.0.1 192.168.39.170 default-k8s-diff-port-738184 localhost minikube]
	I0717 01:55:18.386547   71522 provision.go:177] copyRemoteCerts
	I0717 01:55:18.386627   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:55:18.386658   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.388930   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.389292   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.389322   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.389465   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.389662   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.389804   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.389944   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:18.469031   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:55:18.493607   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0717 01:55:18.517024   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 01:55:18.539757   71522 provision.go:87] duration metric: took 259.702663ms to configureAuth
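	The server certificate generated during configureAuth embeds the SANs listed above (127.0.0.1, 192.168.39.170, the profile name, localhost, minikube) and is copied to /etc/docker on the guest. A quick way to confirm what actually landed there, as a sketch run over SSH on the guest (assumes OpenSSL 1.1.1+ for the -ext flag):

	  # Show the subject and SANs baked into the provisioned server cert
	  openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName
	  # Confirm it chains to the CA that was copied alongside it
	  openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem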
	I0717 01:55:18.539793   71522 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:55:18.540064   71522 config.go:182] Loaded profile config "default-k8s-diff-port-738184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:55:18.540178   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.542831   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.543174   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.543196   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.543388   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.543599   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.543843   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.544011   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.544172   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:18.544343   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:18.544362   71522 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:55:18.804633   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:55:18.804690   71522 machine.go:97] duration metric: took 857.197634ms to provisionDockerMachine
	I0717 01:55:18.804706   71522 start.go:293] postStartSetup for "default-k8s-diff-port-738184" (driver="kvm2")
	I0717 01:55:18.804720   71522 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:55:18.804743   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:18.805049   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:55:18.805073   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.807835   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.808127   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.808147   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.808319   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.808497   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.808670   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.808823   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:18.889297   71522 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:55:18.893587   71522 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:55:18.893615   71522 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:55:18.893694   71522 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:55:18.893779   71522 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:55:18.893886   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:55:18.903319   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:18.927700   71522 start.go:296] duration metric: took 122.979492ms for postStartSetup
	I0717 01:55:18.927748   71522 fix.go:56] duration metric: took 18.552636525s for fixHost
	I0717 01:55:18.927775   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.930483   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.930768   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.930791   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.931004   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.931192   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.931361   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.931511   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.931677   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:18.931873   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:18.931887   71522 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:55:19.031515   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181319.004563133
	
	I0717 01:55:19.031541   71522 fix.go:216] guest clock: 1721181319.004563133
	I0717 01:55:19.031552   71522 fix.go:229] Guest: 2024-07-17 01:55:19.004563133 +0000 UTC Remote: 2024-07-17 01:55:18.927754613 +0000 UTC m=+253.390645105 (delta=76.80852ms)
	I0717 01:55:19.031611   71522 fix.go:200] guest clock delta is within tolerance: 76.80852ms
	I0717 01:55:19.031623   71522 start.go:83] releasing machines lock for "default-k8s-diff-port-738184", held for 18.656540342s
	I0717 01:55:19.031661   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:19.031940   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetIP
	I0717 01:55:19.034537   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.034881   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:19.034911   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.035036   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:19.035557   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:19.035750   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:19.035822   71522 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:55:19.035875   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:19.036000   71522 ssh_runner.go:195] Run: cat /version.json
	I0717 01:55:19.036027   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:19.038509   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.038860   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:19.038892   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.038935   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.038982   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:19.039156   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:19.039328   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:19.039361   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:19.039389   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.039488   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:19.039537   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:19.039702   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:19.039835   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:19.040047   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:19.140208   71522 ssh_runner.go:195] Run: systemctl --version
	I0717 01:55:19.146454   71522 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:55:19.293584   71522 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:55:19.300750   71522 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:55:19.300817   71522 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:55:19.321596   71522 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:55:19.321621   71522 start.go:495] detecting cgroup driver to use...
	I0717 01:55:19.321684   71522 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:55:19.337664   71522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:55:19.351856   71522 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:55:19.351922   71522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:55:19.366355   71522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:55:19.380735   71522 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:55:19.495916   71522 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:55:19.646426   71522 docker.go:233] disabling docker service ...
	I0717 01:55:19.646501   71522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:55:19.665764   71522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:55:19.683893   71522 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:55:19.814704   71522 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:55:19.958389   71522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:55:19.973223   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:55:19.992869   71522 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 01:55:19.992937   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.003696   71522 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:55:20.003762   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.014415   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.025303   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.036715   71522 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:55:20.047872   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.059666   71522 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.079479   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
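	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup driver, conmon cgroup, and unprivileged-port sysctl settings. A sketch for spot-checking the result on the guest (expected values are approximate, reconstructed from the commands above):

	  grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	  # Expected, roughly:
	  #   pause_image = "registry.k8s.io/pause:3.9"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #   default_sysctls = [
	  #     "net.ipv4.ip_unprivileged_port_start=0",
	  #   ]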
	I0717 01:55:20.092424   71522 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:55:20.103225   71522 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:55:20.103284   71522 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:55:20.120620   71522 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
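	The failed sysctl probe above is expected when br_netfilter is not yet loaded; the subsequent modprobe and the ip_forward write address that. A minimal re-check on the guest, as a sketch:

	  sudo modprobe br_netfilter
	  sysctl net.bridge.bridge-nf-call-iptables   # the key resolves once the module is loaded
	  cat /proc/sys/net/ipv4/ip_forward           # expect 1 after the echo above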
	I0717 01:55:20.136439   71522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:20.284796   71522 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:55:20.427605   71522 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:55:20.427698   71522 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:55:20.433477   71522 start.go:563] Will wait 60s for crictl version
	I0717 01:55:20.433537   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:55:20.437399   71522 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:55:20.479192   71522 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:55:20.479289   71522 ssh_runner.go:195] Run: crio --version
	I0717 01:55:20.507655   71522 ssh_runner.go:195] Run: crio --version
	I0717 01:55:20.537084   71522 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 01:55:20.538435   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetIP
	I0717 01:55:20.541200   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:20.541493   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:20.541531   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:20.541772   71522 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 01:55:20.546261   71522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:55:20.559802   71522 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-738184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.2 ClusterName:default-k8s-diff-port-738184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:55:20.559946   71522 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:55:20.560001   71522 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:55:20.381503   71603 main.go:141] libmachine: (no-preload-391501) Waiting to get IP...
	I0717 01:55:20.382632   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:20.383105   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:20.383210   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:20.383077   72724 retry.go:31] will retry after 193.198351ms: waiting for machine to come up
	I0717 01:55:20.577611   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:20.578117   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:20.578145   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:20.578067   72724 retry.go:31] will retry after 254.406992ms: waiting for machine to come up
	I0717 01:55:20.834633   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:20.835088   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:20.835116   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:20.835057   72724 retry.go:31] will retry after 459.446617ms: waiting for machine to come up
	I0717 01:55:21.295939   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:21.296384   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:21.296409   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:21.296343   72724 retry.go:31] will retry after 515.654185ms: waiting for machine to come up
	I0717 01:55:21.813613   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:21.814140   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:21.814178   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:21.814104   72724 retry.go:31] will retry after 652.322198ms: waiting for machine to come up
	I0717 01:55:22.468223   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:22.468858   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:22.468897   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:22.468774   72724 retry.go:31] will retry after 767.220835ms: waiting for machine to come up
	I0717 01:55:23.237341   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:23.237685   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:23.237716   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:23.237633   72724 retry.go:31] will retry after 1.083873631s: waiting for machine to come up
	I0717 01:55:24.323463   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:24.323983   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:24.324011   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:24.323934   72724 retry.go:31] will retry after 1.255667305s: waiting for machine to come up
	I0717 01:55:20.597329   71522 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 01:55:20.597409   71522 ssh_runner.go:195] Run: which lz4
	I0717 01:55:20.602100   71522 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 01:55:20.606863   71522 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:55:20.606900   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 01:55:22.053002   71522 crio.go:462] duration metric: took 1.450939378s to copy over tarball
	I0717 01:55:22.053071   71522 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:55:24.356349   71522 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.303245698s)
	I0717 01:55:24.356378   71522 crio.go:469] duration metric: took 2.303353381s to extract the tarball
	I0717 01:55:24.356385   71522 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 01:55:24.402866   71522 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:55:24.446681   71522 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:55:24.446709   71522 cache_images.go:84] Images are preloaded, skipping loading
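	The 395 MB preload tarball transferred and extracted above is the host-side cache referenced in the scp step. To peek at its contents without going through a cluster start, a sketch on the host (assumes the lz4 CLI is installed):

	  lz4 -dc /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 \
	    | tar -t | head
	  # On the guest, these are the images that "sudo crictl images --output json" just reported as preloaded.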
	I0717 01:55:24.446720   71522 kubeadm.go:934] updating node { 192.168.39.170 8444 v1.30.2 crio true true} ...
	I0717 01:55:24.446844   71522 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-738184 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-738184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:55:24.446931   71522 ssh_runner.go:195] Run: crio config
	I0717 01:55:24.499717   71522 cni.go:84] Creating CNI manager for ""
	I0717 01:55:24.499744   71522 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:55:24.499759   71522 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:55:24.499787   71522 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.170 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-738184 NodeName:default-k8s-diff-port-738184 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:55:24.499965   71522 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.170
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-738184"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:55:24.500039   71522 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 01:55:24.510488   71522 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:55:24.510568   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:55:24.520830   71522 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0717 01:55:24.538018   71522 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:55:24.556287   71522 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
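	Once the generated config is staged at /var/tmp/minikube/kubeadm.yaml.new as above, it can be sanity-checked with kubeadm's own validator before the init phases run. A sketch on the guest, using the minikube-staged binary (kubeadm config validate exists in v1.26 and later):

	  sudo /var/lib/minikube/binaries/v1.30.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new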
	I0717 01:55:24.574973   71522 ssh_runner.go:195] Run: grep 192.168.39.170	control-plane.minikube.internal$ /etc/hosts
	I0717 01:55:24.579058   71522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.170	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:55:24.591752   71522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:24.712285   71522 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:55:24.729387   71522 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184 for IP: 192.168.39.170
	I0717 01:55:24.729411   71522 certs.go:194] generating shared ca certs ...
	I0717 01:55:24.729432   71522 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:55:24.729596   71522 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:55:24.729650   71522 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:55:24.729662   71522 certs.go:256] generating profile certs ...
	I0717 01:55:24.729776   71522 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/client.key
	I0717 01:55:24.729847   71522 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/apiserver.key.44902a6f
	I0717 01:55:24.729907   71522 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/proxy-client.key
	I0717 01:55:24.730044   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:55:24.730086   71522 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:55:24.730099   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:55:24.730135   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:55:24.730183   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:55:24.730222   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:55:24.730277   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:24.731142   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:55:24.762240   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:55:24.788746   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:55:24.825379   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:55:24.853821   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 01:55:24.887105   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:55:24.910834   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:55:24.934566   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 01:55:24.959709   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:55:24.983722   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:55:25.007312   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:55:25.031576   71522 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:55:25.049348   71522 ssh_runner.go:195] Run: openssl version
	I0717 01:55:25.055410   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:55:25.066104   71522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:55:25.070616   71522 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:55:25.070675   71522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:55:25.076604   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:55:25.087284   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:55:25.098383   71522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:55:25.103262   71522 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:55:25.103331   71522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:55:25.109170   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:55:25.119940   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:55:25.130829   71522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:25.135659   71522 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:25.135734   71522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:25.141583   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
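	The symlink names created above (51391683.0, 3ec20f2e.0, b5213941.0) are the OpenSSL subject hashes of the corresponding CA files, which is how the system trust store locates them. A sketch reproducing the mapping for the minikube CA:

	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  ls -l "/etc/ssl/certs/${h}.0"   # b5213941.0 in this run, pointing at minikubeCA.pem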
	I0717 01:55:25.152770   71522 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:55:25.157395   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:55:25.163543   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:55:25.169580   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:55:25.175754   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:55:25.181771   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:55:25.187935   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
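	Each of the openssl runs above uses -checkend 86400, which exits non-zero if the certificate would expire within the next 86400 seconds (24 hours), so a zero exit means the control-plane certs are good for at least another day. An equivalent standalone check, as a sketch:

	  sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	    && echo "valid for at least 24h"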
	I0717 01:55:25.193614   71522 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-738184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.2 ClusterName:default-k8s-diff-port-738184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:55:25.193727   71522 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:55:25.193770   71522 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:55:25.230871   71522 cri.go:89] found id: ""
	I0717 01:55:25.230954   71522 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:55:25.241336   71522 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:55:25.241357   71522 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:55:25.241410   71522 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:55:25.251637   71522 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:55:25.253030   71522 kubeconfig.go:125] found "default-k8s-diff-port-738184" server: "https://192.168.39.170:8444"
	I0717 01:55:25.255926   71522 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:55:25.265878   71522 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.170
	I0717 01:55:25.265915   71522 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:55:25.265927   71522 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:55:25.265982   71522 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:55:25.305929   71522 cri.go:89] found id: ""
	I0717 01:55:25.306015   71522 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:55:25.322581   71522 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:55:25.332334   71522 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:55:25.332356   71522 kubeadm.go:157] found existing configuration files:
	
	I0717 01:55:25.332407   71522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 01:55:25.342132   71522 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:55:25.342193   71522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:55:25.351628   71522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 01:55:25.360765   71522 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:55:25.360833   71522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:55:25.370167   71522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 01:55:25.379057   71522 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:55:25.379124   71522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:55:25.389470   71522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 01:55:25.399142   71522 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:55:25.399210   71522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:55:25.409452   71522 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:55:25.421509   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:25.545698   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:25.580838   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:25.581295   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:25.581322   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:25.581247   72724 retry.go:31] will retry after 1.354947672s: waiting for machine to come up
	I0717 01:55:26.937260   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:26.937746   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:26.937774   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:26.937696   72724 retry.go:31] will retry after 1.818074273s: waiting for machine to come up
	I0717 01:55:28.758015   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:28.758489   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:28.758517   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:28.758449   72724 retry.go:31] will retry after 2.782465023s: waiting for machine to come up
	I0717 01:55:26.599380   71522 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.053644988s)
	I0717 01:55:26.599416   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:26.807765   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:26.878767   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:26.965940   71522 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:55:26.966023   71522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:55:27.466587   71522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:55:27.966138   71522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:55:27.983649   71522 api_server.go:72] duration metric: took 1.017709312s to wait for apiserver process to appear ...
	I0717 01:55:27.983678   71522 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:55:27.983701   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:27.984214   71522 api_server.go:269] stopped: https://192.168.39.170:8444/healthz: Get "https://192.168.39.170:8444/healthz": dial tcp 192.168.39.170:8444: connect: connection refused
	I0717 01:55:28.483780   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:30.862416   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:55:30.862464   71522 api_server.go:103] status: https://192.168.39.170:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:55:30.862479   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:30.869667   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:55:30.869718   71522 api_server.go:103] status: https://192.168.39.170:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:55:30.983899   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:30.988670   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:55:30.988704   71522 api_server.go:103] status: https://192.168.39.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:55:31.484233   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:31.488939   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:55:31.488978   71522 api_server.go:103] status: https://192.168.39.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:55:31.984611   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:31.988738   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 200:
	ok
	I0717 01:55:31.996182   71522 api_server.go:141] control plane version: v1.30.2
	I0717 01:55:31.996207   71522 api_server.go:131] duration metric: took 4.012523131s to wait for apiserver health ...
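[Editor's note] The api_server.go lines above poll https://192.168.39.170:8444/healthz until it returns 200, tolerating the anonymous 403s and the 500s reported while the rbac/bootstrap-roles post-start hook is still failing. The following is a minimal sketch of that kind of poll, not minikube's actual implementation: the URL and the roughly 500ms cadence come from the log, while the timeout and everything else are assumptions.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The apiserver's serving certificate is not trusted by this anonymous
	// client, so verification is skipped; that is also why the log shows
	// "system:anonymous" 403s until the RBAC bootstrap roles exist.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	url := "https://192.168.39.170:8444/healthz"
	deadline := time.Now().Add(2 * time.Minute) // assumed deadline, not minikube's
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("not reachable yet:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // healthz finally reported "ok", as at 01:55:31.988 above
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}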
	I0717 01:55:31.996216   71522 cni.go:84] Creating CNI manager for ""
	I0717 01:55:31.996222   71522 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:55:31.998122   71522 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:55:31.999536   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:55:32.010501   71522 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
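[Editor's note] The step above writes a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist on the guest. The exact contents are not shown in the log; the sketch below writes a representative bridge+portmap conflist of the same general shape, purely as an illustration of what such a file contains (the subnet and plugin options are assumptions, and it writes to the current directory to stay harmless).

package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Analogous in spirit to the "scp memory --> /etc/cni/net.d/1-k8s.conflist"
	// step in the log, but written locally instead of over SSH.
	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}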
	I0717 01:55:32.030227   71522 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:55:32.039923   71522 system_pods.go:59] 9 kube-system pods found
	I0717 01:55:32.039954   71522 system_pods.go:61] "coredns-7db6d8ff4d-9w26c" [530f4d52-5fdc-47c4-8919-44430bf71e05] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:55:32.039988   71522 system_pods.go:61] "coredns-7db6d8ff4d-js7sn" [fe3951c5-d98d-4221-b71c-fc4f548b31d8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:55:32.039998   71522 system_pods.go:61] "etcd-default-k8s-diff-port-738184" [a08737dd-a140-4c63-bf0f-9d3527d49de0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 01:55:32.040003   71522 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-738184" [b24a4ca2-48f9-4603-b6e4-e3fb1ca58e40] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 01:55:32.040013   71522 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-738184" [dd62e27b-a3f1-4da6-b968-6be40743a8fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 01:55:32.040020   71522 system_pods.go:61] "kube-proxy-c4n94" [97eee4e8-4f36-412f-9064-57515ab6e932] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 01:55:32.040033   71522 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-738184" [eb87eec9-7533-4704-bd5a-7075d9d8c2f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 01:55:32.040041   71522 system_pods.go:61] "metrics-server-569cc877fc-gcjkt" [1859140e-a901-43c2-8c04-b4f8eb63e774] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:55:32.040046   71522 system_pods.go:61] "storage-provisioner" [b36904ec-ef3f-4aee-9276-fe1285e10876] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 01:55:32.040053   71522 system_pods.go:74] duration metric: took 9.802793ms to wait for pod list to return data ...
	I0717 01:55:32.040060   71522 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:55:32.043233   71522 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:55:32.043259   71522 node_conditions.go:123] node cpu capacity is 2
	I0717 01:55:32.043270   71522 node_conditions.go:105] duration metric: took 3.202451ms to run NodePressure ...
	I0717 01:55:32.043285   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:32.350948   71522 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 01:55:32.356119   71522 kubeadm.go:739] kubelet initialised
	I0717 01:55:32.356143   71522 kubeadm.go:740] duration metric: took 5.164025ms waiting for restarted kubelet to initialise ...
	I0717 01:55:32.356153   71522 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:55:32.361501   71522 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.366747   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.366770   71522 pod_ready.go:81] duration metric: took 5.246954ms for pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.366778   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.366785   71522 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.371049   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.371066   71522 pod_ready.go:81] duration metric: took 4.275157ms for pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.371073   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.371078   71522 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.375338   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.375361   71522 pod_ready.go:81] duration metric: took 4.27092ms for pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.375369   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.375379   71522 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.434545   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.434583   71522 pod_ready.go:81] duration metric: took 59.196717ms for pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.434593   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.434601   71522 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.836139   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.836178   71522 pod_ready.go:81] duration metric: took 401.568097ms for pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.836194   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.836212   71522 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c4n94" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:33.234032   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "kube-proxy-c4n94" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:33.234060   71522 pod_ready.go:81] duration metric: took 397.83937ms for pod "kube-proxy-c4n94" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:33.234071   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "kube-proxy-c4n94" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:33.234076   71522 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:33.633953   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:33.633981   71522 pod_ready.go:81] duration metric: took 399.893316ms for pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:33.633992   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:33.633998   71522 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:34.034511   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:34.034560   71522 pod_ready.go:81] duration metric: took 400.544281ms for pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:34.034574   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:34.034583   71522 pod_ready.go:38] duration metric: took 1.678420144s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
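[Editor's note] The pod_ready.go lines above skip every system pod because the hosting node still reports Ready=False right after the kubelet restart. The sketch below shows roughly what those checks amount to when done directly with client-go; the kubeconfig path, node name, and pod name are taken from this log, and the rest is an illustrative assumption rather than minikube's helper code.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19264-3908/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Node readiness gates the pod checks: while the node is not Ready,
	// the log above records each pod as "skipping!".
	node, err := cs.CoreV1().Nodes().Get(ctx, "default-k8s-diff-port-738184", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	nodeReady := false
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
			nodeReady = true
		}
	}

	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-9w26c", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	podReady := false
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			podReady = true
		}
	}
	fmt.Printf("node Ready=%v, pod Ready=%v\n", nodeReady, podReady)
}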
	I0717 01:55:34.034599   71522 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 01:55:34.049235   71522 ops.go:34] apiserver oom_adj: -16
	I0717 01:55:34.049261   71522 kubeadm.go:597] duration metric: took 8.807897214s to restartPrimaryControlPlane
	I0717 01:55:34.049272   71522 kubeadm.go:394] duration metric: took 8.855664434s to StartCluster
	I0717 01:55:34.049292   71522 settings.go:142] acquiring lock: {Name:mk284e98550e98148329d8d8958a44beebf5635c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:55:34.049374   71522 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:55:34.050992   71522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:55:34.051239   71522 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.170 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:55:34.051307   71522 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 01:55:34.051409   71522 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-738184"
	I0717 01:55:34.051454   71522 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-738184"
	W0717 01:55:34.051465   71522 addons.go:243] addon storage-provisioner should already be in state true
	I0717 01:55:34.051497   71522 host.go:66] Checking if "default-k8s-diff-port-738184" exists ...
	I0717 01:55:34.051511   71522 config.go:182] Loaded profile config "default-k8s-diff-port-738184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:55:34.051498   71522 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-738184"
	I0717 01:55:34.051502   71522 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-738184"
	I0717 01:55:34.051564   71522 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-738184"
	I0717 01:55:34.051587   71522 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-738184"
	W0717 01:55:34.051612   71522 addons.go:243] addon metrics-server should already be in state true
	I0717 01:55:34.051686   71522 host.go:66] Checking if "default-k8s-diff-port-738184" exists ...
	I0717 01:55:34.051803   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.051845   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.052097   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.052151   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.052331   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.052383   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.054788   71522 out.go:177] * Verifying Kubernetes components...
	I0717 01:55:34.056293   71522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:34.067345   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39845
	I0717 01:55:34.067345   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44423
	I0717 01:55:34.067821   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.067911   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.068370   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.068390   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.068515   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.068526   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43231
	I0717 01:55:34.068535   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.068709   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.068991   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.068997   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.069278   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.069320   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.069529   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.069560   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.069611   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.069629   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.069977   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.070184   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:34.074013   71522 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-738184"
	W0717 01:55:34.074036   71522 addons.go:243] addon default-storageclass should already be in state true
	I0717 01:55:34.074062   71522 host.go:66] Checking if "default-k8s-diff-port-738184" exists ...
	I0717 01:55:34.074422   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.074463   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.085256   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43497
	I0717 01:55:34.085694   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35251
	I0717 01:55:34.085716   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.086207   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.086378   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.086402   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.086785   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.086945   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:34.086947   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.086999   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.087327   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.087624   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:34.088695   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:34.089320   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:34.090932   71522 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 01:55:34.090932   71522 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:31.543587   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:31.544073   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:31.544102   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:31.544012   72724 retry.go:31] will retry after 2.898539616s: waiting for machine to come up
	I0717 01:55:34.444315   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:34.444828   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:34.444870   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:34.444790   72724 retry.go:31] will retry after 4.252719028s: waiting for machine to come up
	I0717 01:55:34.092892   71522 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:55:34.092910   71522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 01:55:34.092926   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:34.092985   71522 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 01:55:34.092993   71522 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 01:55:34.093003   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:34.095340   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35765
	I0717 01:55:34.095840   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.096397   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.096434   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.096567   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.096819   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.096979   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.097029   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:34.097058   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.097498   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.097536   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.097881   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:34.097897   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:34.097899   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:34.097923   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.098075   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:34.098105   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:34.098286   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:34.098320   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:34.098449   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:34.098461   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:34.113190   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43997
	I0717 01:55:34.113544   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.114033   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.114059   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.114375   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.114575   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:34.116332   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:34.116544   71522 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 01:55:34.116563   71522 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 01:55:34.116583   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:34.119693   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.119992   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:34.120017   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.120457   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:34.120722   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:34.120965   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:34.121652   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:34.247964   71522 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:55:34.266521   71522 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-738184" to be "Ready" ...
	I0717 01:55:34.370296   71522 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 01:55:34.370318   71522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 01:55:34.380102   71522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:55:34.394620   71522 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 01:55:34.394639   71522 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 01:55:34.409328   71522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 01:55:34.416653   71522 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:55:34.416684   71522 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 01:55:34.445296   71522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:55:35.605781   71522 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.196419762s)
	I0717 01:55:35.605843   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.605858   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.605854   71522 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.160520147s)
	I0717 01:55:35.605778   71522 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.225640358s)
	I0717 01:55:35.605929   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.605944   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.605988   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.606007   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.606293   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.606300   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.606309   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.606315   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.606319   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.606329   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.606333   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.606349   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.606357   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.606367   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.606371   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.606398   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.606410   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.606424   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.606640   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.607811   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.607852   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.607866   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.607874   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.607892   71522 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-738184"
	I0717 01:55:35.607815   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.607878   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.607829   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.607959   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.607842   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.613691   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.613717   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.614019   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.614025   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.614081   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.615871   71522 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0717 01:55:38.700025   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.700533   71603 main.go:141] libmachine: (no-preload-391501) Found IP for machine: 192.168.61.174
	I0717 01:55:38.700555   71603 main.go:141] libmachine: (no-preload-391501) Reserving static IP address...
	I0717 01:55:38.700572   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has current primary IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.701013   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "no-preload-391501", mac: "52:54:00:e6:6b:1b", ip: "192.168.61.174"} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.701033   71603 main.go:141] libmachine: (no-preload-391501) Reserved static IP address: 192.168.61.174
	I0717 01:55:38.701049   71603 main.go:141] libmachine: (no-preload-391501) DBG | skip adding static IP to network mk-no-preload-391501 - found existing host DHCP lease matching {name: "no-preload-391501", mac: "52:54:00:e6:6b:1b", ip: "192.168.61.174"}
	I0717 01:55:38.701064   71603 main.go:141] libmachine: (no-preload-391501) DBG | Getting to WaitForSSH function...
	I0717 01:55:38.701080   71603 main.go:141] libmachine: (no-preload-391501) Waiting for SSH to be available...
	I0717 01:55:38.703218   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.703577   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.703605   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.703755   71603 main.go:141] libmachine: (no-preload-391501) DBG | Using SSH client type: external
	I0717 01:55:38.703773   71603 main.go:141] libmachine: (no-preload-391501) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa (-rw-------)
	I0717 01:55:38.703791   71603 main.go:141] libmachine: (no-preload-391501) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:55:38.703809   71603 main.go:141] libmachine: (no-preload-391501) DBG | About to run SSH command:
	I0717 01:55:38.703817   71603 main.go:141] libmachine: (no-preload-391501) DBG | exit 0
	I0717 01:55:38.827046   71603 main.go:141] libmachine: (no-preload-391501) DBG | SSH cmd err, output: <nil>: 
	I0717 01:55:38.827413   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetConfigRaw
	I0717 01:55:38.828102   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetIP
	I0717 01:55:38.831229   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.831782   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.831814   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.832140   71603 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/config.json ...
	I0717 01:55:38.832347   71603 machine.go:94] provisionDockerMachine start ...
	I0717 01:55:38.832367   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:38.832574   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:38.835302   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.835710   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.835735   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.835954   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:38.836173   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:38.836345   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:38.836521   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:38.836691   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:38.836928   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:38.836947   71603 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:55:38.943173   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:55:38.943213   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetMachineName
	I0717 01:55:38.943491   71603 buildroot.go:166] provisioning hostname "no-preload-391501"
	I0717 01:55:38.943513   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetMachineName
	I0717 01:55:38.943725   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:38.946396   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.946872   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.946900   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.946980   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:38.947164   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:38.947339   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:38.947518   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:38.947695   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:38.947849   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:38.947869   71603 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-391501 && echo "no-preload-391501" | sudo tee /etc/hostname
	I0717 01:55:39.070382   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-391501
	
	I0717 01:55:39.070429   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.073539   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.073904   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.073941   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.074203   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:39.074426   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.074624   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.074880   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:39.075132   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:39.075348   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:39.075373   71603 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-391501' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-391501/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-391501' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:55:39.195604   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:55:39.195634   71603 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:55:39.195649   71603 buildroot.go:174] setting up certificates
	I0717 01:55:39.195656   71603 provision.go:84] configureAuth start
	I0717 01:55:39.195665   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetMachineName
	I0717 01:55:39.195952   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetIP
	I0717 01:55:39.198409   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.198792   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.198822   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.198996   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.201509   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.201870   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.201901   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.202078   71603 provision.go:143] copyHostCerts
	I0717 01:55:39.202153   71603 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:55:39.202166   71603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:55:39.202221   71603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:55:39.202313   71603 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:55:39.202320   71603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:55:39.202339   71603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:55:39.202387   71603 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:55:39.202394   71603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:55:39.202410   71603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:55:39.202456   71603 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.no-preload-391501 san=[127.0.0.1 192.168.61.174 localhost minikube no-preload-391501]
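The provision step above generates a per-machine server certificate signed by the minikube CA, using the org and SAN list shown in the log. Purely as an illustration (minikube does this in Go with its own cert helpers, not by shelling out to openssl), a roughly equivalent manual sequence, assuming ca.pem, ca-key.pem and server-key.pem already exist, would be:

	# illustration only; not the command minikube actually runs
	openssl req -new -key server-key.pem -subj "/O=jenkins.no-preload-391501" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.61.174,DNS:localhost,DNS:minikube,DNS:no-preload-391501") \
	  -out server.pem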
	I0717 01:55:39.550166   71603 provision.go:177] copyRemoteCerts
	I0717 01:55:39.550224   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:55:39.550249   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.552616   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.552990   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.553020   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.553135   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:39.553298   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.553460   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:39.553559   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 01:55:39.638467   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:55:39.664166   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 01:55:39.689416   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 01:55:39.714130   71603 provision.go:87] duration metric: took 518.463378ms to configureAuth
	I0717 01:55:39.714159   71603 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:55:39.714362   71603 config.go:182] Loaded profile config "no-preload-391501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:55:39.714440   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.717269   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.717694   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.717722   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.717880   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:39.718080   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.718240   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.718393   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:39.718621   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:39.718793   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:39.718809   71603 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:55:39.982066   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
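The SSH command above writes a one-line sysconfig drop-in for cri-o and then restarts the service; based on the logged command and its echoed output, the resulting file should look like:

	# /etc/sysconfig/crio.minikube (contents as echoed in the log above)
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '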
	
	I0717 01:55:39.982095   71603 machine.go:97] duration metric: took 1.149734372s to provisionDockerMachine
	I0717 01:55:39.982110   71603 start.go:293] postStartSetup for "no-preload-391501" (driver="kvm2")
	I0717 01:55:39.982127   71603 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:55:39.982147   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:39.982429   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:55:39.982445   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.984935   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.985232   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.985269   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.985372   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:39.985553   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.985793   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:39.986010   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 01:55:40.074439   71603 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:55:40.079515   71603 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:55:40.079541   71603 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:55:40.079617   71603 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:55:40.079708   71603 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:55:40.079831   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:55:40.090783   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:40.121212   71603 start.go:296] duration metric: took 139.087761ms for postStartSetup
	I0717 01:55:40.121257   71603 fix.go:56] duration metric: took 21.089468917s for fixHost
	I0717 01:55:40.121281   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:40.124208   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.124517   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:40.124545   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.124753   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:40.124940   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:40.125119   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:40.125269   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:40.125430   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:40.125626   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:40.125638   71603 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:55:40.239538   71929 start.go:364] duration metric: took 3m52.741834986s to acquireMachinesLock for "old-k8s-version-901761"
	I0717 01:55:40.239610   71929 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:55:40.239618   71929 fix.go:54] fixHost starting: 
	I0717 01:55:40.240021   71929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:40.240054   71929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:40.257464   71929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39993
	I0717 01:55:40.257866   71929 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:40.258287   71929 main.go:141] libmachine: Using API Version  1
	I0717 01:55:40.258311   71929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:40.258672   71929 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:40.258871   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:40.259041   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetState
	I0717 01:55:40.260529   71929 fix.go:112] recreateIfNeeded on old-k8s-version-901761: state=Stopped err=<nil>
	I0717 01:55:40.260568   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	W0717 01:55:40.260721   71929 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:55:40.262590   71929 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-901761" ...
	I0717 01:55:35.617123   71522 addons.go:510] duration metric: took 1.565817066s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0717 01:55:36.270109   71522 node_ready.go:53] node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:38.270489   71522 node_ready.go:53] node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:40.270966   71522 node_ready.go:53] node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:40.239384   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181340.205508074
	
	I0717 01:55:40.239409   71603 fix.go:216] guest clock: 1721181340.205508074
	I0717 01:55:40.239419   71603 fix.go:229] Guest: 2024-07-17 01:55:40.205508074 +0000 UTC Remote: 2024-07-17 01:55:40.121261572 +0000 UTC m=+269.976034747 (delta=84.246502ms)
	I0717 01:55:40.239445   71603 fix.go:200] guest clock delta is within tolerance: 84.246502ms
	I0717 01:55:40.239453   71603 start.go:83] releasing machines lock for "no-preload-391501", held for 21.207695176s
	I0717 01:55:40.239486   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:40.239768   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetIP
	I0717 01:55:40.242534   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.242923   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:40.242956   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.243159   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:40.243649   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:40.243826   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:40.243924   71603 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:55:40.243975   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:40.244045   71603 ssh_runner.go:195] Run: cat /version.json
	I0717 01:55:40.244071   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:40.246599   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.246958   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:40.246984   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.247089   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:40.247153   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.247254   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:40.247401   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:40.247486   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:40.247510   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.247579   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 01:55:40.247669   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:40.247861   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:40.248031   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:40.248169   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 01:55:40.328497   71603 ssh_runner.go:195] Run: systemctl --version
	I0717 01:55:40.350092   71603 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:55:40.497644   71603 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:55:40.504094   71603 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:55:40.504164   71603 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:55:40.526752   71603 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:55:40.526777   71603 start.go:495] detecting cgroup driver to use...
	I0717 01:55:40.526842   71603 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:55:40.543537   71603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:55:40.557551   71603 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:55:40.557606   71603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:55:40.571755   71603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:55:40.585548   71603 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:55:40.702991   71603 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:55:40.849192   71603 docker.go:233] disabling docker service ...
	I0717 01:55:40.849276   71603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:55:40.864697   71603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:55:40.877940   71603 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:55:41.043588   71603 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:55:41.175359   71603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:55:41.191170   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:55:41.212440   71603 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 01:55:41.212508   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.224335   71603 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:55:41.224411   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.235721   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.247575   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.260018   71603 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:55:41.271526   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.285999   71603 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.307653   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
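The sed/grep commands above patch the cri-o drop-in config for the pause image, cgroup driver, conmon cgroup, and the unprivileged-port sysctl. A sketch of the relevant fragment of /etc/crio/crio.conf.d/02-crio.conf after these edits, assuming the usual TOML section layout (the real file may carry additional settings):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]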
	I0717 01:55:41.319272   71603 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:55:41.330544   71603 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:55:41.330637   71603 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:55:41.346698   71603 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
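The sysctl probe at 01:55:41.319 fails because the br_netfilter module is not loaded yet, so the commands above load the module and enable IPv4 forwarding. Condensed into a shell sketch of the same sequence (commands taken from the log, combined here only for illustration):

	sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"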
	I0717 01:55:41.361983   71603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:41.490052   71603 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:55:41.639509   71603 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:55:41.639626   71603 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:55:41.646714   71603 start.go:563] Will wait 60s for crictl version
	I0717 01:55:41.646793   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:41.650900   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:55:41.688112   71603 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:55:41.688188   71603 ssh_runner.go:195] Run: crio --version
	I0717 01:55:41.717335   71603 ssh_runner.go:195] Run: crio --version
	I0717 01:55:41.750767   71603 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0717 01:55:40.263857   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .Start
	I0717 01:55:40.264019   71929 main.go:141] libmachine: (old-k8s-version-901761) Ensuring networks are active...
	I0717 01:55:40.264709   71929 main.go:141] libmachine: (old-k8s-version-901761) Ensuring network default is active
	I0717 01:55:40.265165   71929 main.go:141] libmachine: (old-k8s-version-901761) Ensuring network mk-old-k8s-version-901761 is active
	I0717 01:55:40.265581   71929 main.go:141] libmachine: (old-k8s-version-901761) Getting domain xml...
	I0717 01:55:40.266340   71929 main.go:141] libmachine: (old-k8s-version-901761) Creating domain...
	I0717 01:55:41.562582   71929 main.go:141] libmachine: (old-k8s-version-901761) Waiting to get IP...
	I0717 01:55:41.563329   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:41.563802   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:41.563890   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:41.563781   72905 retry.go:31] will retry after 216.264296ms: waiting for machine to come up
	I0717 01:55:41.781168   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:41.781662   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:41.781690   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:41.781629   72905 retry.go:31] will retry after 275.269814ms: waiting for machine to come up
	I0717 01:55:42.058127   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:42.058525   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:42.058564   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:42.058498   72905 retry.go:31] will retry after 348.024497ms: waiting for machine to come up
	I0717 01:55:41.752123   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetIP
	I0717 01:55:41.755114   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:41.755571   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:41.755602   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:41.755863   71603 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 01:55:41.760869   71603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:55:41.775414   71603 kubeadm.go:883] updating cluster {Name:no-preload-391501 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-391501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:55:41.775563   71603 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 01:55:41.775609   71603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:55:41.815115   71603 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0717 01:55:41.815141   71603 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 01:55:41.815207   71603 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:41.815241   71603 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:41.815279   71603 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:41.815290   71603 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:41.815207   71603 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:41.815304   71603 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0717 01:55:41.815239   71603 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:41.815258   71603 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:41.817894   71603 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:41.817939   71603 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:41.817892   71603 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:41.817888   71603 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0717 01:55:41.818033   71603 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:41.817891   71603 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:41.817900   71603 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:41.817978   71603 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:42.014545   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0717 01:55:42.030064   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:42.034517   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:42.123584   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:42.130122   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:42.134935   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:42.136170   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:42.173650   71603 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0717 01:55:42.173707   71603 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:42.173718   71603 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0717 01:55:42.173755   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.173767   71603 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:42.173820   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.219689   71603 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0717 01:55:42.219745   71603 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:42.219792   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.240802   71603 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0717 01:55:42.240847   71603 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:42.240907   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.251152   71603 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0717 01:55:42.251189   71603 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:42.251225   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.254790   71603 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0717 01:55:42.254849   71603 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:42.254886   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:42.254895   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.254916   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:42.254951   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:42.255006   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:42.257984   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:42.267440   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:42.395407   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0717 01:55:42.395471   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0717 01:55:42.395513   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0717 01:55:42.395522   71603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:55:42.395558   71603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:55:42.395582   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0717 01:55:42.395592   71603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:55:42.395663   71603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:55:42.397740   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0717 01:55:42.397813   71603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:55:42.420577   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0717 01:55:42.420602   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0717 01:55:42.420619   71603 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:55:42.420640   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0717 01:55:42.420662   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:55:42.420676   71603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:55:42.420705   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0717 01:55:42.420711   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0717 01:55:42.420738   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0717 01:55:43.737662   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:44.581683   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.160996964s)
	I0717 01:55:44.581730   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0717 01:55:44.581753   71603 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:55:44.581754   71603 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.161058602s)
	I0717 01:55:44.581788   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0717 01:55:44.581810   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:55:44.581858   71603 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 01:55:44.581900   71603 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:44.581928   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:41.270830   71522 node_ready.go:49] node "default-k8s-diff-port-738184" has status "Ready":"True"
	I0717 01:55:41.270853   71522 node_ready.go:38] duration metric: took 7.004304151s for node "default-k8s-diff-port-738184" to be "Ready" ...
	I0717 01:55:41.270868   71522 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:55:41.278587   71522 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.285210   71522 pod_ready.go:92] pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:41.285236   71522 pod_ready.go:81] duration metric: took 6.623347ms for pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.285250   71522 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.291110   71522 pod_ready.go:92] pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:41.291133   71522 pod_ready.go:81] duration metric: took 5.874809ms for pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.291145   71522 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.297614   71522 pod_ready.go:92] pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:41.297636   71522 pod_ready.go:81] duration metric: took 6.483783ms for pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.297645   71522 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.305307   71522 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:42.305335   71522 pod_ready.go:81] duration metric: took 1.007681338s for pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.305349   71522 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.472190   71522 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:42.472222   71522 pod_ready.go:81] duration metric: took 166.864153ms for pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.472236   71522 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c4n94" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.871756   71522 pod_ready.go:92] pod "kube-proxy-c4n94" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:42.871780   71522 pod_ready.go:81] duration metric: took 399.536375ms for pod "kube-proxy-c4n94" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.871789   71522 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:43.272858   71522 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:43.272895   71522 pod_ready.go:81] duration metric: took 401.098971ms for pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:43.272913   71522 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:45.281019   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:42.407813   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:42.408311   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:42.408346   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:42.408218   72905 retry.go:31] will retry after 388.717436ms: waiting for machine to come up
	I0717 01:55:42.798810   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:42.799378   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:42.799411   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:42.799323   72905 retry.go:31] will retry after 661.391346ms: waiting for machine to come up
	I0717 01:55:43.462189   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:43.462654   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:43.462686   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:43.462603   72905 retry.go:31] will retry after 636.142497ms: waiting for machine to come up
	I0717 01:55:44.100416   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:44.100852   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:44.100874   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:44.100808   72905 retry.go:31] will retry after 781.652918ms: waiting for machine to come up
	I0717 01:55:44.883650   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:44.884137   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:44.884170   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:44.884088   72905 retry.go:31] will retry after 1.238608293s: waiting for machine to come up
	I0717 01:55:46.124419   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:46.124911   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:46.124942   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:46.124854   72905 retry.go:31] will retry after 1.169011508s: waiting for machine to come up
	I0717 01:55:47.295202   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:47.295679   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:47.295715   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:47.295632   72905 retry.go:31] will retry after 1.723987128s: waiting for machine to come up
	I0717 01:55:47.004929   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.423090292s)
	I0717 01:55:47.004968   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0717 01:55:47.004990   71603 ssh_runner.go:235] Completed: which crictl: (2.423045276s)
	I0717 01:55:47.005021   71603 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:55:47.005053   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:47.005067   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:55:49.097703   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.092610651s)
	I0717 01:55:49.097747   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0717 01:55:49.097776   71603 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:55:49.097836   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:55:49.097776   71603 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.092700925s)
	I0717 01:55:49.097953   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 01:55:49.098050   71603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:55:47.781233   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:49.786039   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:49.020883   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:49.021363   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:49.021396   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:49.021279   72905 retry.go:31] will retry after 2.098481296s: waiting for machine to come up
	I0717 01:55:51.121693   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:51.122253   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:51.122282   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:51.122192   72905 retry.go:31] will retry after 2.624839429s: waiting for machine to come up
	I0717 01:55:50.560197   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.462322087s)
	I0717 01:55:50.560292   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0717 01:55:50.560323   71603 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:55:50.560252   71603 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.462175943s)
	I0717 01:55:50.560373   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:55:50.560388   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 01:55:53.630471   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.070071936s)
	I0717 01:55:53.630509   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0717 01:55:53.630529   71603 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:55:53.630604   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:55:52.280585   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:54.779606   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:53.748796   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:53.749348   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:53.749390   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:53.749298   72905 retry.go:31] will retry after 3.47930356s: waiting for machine to come up
	I0717 01:55:57.231901   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.232407   71929 main.go:141] libmachine: (old-k8s-version-901761) Found IP for machine: 192.168.50.44
	I0717 01:55:57.232437   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has current primary IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.232449   71929 main.go:141] libmachine: (old-k8s-version-901761) Reserving static IP address...
	I0717 01:55:57.232880   71929 main.go:141] libmachine: (old-k8s-version-901761) Reserved static IP address: 192.168.50.44
	I0717 01:55:57.232928   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "old-k8s-version-901761", mac: "52:54:00:8f:84:01", ip: "192.168.50.44"} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.232937   71929 main.go:141] libmachine: (old-k8s-version-901761) Waiting for SSH to be available...
	I0717 01:55:57.232952   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | skip adding static IP to network mk-old-k8s-version-901761 - found existing host DHCP lease matching {name: "old-k8s-version-901761", mac: "52:54:00:8f:84:01", ip: "192.168.50.44"}
	I0717 01:55:57.232971   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | Getting to WaitForSSH function...
	I0717 01:55:57.235007   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.235208   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.235242   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.235421   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | Using SSH client type: external
	I0717 01:55:57.235461   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa (-rw-------)
	I0717 01:55:57.235502   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.44 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:55:57.235516   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | About to run SSH command:
	I0717 01:55:57.235530   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | exit 0
	I0717 01:55:57.362619   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | SSH cmd err, output: <nil>: 
	I0717 01:55:57.363106   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetConfigRaw
	I0717 01:55:57.363760   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:55:57.366213   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.366636   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.366666   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.366958   71929 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/config.json ...
	I0717 01:55:57.367165   71929 machine.go:94] provisionDockerMachine start ...
	I0717 01:55:57.367188   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:57.367392   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.370017   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.370354   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.370371   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.370577   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:57.370765   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.370935   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.371084   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:57.371325   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:57.371506   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:57.371518   71929 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:55:58.531714   71146 start.go:364] duration metric: took 53.154741813s to acquireMachinesLock for "embed-certs-940222"
	I0717 01:55:58.531773   71146 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:55:58.531784   71146 fix.go:54] fixHost starting: 
	I0717 01:55:58.532189   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:58.532237   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:58.549026   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43799
	I0717 01:55:58.549491   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:58.550001   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:55:58.550025   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:58.550363   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:58.550536   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:55:58.550707   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:55:58.552236   71146 fix.go:112] recreateIfNeeded on embed-certs-940222: state=Stopped err=<nil>
	I0717 01:55:58.552259   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	W0717 01:55:58.552397   71146 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:55:58.554487   71146 out.go:177] * Restarting existing kvm2 VM for "embed-certs-940222" ...
	I0717 01:55:57.478893   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:55:57.478921   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetMachineName
	I0717 01:55:57.479123   71929 buildroot.go:166] provisioning hostname "old-k8s-version-901761"
	I0717 01:55:57.479142   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetMachineName
	I0717 01:55:57.479330   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.482163   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.482531   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.482579   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.482739   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:57.482937   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.483111   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.483264   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:57.483454   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:57.483632   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:57.483648   71929 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-901761 && echo "old-k8s-version-901761" | sudo tee /etc/hostname
	I0717 01:55:57.613409   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-901761
	
	I0717 01:55:57.613440   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.616228   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.616614   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.616655   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.616860   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:57.617040   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.617222   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.617383   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:57.617574   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:57.617778   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:57.617794   71929 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-901761' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-901761/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-901761' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:55:57.737648   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:55:57.737683   71929 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:55:57.737703   71929 buildroot.go:174] setting up certificates
	I0717 01:55:57.737711   71929 provision.go:84] configureAuth start
	I0717 01:55:57.737721   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetMachineName
	I0717 01:55:57.738028   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:55:57.741089   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.741532   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.741556   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.741741   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.744444   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.744917   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.744947   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.745111   71929 provision.go:143] copyHostCerts
	I0717 01:55:57.745185   71929 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:55:57.745202   71929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:55:57.745273   71929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:55:57.745393   71929 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:55:57.745405   71929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:55:57.745437   71929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:55:57.745517   71929 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:55:57.745527   71929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:55:57.745545   71929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:55:57.745602   71929 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-901761 san=[127.0.0.1 192.168.50.44 localhost minikube old-k8s-version-901761]
	I0717 01:55:57.830872   71929 provision.go:177] copyRemoteCerts
	I0717 01:55:57.830939   71929 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:55:57.830972   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.833463   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.833741   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.833777   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.833887   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:57.834083   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.834250   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:57.834403   71929 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:55:57.918346   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 01:55:57.954250   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:55:57.979770   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0717 01:55:58.005161   71929 provision.go:87] duration metric: took 267.436975ms to configureAuth
	I0717 01:55:58.005193   71929 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:55:58.005412   71929 config.go:182] Loaded profile config "old-k8s-version-901761": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 01:55:58.005493   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.008255   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.008626   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.008663   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.008833   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.009006   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.009170   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.009298   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.009464   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:58.009616   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:58.009639   71929 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:55:58.281081   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:55:58.281112   71929 machine.go:97] duration metric: took 913.933405ms to provisionDockerMachine
	I0717 01:55:58.281121   71929 start.go:293] postStartSetup for "old-k8s-version-901761" (driver="kvm2")
	I0717 01:55:58.281130   71929 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:55:58.281144   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.281497   71929 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:55:58.281533   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.284465   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.284812   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.284840   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.285023   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.285207   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.285441   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.285650   71929 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:55:58.377149   71929 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:55:58.381709   71929 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:55:58.381731   71929 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:55:58.381798   71929 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:55:58.381887   71929 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:55:58.381972   71929 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:55:58.392916   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:58.420677   71929 start.go:296] duration metric: took 139.542186ms for postStartSetup
	I0717 01:55:58.420721   71929 fix.go:56] duration metric: took 18.181102939s for fixHost
	I0717 01:55:58.420745   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.423582   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.423961   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.423989   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.424169   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.424372   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.424557   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.424693   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.424859   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:58.425040   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:58.425053   71929 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 01:55:58.531563   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181358.508735025
	
	I0717 01:55:58.531585   71929 fix.go:216] guest clock: 1721181358.508735025
	I0717 01:55:58.531594   71929 fix.go:229] Guest: 2024-07-17 01:55:58.508735025 +0000 UTC Remote: 2024-07-17 01:55:58.420726806 +0000 UTC m=+251.057483904 (delta=88.008219ms)
	I0717 01:55:58.531617   71929 fix.go:200] guest clock delta is within tolerance: 88.008219ms
	I0717 01:55:58.531624   71929 start.go:83] releasing machines lock for "old-k8s-version-901761", held for 18.292040224s
	I0717 01:55:58.531655   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.531981   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:55:58.534476   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.534967   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.534996   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.535258   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.535802   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.535990   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.536105   71929 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:55:58.536183   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.536244   71929 ssh_runner.go:195] Run: cat /version.json
	I0717 01:55:58.536275   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.539139   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.539401   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.539534   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.539560   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.539768   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.539815   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.539845   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.539968   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.540000   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.540116   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.540142   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.540243   71929 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:55:58.540332   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.540468   71929 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:55:58.628291   71929 ssh_runner.go:195] Run: systemctl --version
	I0717 01:55:58.656964   71929 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:55:58.806516   71929 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:55:58.815051   71929 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:55:58.815113   71929 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:55:58.838575   71929 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:55:58.838596   71929 start.go:495] detecting cgroup driver to use...
	I0717 01:55:58.838662   71929 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:55:58.855728   71929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:55:58.875221   71929 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:55:58.875285   71929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:55:58.889781   71929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:55:58.903832   71929 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:55:59.026815   71929 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:55:59.173879   71929 docker.go:233] disabling docker service ...
	I0717 01:55:59.173964   71929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:55:59.192906   71929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:55:59.208262   71929 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:55:59.368178   71929 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:55:59.500335   71929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:55:59.514795   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:55:59.535553   71929 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 01:55:59.535631   71929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:59.548304   71929 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:55:59.548376   71929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:59.563066   71929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:59.578452   71929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:59.593447   71929 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:55:59.606239   71929 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:55:59.617051   71929 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:55:59.617118   71929 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:55:59.632601   71929 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:55:59.645034   71929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:59.812343   71929 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:55:59.969366   71929 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:55:59.969444   71929 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:55:59.974286   71929 start.go:563] Will wait 60s for crictl version
	I0717 01:55:59.974335   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:55:59.978280   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:56:00.020399   71929 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:56:00.020489   71929 ssh_runner.go:195] Run: crio --version
	I0717 01:56:00.049811   71929 ssh_runner.go:195] Run: crio --version
	I0717 01:56:00.081952   71929 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0717 01:55:55.703286   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.07265838s)
	I0717 01:55:55.703312   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0717 01:55:55.703342   71603 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:55:55.703396   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:55:56.651520   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 01:55:56.651563   71603 cache_images.go:123] Successfully loaded all cached images
	I0717 01:55:56.651569   71603 cache_images.go:92] duration metric: took 14.83641531s to LoadCachedImages
	I0717 01:55:56.651581   71603 kubeadm.go:934] updating node { 192.168.61.174 8443 v1.31.0-beta.0 crio true true} ...
	I0717 01:55:56.651702   71603 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-391501 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-391501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:55:56.651770   71603 ssh_runner.go:195] Run: crio config
	I0717 01:55:56.700129   71603 cni.go:84] Creating CNI manager for ""
	I0717 01:55:56.700152   71603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:55:56.700162   71603 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:55:56.700189   71603 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.174 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-391501 NodeName:no-preload-391501 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:55:56.700315   71603 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-391501"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:55:56.700372   71603 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 01:55:56.711859   71603 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:55:56.711936   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:55:56.721994   71603 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0717 01:55:56.738335   71603 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 01:55:56.755198   71603 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0717 01:55:56.772467   71603 ssh_runner.go:195] Run: grep 192.168.61.174	control-plane.minikube.internal$ /etc/hosts
	I0717 01:55:56.777580   71603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:55:56.792767   71603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:56.913075   71603 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:55:56.930746   71603 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501 for IP: 192.168.61.174
	I0717 01:55:56.930768   71603 certs.go:194] generating shared ca certs ...
	I0717 01:55:56.930783   71603 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:55:56.930929   71603 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:55:56.930968   71603 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:55:56.930978   71603 certs.go:256] generating profile certs ...
	I0717 01:55:56.931050   71603 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/client.key
	I0717 01:55:56.931112   71603 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/apiserver.key.a30174c9
	I0717 01:55:56.931153   71603 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/proxy-client.key
	I0717 01:55:56.931292   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:55:56.931331   71603 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:55:56.931344   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:55:56.931373   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:55:56.931404   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:55:56.931434   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:55:56.931478   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:56.932180   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:55:56.971111   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:55:57.016791   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:55:57.049766   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:55:57.078139   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 01:55:57.109781   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 01:55:57.137912   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:55:57.165141   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 01:55:57.190210   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:55:57.214366   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:55:57.239518   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:55:57.265505   71603 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:55:57.283773   71603 ssh_runner.go:195] Run: openssl version
	I0717 01:55:57.289846   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:55:57.300434   71603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:57.305370   71603 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:57.305456   71603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:57.311765   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:55:57.322769   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:55:57.334122   71603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:55:57.338774   71603 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:55:57.338823   71603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:55:57.344721   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:55:57.356476   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:55:57.368672   71603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:55:57.374055   71603 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:55:57.374107   71603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:55:57.380256   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:55:57.392428   71603 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:55:57.397593   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:55:57.404378   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:55:57.411094   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:55:57.418536   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:55:57.425312   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:55:57.431841   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 01:55:57.438615   71603 kubeadm.go:392] StartCluster: {Name:no-preload-391501 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-391501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:55:57.438696   71603 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:55:57.438782   71603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:55:57.482932   71603 cri.go:89] found id: ""
	I0717 01:55:57.482993   71603 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:55:57.493813   71603 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:55:57.493832   71603 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:55:57.493872   71603 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:55:57.504757   71603 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:55:57.505655   71603 kubeconfig.go:125] found "no-preload-391501" server: "https://192.168.61.174:8443"
	I0717 01:55:57.507634   71603 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:55:57.517990   71603 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.174
	I0717 01:55:57.518025   71603 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:55:57.518038   71603 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:55:57.518090   71603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:55:57.557504   71603 cri.go:89] found id: ""
	I0717 01:55:57.557588   71603 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:55:57.574074   71603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:55:57.583703   71603 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:55:57.583724   71603 kubeadm.go:157] found existing configuration files:
	
	I0717 01:55:57.583768   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:55:57.593924   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:55:57.593992   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:55:57.606945   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:55:57.616803   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:55:57.616847   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:55:57.627215   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:55:57.637121   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:55:57.637179   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:55:57.646291   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:55:57.655314   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:55:57.655372   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:55:57.666994   71603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:55:57.677582   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:57.798148   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:59.316598   71603 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.518419797s)
	I0717 01:55:59.316629   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:59.581666   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:59.675003   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:59.748682   71603 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:55:59.748771   71603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:55:56.781465   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:59.280394   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:00.083384   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:56:00.086085   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:56:00.086454   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:56:00.086494   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:56:00.086710   71929 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 01:56:00.091322   71929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:56:00.104102   71929 kubeadm.go:883] updating cluster {Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:56:00.104237   71929 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 01:56:00.104309   71929 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:56:00.152445   71929 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 01:56:00.152537   71929 ssh_runner.go:195] Run: which lz4
	I0717 01:56:00.156760   71929 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 01:56:00.161123   71929 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:56:00.161149   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 01:56:02.031804   71929 crio.go:462] duration metric: took 1.875087246s to copy over tarball
	I0717 01:56:02.031904   71929 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:55:58.556014   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Start
	I0717 01:55:58.556171   71146 main.go:141] libmachine: (embed-certs-940222) Ensuring networks are active...
	I0717 01:55:58.556866   71146 main.go:141] libmachine: (embed-certs-940222) Ensuring network default is active
	I0717 01:55:58.557237   71146 main.go:141] libmachine: (embed-certs-940222) Ensuring network mk-embed-certs-940222 is active
	I0717 01:55:58.557686   71146 main.go:141] libmachine: (embed-certs-940222) Getting domain xml...
	I0717 01:55:58.558375   71146 main.go:141] libmachine: (embed-certs-940222) Creating domain...
	I0717 01:55:59.917419   71146 main.go:141] libmachine: (embed-certs-940222) Waiting to get IP...
	I0717 01:55:59.918379   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:55:59.918849   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:55:59.918908   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:55:59.918833   73097 retry.go:31] will retry after 248.560075ms: waiting for machine to come up
	I0717 01:56:00.169337   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:00.169877   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:00.169898   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:00.169837   73097 retry.go:31] will retry after 380.159418ms: waiting for machine to come up
	I0717 01:56:00.551472   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:00.552033   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:00.552076   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:00.551987   73097 retry.go:31] will retry after 439.990107ms: waiting for machine to come up
	I0717 01:56:00.993776   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:00.994337   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:00.994351   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:00.994319   73097 retry.go:31] will retry after 415.462036ms: waiting for machine to come up
	I0717 01:56:01.412114   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:01.412508   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:01.412535   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:01.412484   73097 retry.go:31] will retry after 660.852153ms: waiting for machine to come up
	I0717 01:56:02.075095   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:02.075519   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:02.075541   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:02.075498   73097 retry.go:31] will retry after 788.200532ms: waiting for machine to come up
	I0717 01:56:00.249300   71603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:00.749610   71603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:00.823943   71603 api_server.go:72] duration metric: took 1.075254107s to wait for apiserver process to appear ...
	I0717 01:56:00.823980   71603 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:56:00.824006   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:00.825286   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": dial tcp 192.168.61.174:8443: connect: connection refused
	I0717 01:56:01.325032   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
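	(Reference note, not part of the captured run: the healthz probe the test loops on above can be reproduced by hand against the address shown in the log; -k skips TLS verification because the API server certificate is issued by minikube's own CA.)
	curl -k https://192.168.61.174:8443/healthz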
	I0717 01:56:01.281044   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:03.281329   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:05.092637   71929 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.060698331s)
	I0717 01:56:05.092674   71929 crio.go:469] duration metric: took 3.060839356s to extract the tarball
	I0717 01:56:05.092682   71929 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 01:56:05.135461   71929 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:56:05.170789   71929 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 01:56:05.170814   71929 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 01:56:05.170853   71929 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:56:05.170884   71929 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.170908   71929 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.170961   71929 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 01:56:05.171077   71929 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.171126   71929 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.171138   71929 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.171462   71929 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.172182   71929 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 01:56:05.172224   71929 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.172251   71929 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:56:05.172296   71929 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.172362   71929 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.172415   71929 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.172449   71929 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.172251   71929 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.372794   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.415131   71929 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 01:56:05.415181   71929 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.415231   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.419179   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.446530   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 01:56:05.452583   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 01:56:05.485692   71929 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 01:56:05.485734   71929 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 01:56:05.485780   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.486154   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.487346   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.489408   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.490486   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 01:56:05.494929   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.499420   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.593505   71929 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 01:56:05.593587   71929 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.593638   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.632564   71929 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 01:56:05.632615   71929 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.632667   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.657745   71929 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 01:56:05.657792   71929 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.657852   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.657863   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 01:56:05.657908   71929 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 01:56:05.657943   71929 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.657958   71929 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 01:56:05.657976   71929 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.657980   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.658004   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.658037   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.658077   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.671679   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.671708   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.736572   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 01:56:05.736599   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 01:56:05.736671   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.758178   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 01:56:05.758210   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 01:56:05.787948   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 01:56:06.882199   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:56:07.025117   71929 cache_images.go:92] duration metric: took 1.854284265s to LoadCachedImages
	W0717 01:56:07.025227   71929 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0717 01:56:07.025245   71929 kubeadm.go:934] updating node { 192.168.50.44 8443 v1.20.0 crio true true} ...
	I0717 01:56:07.025378   71929 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-901761 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.44
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:56:07.025465   71929 ssh_runner.go:195] Run: crio config
	I0717 01:56:07.081517   71929 cni.go:84] Creating CNI manager for ""
	I0717 01:56:07.081543   71929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:56:07.081560   71929 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:56:07.081584   71929 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.44 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-901761 NodeName:old-k8s-version-901761 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.44"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.44 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 01:56:07.081749   71929 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.44
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-901761"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.44
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.44"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
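	(Reference note, not part of the captured run: in the generated KubeletConfiguration above, the "0%" evictionHard thresholds disable disk-pressure eviction, and the zeroed conntrack values skip the sysctl tuning called out in the comments. One way to check what kubeadm actually rendered on the node, using the profile name and the kubelet config path that appear earlier in this log, is:)
	minikube -p old-k8s-version-901761 ssh "sudo grep -A3 evictionHard /var/lib/kubelet/config.yaml"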
	
	I0717 01:56:07.081833   71929 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 01:56:07.092233   71929 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:56:07.092335   71929 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:56:07.102086   71929 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0717 01:56:07.121538   71929 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:56:07.139112   71929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0717 01:56:07.157397   71929 ssh_runner.go:195] Run: grep 192.168.50.44	control-plane.minikube.internal$ /etc/hosts
	I0717 01:56:07.161818   71929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.44	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:56:07.174723   71929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:56:07.307484   71929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:56:07.325948   71929 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761 for IP: 192.168.50.44
	I0717 01:56:07.325974   71929 certs.go:194] generating shared ca certs ...
	I0717 01:56:07.326002   71929 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:07.326164   71929 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:56:07.326216   71929 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:56:07.326229   71929 certs.go:256] generating profile certs ...
	I0717 01:56:07.326351   71929 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/client.key
	I0717 01:56:07.326416   71929 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.key.f41162e5
	I0717 01:56:07.326461   71929 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/proxy-client.key
	I0717 01:56:07.326630   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:56:07.326668   71929 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:56:07.326681   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:56:07.326700   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:56:07.326724   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:56:07.326767   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:56:07.326828   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:56:07.327702   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:56:07.377671   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:56:02.864980   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:02.865620   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:02.865656   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:02.865503   73097 retry.go:31] will retry after 1.00461953s: waiting for machine to come up
	I0717 01:56:03.871702   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:03.872187   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:03.872215   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:03.872133   73097 retry.go:31] will retry after 1.15731846s: waiting for machine to come up
	I0717 01:56:05.030767   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:05.031263   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:05.031285   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:05.031209   73097 retry.go:31] will retry after 1.704165162s: waiting for machine to come up
	I0717 01:56:06.737975   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:06.738337   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:06.738386   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:06.738307   73097 retry.go:31] will retry after 2.014062128s: waiting for machine to come up
	I0717 01:56:06.326066   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:06.326112   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:05.780615   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:08.281127   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:07.413171   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:56:07.443671   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:56:07.482883   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 01:56:07.527280   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:56:07.571200   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:56:07.612296   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 01:56:07.638012   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:56:07.662018   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:56:07.688033   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:56:07.721827   71929 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:56:07.741517   71929 ssh_runner.go:195] Run: openssl version
	I0717 01:56:07.747466   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:56:07.758615   71929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:56:07.763382   71929 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:56:07.763439   71929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:56:07.769358   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:56:07.781802   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:56:07.792763   71929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:56:07.797629   71929 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:56:07.797681   71929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:56:07.803879   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:56:07.815479   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:56:07.828292   71929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:07.832769   71929 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:07.832829   71929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:07.838958   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
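	(Reference note: the hash-named symlinks created above, e.g. /etc/ssl/certs/b5213941.0 for minikubeCA.pem, use OpenSSL's subject-name hash, which is what the `openssl x509 -hash -noout` calls in this log print; the system trust store uses that naming to look up a CA by subject during verification. Reproduced by hand:)
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 here, per the symlink name used above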
	I0717 01:56:07.850108   71929 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:56:07.854758   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:56:07.860661   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:56:07.866484   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:56:07.872302   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:56:07.878252   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:56:07.884275   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 01:56:07.890148   71929 kubeadm.go:392] StartCluster: {Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:56:07.890264   71929 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:56:07.890343   71929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:56:07.930081   71929 cri.go:89] found id: ""
	I0717 01:56:07.930153   71929 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:56:07.941371   71929 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:56:07.941396   71929 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:56:07.941445   71929 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:56:07.955229   71929 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:56:07.957263   71929 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-901761" does not appear in /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:56:07.959002   71929 kubeconfig.go:62] /home/jenkins/minikube-integration/19264-3908/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-901761" cluster setting kubeconfig missing "old-k8s-version-901761" context setting]
	I0717 01:56:07.960384   71929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:07.962748   71929 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:56:07.973815   71929 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.44
	I0717 01:56:07.973851   71929 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:56:07.973864   71929 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:56:07.973933   71929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:56:08.020169   71929 cri.go:89] found id: ""
	I0717 01:56:08.020247   71929 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:56:08.038015   71929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:56:08.049272   71929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:56:08.049294   71929 kubeadm.go:157] found existing configuration files:
	
	I0717 01:56:08.049336   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:56:08.058953   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:56:08.059025   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:56:08.069034   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:56:08.078748   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:56:08.078817   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:56:08.089660   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:56:08.099521   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:56:08.099583   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:56:08.109831   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:56:08.120340   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:56:08.120400   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:56:08.130884   71929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:56:08.141008   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:08.275189   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:09.006841   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:09.255401   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:09.376659   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:09.475840   71929 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:56:09.475937   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:09.976926   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:10.476192   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:10.976705   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:11.476386   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:11.976459   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:08.753835   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:08.754316   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:08.754347   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:08.754264   73097 retry.go:31] will retry after 2.005810517s: waiting for machine to come up
	I0717 01:56:10.761600   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:10.762022   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:10.762053   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:10.761980   73097 retry.go:31] will retry after 2.631438855s: waiting for machine to come up
	I0717 01:56:11.327297   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:11.327348   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:10.779534   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:13.278417   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:15.279200   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:12.476819   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:12.976633   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:13.476076   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:13.976279   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:14.476885   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:14.976972   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:15.476823   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:15.976917   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:16.476765   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:16.976609   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:13.395592   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:13.395949   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:13.395991   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:13.395905   73097 retry.go:31] will retry after 3.565162998s: waiting for machine to come up
	I0717 01:56:16.964948   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:16.965424   71146 main.go:141] libmachine: (embed-certs-940222) Found IP for machine: 192.168.72.225
	I0717 01:56:16.965455   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has current primary IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:16.965465   71146 main.go:141] libmachine: (embed-certs-940222) Reserving static IP address...
	I0717 01:56:16.966065   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "embed-certs-940222", mac: "52:54:00:78:d5:92", ip: "192.168.72.225"} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:16.966092   71146 main.go:141] libmachine: (embed-certs-940222) DBG | skip adding static IP to network mk-embed-certs-940222 - found existing host DHCP lease matching {name: "embed-certs-940222", mac: "52:54:00:78:d5:92", ip: "192.168.72.225"}
	I0717 01:56:16.966107   71146 main.go:141] libmachine: (embed-certs-940222) Reserved static IP address: 192.168.72.225
	I0717 01:56:16.966122   71146 main.go:141] libmachine: (embed-certs-940222) Waiting for SSH to be available...
	I0717 01:56:16.966150   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Getting to WaitForSSH function...
	I0717 01:56:16.968287   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:16.968642   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:16.968688   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:16.968758   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Using SSH client type: external
	I0717 01:56:16.968782   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa (-rw-------)
	I0717 01:56:16.968842   71146 main.go:141] libmachine: (embed-certs-940222) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:56:16.968872   71146 main.go:141] libmachine: (embed-certs-940222) DBG | About to run SSH command:
	I0717 01:56:16.968888   71146 main.go:141] libmachine: (embed-certs-940222) DBG | exit 0
	I0717 01:56:17.090641   71146 main.go:141] libmachine: (embed-certs-940222) DBG | SSH cmd err, output: <nil>: 
	I0717 01:56:17.091120   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetConfigRaw
	I0717 01:56:17.091720   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetIP
	I0717 01:56:17.094205   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.094541   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.094592   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.094810   71146 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/config.json ...
	I0717 01:56:17.095001   71146 machine.go:94] provisionDockerMachine start ...
	I0717 01:56:17.095022   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:17.095223   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.097395   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.097680   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.097707   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.097848   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.098021   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.098170   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.098311   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.098491   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:17.098683   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:17.098695   71146 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:56:17.203054   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:56:17.203080   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:56:17.203364   71146 buildroot.go:166] provisioning hostname "embed-certs-940222"
	I0717 01:56:17.203402   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:56:17.203575   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.206404   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.206826   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.206868   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.207076   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.207282   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.207471   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.207611   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.207793   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:17.207985   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:17.207997   71146 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-940222 && echo "embed-certs-940222" | sudo tee /etc/hostname
	I0717 01:56:17.326485   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-940222
	
	I0717 01:56:17.326512   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.329226   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.329629   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.329659   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.329834   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.329996   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.330148   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.330265   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.330417   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:17.330619   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:17.330642   71146 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-940222' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-940222/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-940222' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:56:17.439258   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:56:17.439285   71146 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:56:17.439315   71146 buildroot.go:174] setting up certificates
	I0717 01:56:17.439324   71146 provision.go:84] configureAuth start
	I0717 01:56:17.439332   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:56:17.439656   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetIP
	I0717 01:56:17.442348   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.442765   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.442796   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.442976   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.445418   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.445767   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.445803   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.446000   71146 provision.go:143] copyHostCerts
	I0717 01:56:17.446081   71146 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:56:17.446098   71146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:56:17.446171   71146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:56:17.446265   71146 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:56:17.446272   71146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:56:17.446292   71146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:56:17.446346   71146 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:56:17.446353   71146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:56:17.446370   71146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:56:17.446418   71146 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.embed-certs-940222 san=[127.0.0.1 192.168.72.225 embed-certs-940222 localhost minikube]
	I0717 01:56:17.578140   71146 provision.go:177] copyRemoteCerts
	I0717 01:56:17.578195   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:56:17.578221   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.581141   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.581432   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.581457   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.581697   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.581892   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.582038   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.582219   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:17.664867   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:56:17.691053   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 01:56:17.715816   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 01:56:17.742153   71146 provision.go:87] duration metric: took 302.817653ms to configureAuth
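	(Annotation: the server certificate generated at 01:56:17.446418 above carries org jenkins.embed-certs-940222 and the SANs [127.0.0.1 192.168.72.225 embed-certs-940222 localhost minikube]. minikube creates it programmatically in Go; the shell below is only an illustrative openssl equivalent — the file names server.csr/server.pem and the 365-day validity are assumptions, while ca.pem/ca-key.pem are the CA files shown in the log.)

	    # Illustrative sketch only: an openssl equivalent of the SAN server cert minikube builds in Go.
	    # ca.pem/ca-key.pem come from the log above; the other names and the validity are assumed.
	    openssl req -new -newkey rsa:2048 -nodes \
	      -keyout server-key.pem -out server.csr \
	      -subj "/O=jenkins.embed-certs-940222/CN=minikube"
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	      -days 365 -out server.pem \
	      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.72.225,DNS:embed-certs-940222,DNS:localhost,DNS:minikube')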
	I0717 01:56:17.742180   71146 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:56:17.742405   71146 config.go:182] Loaded profile config "embed-certs-940222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:56:17.742486   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.745102   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.745369   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.745398   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.745608   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.745820   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.746019   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.746209   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.746510   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:17.746738   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:17.746761   71146 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:56:18.017395   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:56:18.017420   71146 machine.go:97] duration metric: took 922.405002ms to provisionDockerMachine
	I0717 01:56:18.017433   71146 start.go:293] postStartSetup for "embed-certs-940222" (driver="kvm2")
	I0717 01:56:18.017449   71146 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:56:18.017469   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.017817   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:56:18.017846   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:18.020599   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.021051   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.021081   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.021228   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:18.021410   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.021556   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:18.021660   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:18.101432   71146 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:56:18.105722   71146 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:56:18.105742   71146 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:56:18.105797   71146 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:56:18.105866   71146 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:56:18.105944   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:56:18.115228   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:56:18.139857   71146 start.go:296] duration metric: took 122.411322ms for postStartSetup
	I0717 01:56:18.139924   71146 fix.go:56] duration metric: took 19.608111597s for fixHost
	I0717 01:56:18.139951   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:18.142466   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.142865   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.142886   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.143098   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:18.143262   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.143444   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.143662   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:18.143852   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:18.144022   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:18.144033   71146 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:56:18.243604   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181378.218663213
	
	I0717 01:56:18.243635   71146 fix.go:216] guest clock: 1721181378.218663213
	I0717 01:56:18.243644   71146 fix.go:229] Guest: 2024-07-17 01:56:18.218663213 +0000 UTC Remote: 2024-07-17 01:56:18.139933424 +0000 UTC m=+355.354069584 (delta=78.729789ms)
	I0717 01:56:18.243662   71146 fix.go:200] guest clock delta is within tolerance: 78.729789ms
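	(Annotation: the delta logged above is simply the guest clock reading minus the Remote timestamp recorded by the test host a moment earlier:

	    1721181378.218663213 s (guest) - 1721181378.139933424 s (remote) = 0.078729789 s ≈ 78.73 ms

	which the log reports as within minikube's drift tolerance, so no clock adjustment is made.)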
	I0717 01:56:18.243667   71146 start.go:83] releasing machines lock for "embed-certs-940222", held for 19.711916707s
	I0717 01:56:18.243684   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.243952   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetIP
	I0717 01:56:18.246454   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.246881   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.246907   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.247135   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.247618   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.247828   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.247919   71146 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:56:18.247958   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:18.248050   71146 ssh_runner.go:195] Run: cat /version.json
	I0717 01:56:18.248074   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:18.250520   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.250914   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.250952   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.250973   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.251222   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:18.251403   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.251463   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.251495   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.251575   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:18.251668   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:18.251747   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:18.251817   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.251975   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:18.252103   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:18.351600   71146 ssh_runner.go:195] Run: systemctl --version
	I0717 01:56:18.357586   71146 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:56:18.503767   71146 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:56:18.511637   71146 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:56:18.511724   71146 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:56:18.530209   71146 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:56:18.530235   71146 start.go:495] detecting cgroup driver to use...
	I0717 01:56:18.530303   71146 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:56:18.551740   71146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:56:18.566975   71146 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:56:18.567044   71146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:56:18.585100   71146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:56:18.601151   71146 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:56:18.735644   71146 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:56:18.895436   71146 docker.go:233] disabling docker service ...
	I0717 01:56:18.895505   71146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:56:18.910354   71146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:56:18.922999   71146 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:56:19.065365   71146 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:56:19.179337   71146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:56:19.194454   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:56:19.213281   71146 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 01:56:19.213339   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.223531   71146 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:56:19.223594   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.233691   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.243695   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.255192   71146 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:56:19.266082   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.276861   71146 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.295903   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.306114   71146 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:56:19.316226   71146 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:56:19.316275   71146 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:56:19.329402   71146 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:56:19.340622   71146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:56:19.456624   71146 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:56:19.605945   71146 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:56:19.606051   71146 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:56:19.611067   71146 start.go:563] Will wait 60s for crictl version
	I0717 01:56:19.611116   71146 ssh_runner.go:195] Run: which crictl
	I0717 01:56:19.615065   71146 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:56:19.662925   71146 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:56:19.662989   71146 ssh_runner.go:195] Run: crio --version
	I0717 01:56:19.693240   71146 ssh_runner.go:195] Run: crio --version
	I0717 01:56:19.722332   71146 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
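	(Annotation: taken together, the sed edits applied at 01:56:19 above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings — a reconstruction from the logged commands, not a capture from the VM, and any pre-existing keys in that file are left in place:

	    pause_image = "registry.k8s.io/pause:3.9"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]

	The effective configuration can be confirmed with the crio config call the test issues later in this log.)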
	I0717 01:56:16.328318   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:16.328371   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:17.780821   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:19.780921   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:17.476562   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:17.976663   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:18.476958   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:18.976722   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:19.476641   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:19.976079   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:20.476899   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:20.976553   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:21.476087   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:21.976659   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:19.723930   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetIP
	I0717 01:56:19.726730   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:19.727084   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:19.727107   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:19.727314   71146 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 01:56:19.731814   71146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:56:19.745514   71146 kubeadm.go:883] updating cluster {Name:embed-certs-940222 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.2 ClusterName:embed-certs-940222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.225 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:56:19.745622   71146 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:56:19.745677   71146 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:56:19.782922   71146 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 01:56:19.782988   71146 ssh_runner.go:195] Run: which lz4
	I0717 01:56:19.786946   71146 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 01:56:19.791298   71146 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:56:19.791323   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 01:56:21.230910   71146 crio.go:462] duration metric: took 1.443984707s to copy over tarball
	I0717 01:56:21.231003   71146 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:56:21.328607   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:21.328654   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:21.345118   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": read tcp 192.168.61.1:36190->192.168.61.174:8443: read: connection reset by peer
	I0717 01:56:21.824753   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:21.825500   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": dial tcp 192.168.61.174:8443: connect: connection refused
	I0717 01:56:22.325079   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:22.280465   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:24.779729   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:22.475994   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:22.976928   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:23.476906   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:23.975980   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:24.476208   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:24.976090   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:25.476425   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:25.976072   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:26.476991   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:26.976180   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:23.517174   71146 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.286133857s)
	I0717 01:56:23.517200   71146 crio.go:469] duration metric: took 2.286263798s to extract the tarball
	I0717 01:56:23.517210   71146 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 01:56:23.554084   71146 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:56:23.603831   71146 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:56:23.603861   71146 cache_images.go:84] Images are preloaded, skipping loading
	I0717 01:56:23.603871   71146 kubeadm.go:934] updating node { 192.168.72.225 8443 v1.30.2 crio true true} ...
	I0717 01:56:23.604004   71146 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-940222 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-940222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:56:23.604087   71146 ssh_runner.go:195] Run: crio config
	I0717 01:56:23.658775   71146 cni.go:84] Creating CNI manager for ""
	I0717 01:56:23.658794   71146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:56:23.658803   71146 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:56:23.658826   71146 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.225 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-940222 NodeName:embed-certs-940222 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:56:23.659007   71146 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-940222"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.225
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.225"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:56:23.659092   71146 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 01:56:23.669971   71146 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:56:23.670042   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:56:23.680949   71146 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0717 01:56:23.698917   71146 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:56:23.716218   71146 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0717 01:56:23.733971   71146 ssh_runner.go:195] Run: grep 192.168.72.225	control-plane.minikube.internal$ /etc/hosts
	I0717 01:56:23.738112   71146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.225	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:56:23.750915   71146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:56:23.894690   71146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:56:23.913418   71146 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222 for IP: 192.168.72.225
	I0717 01:56:23.913440   71146 certs.go:194] generating shared ca certs ...
	I0717 01:56:23.913456   71146 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:23.913630   71146 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:56:23.913703   71146 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:56:23.913729   71146 certs.go:256] generating profile certs ...
	I0717 01:56:23.913856   71146 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/client.key
	I0717 01:56:23.913926   71146 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/apiserver.key.d13a776d
	I0717 01:56:23.913968   71146 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/proxy-client.key
	I0717 01:56:23.914081   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:56:23.914123   71146 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:56:23.914134   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:56:23.914161   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:56:23.914188   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:56:23.914214   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:56:23.914256   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:56:23.914925   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:56:23.961346   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:56:24.006765   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:56:24.036852   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:56:24.064984   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0717 01:56:24.090778   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:56:24.116146   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:56:24.142429   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 01:56:24.168427   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:56:24.193691   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:56:24.218852   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:56:24.242932   71146 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:56:24.261434   71146 ssh_runner.go:195] Run: openssl version
	I0717 01:56:24.267358   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:56:24.280319   71146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:56:24.285286   71146 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:56:24.285358   71146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:56:24.291896   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:56:24.304027   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:56:24.315542   71146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:56:24.320212   71146 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:56:24.320283   71146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:56:24.326123   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:56:24.339982   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:56:24.352301   71146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:24.357023   71146 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:24.357078   71146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:24.363112   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:56:24.375910   71146 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:56:24.380986   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:56:24.387276   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:56:24.393718   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:56:24.400367   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:56:24.406600   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:56:24.413161   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 01:56:24.420455   71146 kubeadm.go:392] StartCluster: {Name:embed-certs-940222 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-940222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.225 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:56:24.420578   71146 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:56:24.420643   71146 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:56:24.460702   71146 cri.go:89] found id: ""
	I0717 01:56:24.460792   71146 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:56:24.472047   71146 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:56:24.472064   71146 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:56:24.472105   71146 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:56:24.483092   71146 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:56:24.484146   71146 kubeconfig.go:125] found "embed-certs-940222" server: "https://192.168.72.225:8443"
	I0717 01:56:24.486112   71146 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:56:24.497462   71146 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.225
	I0717 01:56:24.497496   71146 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:56:24.497511   71146 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:56:24.497571   71146 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:56:24.541423   71146 cri.go:89] found id: ""
	I0717 01:56:24.541486   71146 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:56:24.563272   71146 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:56:24.574859   71146 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:56:24.574883   71146 kubeadm.go:157] found existing configuration files:
	
	I0717 01:56:24.574930   71146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:56:24.584960   71146 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:56:24.585022   71146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:56:24.595950   71146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:56:24.605686   71146 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:56:24.605775   71146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:56:24.616191   71146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:56:24.625954   71146 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:56:24.626009   71146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:56:24.636254   71146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:56:24.648853   71146 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:56:24.648961   71146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:56:24.660491   71146 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:56:24.675329   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:24.795437   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:25.895383   71146 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.099913319s)
	I0717 01:56:25.895411   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:26.116274   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:26.286149   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
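Editor's note: the restart path above re-runs individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full `kubeadm init`. A rough local sketch of that phase sequence follows, assuming `kubeadm` is on PATH and reusing the config path from the log; the real run wraps each command in sudo with an adjusted PATH and executes it over SSH on the guest.

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        config := "/var/tmp/minikube/kubeadm.yaml"
        // Same phase order as the log above.
        phases := [][]string{
            {"init", "phase", "certs", "all", "--config", config},
            {"init", "phase", "kubeconfig", "all", "--config", config},
            {"init", "phase", "kubelet-start", "--config", config},
            {"init", "phase", "control-plane", "all", "--config", config},
            {"init", "phase", "etcd", "local", "--config", config},
        }
        for _, args := range phases {
            cmd := exec.Command("kubeadm", args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                log.Fatalf("kubeadm %v failed: %v", args, err)
            }
        }
    }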
	I0717 01:56:26.355208   71146 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:56:26.355296   71146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:26.855578   71146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:27.355880   71146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:27.371616   71146 api_server.go:72] duration metric: took 1.016410291s to wait for apiserver process to appear ...
	I0717 01:56:27.371642   71146 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:56:27.371671   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:27.325875   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:27.325920   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:26.780264   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:29.279376   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:29.836783   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:29.836811   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:29.836823   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:29.883657   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:29.883684   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:29.883695   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:29.895244   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:29.895270   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:30.371799   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:30.375903   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:56:30.375926   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:56:30.872627   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:30.876799   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:56:30.876830   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:56:31.372402   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:31.376723   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 200:
	ok
	I0717 01:56:31.382638   71146 api_server.go:141] control plane version: v1.30.2
	I0717 01:56:31.382663   71146 api_server.go:131] duration metric: took 4.011014381s to wait for apiserver health ...
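Editor's note: the healthz polling above first sees 403 responses (the unauthenticated probe is rejected while RBAC bootstraps) and 500 responses (post-start hooks such as rbac/bootstrap-roles still failing) before the endpoint returns 200 "ok". Below is a minimal sketch of such a poll loop; skipping TLS verification and sending an unauthenticated request are simplifications for the sketch, not how minikube's client is configured.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "os"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns
    // HTTP 200 or the deadline passes.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // body is "ok", as in the log
                }
                _ = body // 403/500 bodies carry the failing check details seen above
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.72.225:8443/healthz", 4*time.Minute); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }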
	I0717 01:56:31.382672   71146 cni.go:84] Creating CNI manager for ""
	I0717 01:56:31.382679   71146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:56:31.384436   71146 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:56:27.476313   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:27.976700   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:28.476585   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:28.976008   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:29.477040   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:29.976892   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:30.476912   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:30.976626   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:31.476786   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:31.976148   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:31.385974   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:56:31.396977   71146 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 01:56:31.415740   71146 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:56:31.425268   71146 system_pods.go:59] 8 kube-system pods found
	I0717 01:56:31.425306   71146 system_pods.go:61] "coredns-7db6d8ff4d-wcw97" [0dd50538-f54d-43f1-bd8a-b9d3131c13f7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:56:31.425313   71146 system_pods.go:61] "etcd-embed-certs-940222" [80a0a87b-4b27-4940-b86b-6fda4a9c5168] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 01:56:31.425320   71146 system_pods.go:61] "kube-apiserver-embed-certs-940222" [566417b4-efef-46b2-8826-e15b8559e35f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 01:56:31.425328   71146 system_pods.go:61] "kube-controller-manager-embed-certs-940222" [0ec7f574-5ec3-401c-85a9-b9a9f6b7b979] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 01:56:31.425332   71146 system_pods.go:61] "kube-proxy-l58xk" [feae4e89-4900-4399-bd06-7d179280667d] Running
	I0717 01:56:31.425337   71146 system_pods.go:61] "kube-scheduler-embed-certs-940222" [d0e73061-6d3b-41fc-b2ed-e8c45e204d6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 01:56:31.425344   71146 system_pods.go:61] "metrics-server-569cc877fc-rhp7b" [07ffb1fa-240e-4c40-9ce4-93a1b51e179b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:56:31.425350   71146 system_pods.go:61] "storage-provisioner" [35aab5a5-6e1b-4572-aabe-a73fb1632252] Running
	I0717 01:56:31.425360   71146 system_pods.go:74] duration metric: took 9.598959ms to wait for pod list to return data ...
	I0717 01:56:31.425368   71146 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:56:31.429053   71146 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:56:31.429075   71146 node_conditions.go:123] node cpu capacity is 2
	I0717 01:56:31.429084   71146 node_conditions.go:105] duration metric: took 3.710466ms to run NodePressure ...
	I0717 01:56:31.429098   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:31.699456   71146 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 01:56:31.703803   71146 kubeadm.go:739] kubelet initialised
	I0717 01:56:31.703825   71146 kubeadm.go:740] duration metric: took 4.345324ms waiting for restarted kubelet to initialise ...
	I0717 01:56:31.703835   71146 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:56:31.708962   71146 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:31.712850   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.712871   71146 pod_ready.go:81] duration metric: took 3.888169ms for pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:31.712879   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.712891   71146 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:31.717134   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "etcd-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.717156   71146 pod_ready.go:81] duration metric: took 4.256764ms for pod "etcd-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:31.717163   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "etcd-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.717169   71146 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:31.721479   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.721498   71146 pod_ready.go:81] duration metric: took 4.321032ms for pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:31.721508   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.721515   71146 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:31.819188   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.819217   71146 pod_ready.go:81] duration metric: took 97.692306ms for pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:31.819226   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.819231   71146 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l58xk" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:32.219730   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "kube-proxy-l58xk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:32.219766   71146 pod_ready.go:81] duration metric: took 400.526796ms for pod "kube-proxy-l58xk" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:32.219775   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "kube-proxy-l58xk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:32.219782   71146 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:32.619930   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:32.619961   71146 pod_ready.go:81] duration metric: took 400.172543ms for pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:32.619971   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:32.619978   71146 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:33.019223   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:33.019252   71146 pod_ready.go:81] duration metric: took 399.266573ms for pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:33.019263   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:33.019271   71146 pod_ready.go:38] duration metric: took 1.315427432s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
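Editor's note: the pod_ready loop above skips each per-pod wait while the node itself reports Ready=False and records the pod for a later re-check. A condensed client-go sketch of the underlying pod Ready-condition check follows; the kubeconfig source and pod name are placeholders, not minikube's wiring.

    package main

    import (
        "context"
        "fmt"
        "os"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the named kube-system pod has its Ready
    // condition set to True, which is what the pod_ready wait looks for.
    func podIsReady(clientset *kubernetes.Clientset, name string) (bool, error) {
        pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG")) // placeholder kubeconfig
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ready, err := podIsReady(clientset, "coredns-7db6d8ff4d-wcw97")
        if err != nil {
            panic(err)
        }
        fmt.Println("ready:", ready)
    }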
	I0717 01:56:33.019291   71146 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 01:56:33.032094   71146 ops.go:34] apiserver oom_adj: -16
	I0717 01:56:33.032116   71146 kubeadm.go:597] duration metric: took 8.56004698s to restartPrimaryControlPlane
	I0717 01:56:33.032125   71146 kubeadm.go:394] duration metric: took 8.611681052s to StartCluster
	I0717 01:56:33.032140   71146 settings.go:142] acquiring lock: {Name:mk284e98550e98148329d8d8958a44beebf5635c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:33.032204   71146 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:56:33.033963   71146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:33.034198   71146 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.225 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:56:33.034337   71146 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 01:56:33.034405   71146 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-940222"
	I0717 01:56:33.034425   71146 addons.go:69] Setting metrics-server=true in profile "embed-certs-940222"
	I0717 01:56:33.034467   71146 addons.go:234] Setting addon metrics-server=true in "embed-certs-940222"
	W0717 01:56:33.034481   71146 addons.go:243] addon metrics-server should already be in state true
	I0717 01:56:33.034516   71146 host.go:66] Checking if "embed-certs-940222" exists ...
	I0717 01:56:33.034465   71146 addons.go:69] Setting default-storageclass=true in profile "embed-certs-940222"
	I0717 01:56:33.034469   71146 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-940222"
	I0717 01:56:33.034589   71146 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-940222"
	W0717 01:56:33.034632   71146 addons.go:243] addon storage-provisioner should already be in state true
	I0717 01:56:33.034411   71146 config.go:182] Loaded profile config "embed-certs-940222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:56:33.034725   71146 host.go:66] Checking if "embed-certs-940222" exists ...
	I0717 01:56:33.034963   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.034992   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.035052   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.035093   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.035199   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.035237   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.036051   71146 out.go:177] * Verifying Kubernetes components...
	I0717 01:56:33.037606   71146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:56:33.051343   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46137
	I0717 01:56:33.051970   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.052483   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.052516   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.052671   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41003
	I0717 01:56:33.052887   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.053016   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.053397   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.053443   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.053760   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.053775   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35669
	I0717 01:56:33.053779   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.054125   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.054139   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.054336   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:56:33.054625   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.054656   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.054984   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.055524   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.055563   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.057648   71146 addons.go:234] Setting addon default-storageclass=true in "embed-certs-940222"
	W0717 01:56:33.057668   71146 addons.go:243] addon default-storageclass should already be in state true
	I0717 01:56:33.057699   71146 host.go:66] Checking if "embed-certs-940222" exists ...
	I0717 01:56:33.058003   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.058036   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.070476   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38209
	I0717 01:56:33.070717   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I0717 01:56:33.071094   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.071289   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.071648   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.071665   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.071841   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.071863   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.072171   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.072293   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.072357   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:56:33.072581   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:56:33.073298   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46391
	I0717 01:56:33.073745   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.074224   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.074237   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.074585   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.074690   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:33.075032   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.075054   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.075361   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:33.077495   71146 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 01:56:33.077496   71146 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:56:33.079446   71146 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 01:56:33.079460   71146 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 01:56:33.079480   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:33.080373   71146 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:56:33.080386   71146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 01:56:33.080401   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:33.083272   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.083527   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.083623   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:33.083641   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.083899   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:33.084099   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:33.084168   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:33.084184   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.084273   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:33.084331   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:33.084463   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:33.084748   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:33.084890   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:33.085028   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:33.092382   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41853
	I0717 01:56:33.092826   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.093401   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.093418   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.094409   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.094576   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:56:33.096442   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:33.096730   71146 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 01:56:33.096750   71146 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 01:56:33.096768   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:33.099802   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.100290   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:33.100368   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.100472   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:33.100625   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:33.100760   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:33.100849   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:33.229494   71146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:56:33.246459   71146 node_ready.go:35] waiting up to 6m0s for node "embed-certs-940222" to be "Ready" ...
	I0717 01:56:33.400804   71146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 01:56:33.400824   71146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 01:56:33.411866   71146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:56:33.413220   71146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 01:56:33.426485   71146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 01:56:33.426506   71146 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 01:56:33.476707   71146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:56:33.476729   71146 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 01:56:33.539095   71146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:56:34.542027   71146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.130125192s)
	I0717 01:56:34.542089   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.542102   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.542103   71146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.128853338s)
	I0717 01:56:34.542139   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.542151   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.542420   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Closing plugin on server side
	I0717 01:56:34.542442   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.542442   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Closing plugin on server side
	I0717 01:56:34.542447   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.542450   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.542468   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.542474   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.542483   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.542505   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.542517   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.542711   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.542727   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.542715   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Closing plugin on server side
	I0717 01:56:34.542835   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.542847   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.549135   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.549160   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.549405   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.549428   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.616065   71146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.076933862s)
	I0717 01:56:34.616127   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.616142   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.616429   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.616479   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.616489   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Closing plugin on server side
	I0717 01:56:34.616499   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.616541   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.616784   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.616800   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.616810   71146 addons.go:475] Verifying addon metrics-server=true in "embed-certs-940222"
	I0717 01:56:34.619698   71146 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 01:56:32.326261   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:32.326310   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:31.779064   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:33.780671   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:32.475986   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:32.976812   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:33.476601   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:33.976667   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:34.476897   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:34.976610   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:35.476444   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:35.976859   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:36.476092   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:36.976979   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:34.620987   71146 addons.go:510] duration metric: took 1.586659462s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
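Editor's note: enabling the addons logged above (storage-provisioner, default-storageclass, metrics-server) amounts to copying manifests onto the node and applying them with the node-local kubectl under the admin kubeconfig. A simplified sketch of that apply step is below; the binary and manifest paths mirror the log, while the sudo/SSH wrapper used by the real run is omitted.

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        manifests := []string{
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml",
        }
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        cmd := exec.Command("/var/lib/minikube/binaries/v1.30.2/kubectl", args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatalf("kubectl apply failed: %v", err)
        }
    }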
	I0717 01:56:35.250360   71146 node_ready.go:53] node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:37.251933   71146 node_ready.go:53] node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:37.326685   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:37.326726   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:39.977828   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:39.977860   71603 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:39.977877   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:40.002499   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:40.002532   71603 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:36.280516   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:38.779351   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:40.324290   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:40.329888   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:56:40.329914   71603 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:56:40.824413   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:40.831375   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:56:40.831407   71603 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:56:41.324677   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:41.333259   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 200:
	ok
	I0717 01:56:41.341378   71603 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 01:56:41.341426   71603 api_server.go:131] duration metric: took 40.517438405s to wait for apiserver health ...
	I0717 01:56:41.341438   71603 cni.go:84] Creating CNI manager for ""
	I0717 01:56:41.341447   71603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:56:41.343489   71603 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:56:37.476813   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:37.976779   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:38.476554   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:38.976791   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:39.476946   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:39.976044   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:40.476526   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:40.976315   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:41.476688   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:41.976203   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:39.750483   71146 node_ready.go:53] node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:40.249907   71146 node_ready.go:49] node "embed-certs-940222" has status "Ready":"True"
	I0717 01:56:40.249934   71146 node_ready.go:38] duration metric: took 7.003442258s for node "embed-certs-940222" to be "Ready" ...
	I0717 01:56:40.249945   71146 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:56:40.255811   71146 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:40.762773   71146 pod_ready.go:92] pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:40.762795   71146 pod_ready.go:81] duration metric: took 506.956885ms for pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:40.762806   71146 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:42.768945   71146 pod_ready.go:102] pod "etcd-embed-certs-940222" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:41.344846   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:56:41.360339   71603 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 01:56:41.385845   71603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:56:41.409812   71603 system_pods.go:59] 8 kube-system pods found
	I0717 01:56:41.409843   71603 system_pods.go:61] "coredns-5cfdc65f69-ztqz8" [7c9caec8-56b6-4faa-9410-0528f108696c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:56:41.409849   71603 system_pods.go:61] "etcd-no-preload-391501" [603f01a1-2b07-4d1d-be14-4da4a9f1e1b2] Running
	I0717 01:56:41.409854   71603 system_pods.go:61] "kube-apiserver-no-preload-391501" [7733c5b6-5e30-472b-920d-3849f2849f7b] Running
	I0717 01:56:41.409860   71603 system_pods.go:61] "kube-controller-manager-no-preload-391501" [c1afab7e-9b46-4940-94ec-e62ebc10f406] Running
	I0717 01:56:41.409865   71603 system_pods.go:61] "kube-proxy-zbqhw" [26056c12-35cd-4a3e-b40a-1eca055bd1e2] Running
	I0717 01:56:41.409869   71603 system_pods.go:61] "kube-scheduler-no-preload-391501" [98f81994-9d2a-45b8-9719-90e181ee5d6f] Running
	I0717 01:56:41.409877   71603 system_pods.go:61] "metrics-server-78fcd8795b-g9x96" [86a6a2c3-ae04-486d-9751-0cc801f9fbfb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:56:41.409887   71603 system_pods.go:61] "storage-provisioner" [8b938905-d8e1-4129-8426-5e31a05d38db] Running
	I0717 01:56:41.409895   71603 system_pods.go:74] duration metric: took 24.018074ms to wait for pod list to return data ...
	I0717 01:56:41.409906   71603 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:56:41.418825   71603 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:56:41.418856   71603 node_conditions.go:123] node cpu capacity is 2
	I0717 01:56:41.418868   71603 node_conditions.go:105] duration metric: took 8.953821ms to run NodePressure ...
	I0717 01:56:41.418892   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:41.713730   71603 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 01:56:41.719162   71603 retry.go:31] will retry after 180.435127ms: kubelet not initialised
	I0717 01:56:41.906299   71603 retry.go:31] will retry after 320.946038ms: kubelet not initialised
	I0717 01:56:42.232875   71603 retry.go:31] will retry after 423.072333ms: kubelet not initialised
	I0717 01:56:42.661412   71603 retry.go:31] will retry after 1.138026932s: kubelet not initialised
	I0717 01:56:43.809525   71603 retry.go:31] will retry after 1.187704503s: kubelet not initialised
	I0717 01:56:45.009815   71603 kubeadm.go:739] kubelet initialised
	I0717 01:56:45.009839   71603 kubeadm.go:740] duration metric: took 3.296082732s waiting for restarted kubelet to initialise ...
	I0717 01:56:45.009850   71603 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:56:45.021149   71603 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:40.780159   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:43.279699   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:45.280407   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:42.476301   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:42.976939   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:43.477021   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:43.976910   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:44.476766   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:44.976415   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:45.476987   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:45.976666   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:46.476735   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:46.976643   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:44.770078   71146 pod_ready.go:102] pod "etcd-embed-certs-940222" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:47.269496   71146 pod_ready.go:92] pod "etcd-embed-certs-940222" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.269524   71146 pod_ready.go:81] duration metric: took 6.506711113s for pod "etcd-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.269538   71146 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.277267   71146 pod_ready.go:92] pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.277294   71146 pod_ready.go:81] duration metric: took 7.747271ms for pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.277309   71146 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.286697   71146 pod_ready.go:92] pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.286715   71146 pod_ready.go:81] duration metric: took 9.397698ms for pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.286723   71146 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l58xk" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.291876   71146 pod_ready.go:92] pod "kube-proxy-l58xk" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.291897   71146 pod_ready.go:81] duration metric: took 5.168432ms for pod "kube-proxy-l58xk" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.291905   71146 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.296201   71146 pod_ready.go:92] pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.296215   71146 pod_ready.go:81] duration metric: took 4.304055ms for pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.296222   71146 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.027495   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:49.028127   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:47.779497   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:50.279065   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:47.476576   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:47.976502   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:48.476634   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:48.976299   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:49.476069   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:49.976086   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:50.476859   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:50.976441   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:51.476217   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:51.976585   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:49.303729   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:51.802778   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:51.029194   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:53.528363   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:52.778915   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:54.780173   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:52.476652   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:52.976136   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:53.476991   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:53.976168   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:54.477049   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:54.976279   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:55.476176   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:55.976049   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:56.476464   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:56.976802   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:54.308491   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:56.802797   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:55.528547   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:57.533612   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:00.030406   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:57.278908   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:59.279393   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:57.476661   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:57.976021   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:58.477049   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:58.976940   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:59.476773   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:59.976397   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:00.476591   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:00.976189   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:01.476917   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:01.976263   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:58.806045   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:00.807112   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:02.529203   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:05.028677   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:01.779903   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:03.780163   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:02.476048   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:02.976019   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:03.476604   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:03.976602   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:04.477004   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:04.976726   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:05.476934   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:05.975985   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:06.476331   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:06.976185   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:03.302031   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:05.303601   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:07.803763   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:07.528021   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:09.528499   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:05.780204   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:08.279630   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:07.476887   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:07.975972   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:08.476034   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:08.976678   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:09.476927   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:09.477010   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:09.513328   71929 cri.go:89] found id: ""
	I0717 01:57:09.513352   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.513361   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:09.513368   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:09.513418   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:09.551203   71929 cri.go:89] found id: ""
	I0717 01:57:09.551228   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.551237   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:09.551244   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:09.551308   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:09.585321   71929 cri.go:89] found id: ""
	I0717 01:57:09.585352   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.585363   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:09.585370   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:09.585427   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:09.623977   71929 cri.go:89] found id: ""
	I0717 01:57:09.624004   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.624012   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:09.624019   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:09.624078   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:09.663338   71929 cri.go:89] found id: ""
	I0717 01:57:09.663367   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.663374   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:09.663380   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:09.663425   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:09.696381   71929 cri.go:89] found id: ""
	I0717 01:57:09.696412   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.696423   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:09.696436   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:09.696482   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:09.735892   71929 cri.go:89] found id: ""
	I0717 01:57:09.735922   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.735932   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:09.735944   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:09.736006   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:09.775878   71929 cri.go:89] found id: ""
	I0717 01:57:09.775909   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.775919   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:09.775929   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:09.775942   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:09.830021   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:09.830057   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:09.844753   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:09.844783   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:09.985140   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:09.985165   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:09.985179   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:10.049946   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:10.049984   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:10.310038   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:12.805565   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:11.529122   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:14.028939   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:10.779935   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:13.278388   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:15.280027   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:12.592959   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:12.608385   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:12.608467   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:12.649900   71929 cri.go:89] found id: ""
	I0717 01:57:12.649931   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.649942   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:12.649950   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:12.650021   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:12.684915   71929 cri.go:89] found id: ""
	I0717 01:57:12.684941   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.684948   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:12.684956   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:12.685010   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:12.727718   71929 cri.go:89] found id: ""
	I0717 01:57:12.727758   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.727766   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:12.727788   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:12.727864   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:12.767212   71929 cri.go:89] found id: ""
	I0717 01:57:12.767236   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.767244   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:12.767249   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:12.767295   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:12.806301   71929 cri.go:89] found id: ""
	I0717 01:57:12.806320   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.806327   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:12.806332   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:12.806405   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:12.843118   71929 cri.go:89] found id: ""
	I0717 01:57:12.843151   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.843162   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:12.843170   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:12.843245   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:12.876671   71929 cri.go:89] found id: ""
	I0717 01:57:12.876697   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.876707   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:12.876714   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:12.876790   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:12.916201   71929 cri.go:89] found id: ""
	I0717 01:57:12.916226   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.916232   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:12.916240   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:12.916250   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:12.970346   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:12.970385   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:12.985029   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:12.985053   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:13.068314   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:13.068340   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:13.068352   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:13.147862   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:13.147897   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:15.703130   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:15.717081   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:15.717160   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:15.757513   71929 cri.go:89] found id: ""
	I0717 01:57:15.757538   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.757545   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:15.757552   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:15.757599   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:15.794185   71929 cri.go:89] found id: ""
	I0717 01:57:15.794218   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.794231   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:15.794238   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:15.794300   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:15.830589   71929 cri.go:89] found id: ""
	I0717 01:57:15.830619   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.830628   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:15.830634   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:15.830694   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:15.869673   71929 cri.go:89] found id: ""
	I0717 01:57:15.869702   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.869713   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:15.869720   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:15.869782   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:15.909225   71929 cri.go:89] found id: ""
	I0717 01:57:15.909257   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.909267   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:15.909278   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:15.909343   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:15.944389   71929 cri.go:89] found id: ""
	I0717 01:57:15.944417   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.944424   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:15.944430   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:15.944490   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:15.982871   71929 cri.go:89] found id: ""
	I0717 01:57:15.982898   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.982907   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:15.982915   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:15.982983   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:16.025674   71929 cri.go:89] found id: ""
	I0717 01:57:16.025701   71929 logs.go:276] 0 containers: []
	W0717 01:57:16.025711   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:16.025721   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:16.025736   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:16.111608   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:16.111627   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:16.111638   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:16.184650   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:16.184689   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:16.230647   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:16.230693   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:16.286675   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:16.286710   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:15.303141   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:17.304891   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:16.029794   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:18.529463   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:17.780034   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:20.279882   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:18.802487   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:18.817483   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:18.817562   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:18.861623   71929 cri.go:89] found id: ""
	I0717 01:57:18.861653   71929 logs.go:276] 0 containers: []
	W0717 01:57:18.861664   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:18.861671   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:18.861733   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:18.901335   71929 cri.go:89] found id: ""
	I0717 01:57:18.901359   71929 logs.go:276] 0 containers: []
	W0717 01:57:18.901367   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:18.901372   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:18.901427   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:18.936477   71929 cri.go:89] found id: ""
	I0717 01:57:18.936508   71929 logs.go:276] 0 containers: []
	W0717 01:57:18.936518   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:18.936524   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:18.936581   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:18.971056   71929 cri.go:89] found id: ""
	I0717 01:57:18.971087   71929 logs.go:276] 0 containers: []
	W0717 01:57:18.971098   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:18.971106   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:18.971157   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:19.005399   71929 cri.go:89] found id: ""
	I0717 01:57:19.005431   71929 logs.go:276] 0 containers: []
	W0717 01:57:19.005453   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:19.005460   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:19.005525   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:19.040218   71929 cri.go:89] found id: ""
	I0717 01:57:19.040242   71929 logs.go:276] 0 containers: []
	W0717 01:57:19.040250   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:19.040257   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:19.040317   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:19.073365   71929 cri.go:89] found id: ""
	I0717 01:57:19.073392   71929 logs.go:276] 0 containers: []
	W0717 01:57:19.073402   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:19.073409   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:19.073471   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:19.108670   71929 cri.go:89] found id: ""
	I0717 01:57:19.108701   71929 logs.go:276] 0 containers: []
	W0717 01:57:19.108713   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:19.108725   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:19.108743   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:19.186077   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:19.186111   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:19.232181   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:19.232214   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:19.288713   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:19.288755   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:19.303089   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:19.303115   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:19.386372   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:21.886666   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:21.900905   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:21.900966   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:21.934955   71929 cri.go:89] found id: ""
	I0717 01:57:21.934979   71929 logs.go:276] 0 containers: []
	W0717 01:57:21.934987   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:21.934993   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:21.935036   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:21.972180   71929 cri.go:89] found id: ""
	I0717 01:57:21.972203   71929 logs.go:276] 0 containers: []
	W0717 01:57:21.972211   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:21.972217   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:21.972271   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:22.010452   71929 cri.go:89] found id: ""
	I0717 01:57:22.010479   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.010487   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:22.010493   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:22.010547   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:22.045824   71929 cri.go:89] found id: ""
	I0717 01:57:22.045888   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.045902   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:22.045911   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:22.045984   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:22.084734   71929 cri.go:89] found id: ""
	I0717 01:57:22.084760   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.084769   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:22.084774   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:22.084842   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:22.119808   71929 cri.go:89] found id: ""
	I0717 01:57:22.119838   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.119846   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:22.119852   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:22.119910   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:22.157537   71929 cri.go:89] found id: ""
	I0717 01:57:22.157583   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.157610   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:22.157620   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:22.157687   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:22.196021   71929 cri.go:89] found id: ""
	I0717 01:57:22.196052   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.196062   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:22.196079   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:22.196094   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:22.274350   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:22.274373   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:22.274386   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:22.364363   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:22.364401   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:19.803506   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:22.306698   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:21.028767   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:23.527943   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:24.529027   71603 pod_ready.go:92] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.529064   71603 pod_ready.go:81] duration metric: took 39.50788355s for pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.529078   71603 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.534655   71603 pod_ready.go:92] pod "etcd-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.534680   71603 pod_ready.go:81] duration metric: took 5.594492ms for pod "etcd-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.534691   71603 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.539602   71603 pod_ready.go:92] pod "kube-apiserver-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.539622   71603 pod_ready.go:81] duration metric: took 4.923891ms for pod "kube-apiserver-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.539631   71603 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.544475   71603 pod_ready.go:92] pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.544516   71603 pod_ready.go:81] duration metric: took 4.862078ms for pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.544532   71603 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zbqhw" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.549173   71603 pod_ready.go:92] pod "kube-proxy-zbqhw" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.549193   71603 pod_ready.go:81] duration metric: took 4.653986ms for pod "kube-proxy-zbqhw" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.549203   71603 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.925916   71603 pod_ready.go:92] pod "kube-scheduler-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.925944   71603 pod_ready.go:81] duration metric: took 376.73343ms for pod "kube-scheduler-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.925959   71603 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:22.779802   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:25.280281   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:22.410052   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:22.410092   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:22.462289   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:22.462326   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:24.978560   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:24.992533   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:24.992601   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:25.027708   71929 cri.go:89] found id: ""
	I0717 01:57:25.027746   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.027754   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:25.027760   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:25.027809   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:25.066946   71929 cri.go:89] found id: ""
	I0717 01:57:25.066974   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.066985   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:25.066992   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:25.067051   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:25.107209   71929 cri.go:89] found id: ""
	I0717 01:57:25.107238   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.107248   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:25.107254   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:25.107300   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:25.141548   71929 cri.go:89] found id: ""
	I0717 01:57:25.141577   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.141587   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:25.141594   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:25.141652   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:25.175822   71929 cri.go:89] found id: ""
	I0717 01:57:25.175853   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.175861   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:25.175866   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:25.175917   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:25.215672   71929 cri.go:89] found id: ""
	I0717 01:57:25.215705   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.215718   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:25.215726   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:25.215786   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:25.260392   71929 cri.go:89] found id: ""
	I0717 01:57:25.260422   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.260434   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:25.260442   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:25.260510   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:25.309953   71929 cri.go:89] found id: ""
	I0717 01:57:25.309981   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.309990   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:25.309999   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:25.310013   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:25.414204   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:25.414229   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:25.414244   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:25.501849   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:25.501883   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:25.545129   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:25.545163   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:25.599948   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:25.599984   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:24.803870   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:27.302993   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:26.932319   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:28.932999   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:27.280455   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:29.778817   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:28.115776   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:28.129710   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:28.129776   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:28.165380   71929 cri.go:89] found id: ""
	I0717 01:57:28.165409   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.165419   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:28.165425   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:28.165473   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:28.199225   71929 cri.go:89] found id: ""
	I0717 01:57:28.199251   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.199259   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:28.199264   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:28.199314   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:28.235564   71929 cri.go:89] found id: ""
	I0717 01:57:28.235585   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.235593   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:28.235598   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:28.235649   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:28.270377   71929 cri.go:89] found id: ""
	I0717 01:57:28.270409   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.270427   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:28.270435   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:28.270488   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:28.310132   71929 cri.go:89] found id: ""
	I0717 01:57:28.310156   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.310163   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:28.310168   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:28.310222   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:28.347590   71929 cri.go:89] found id: ""
	I0717 01:57:28.347619   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.347630   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:28.347638   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:28.347696   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:28.387953   71929 cri.go:89] found id: ""
	I0717 01:57:28.387988   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.388001   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:28.388010   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:28.388072   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:28.428788   71929 cri.go:89] found id: ""
	I0717 01:57:28.428811   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.428818   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:28.428826   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:28.428838   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:28.487411   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:28.487465   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:28.501121   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:28.501152   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:28.576296   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:28.576320   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:28.576335   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:28.660246   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:28.660288   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:31.201238   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:31.221132   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:31.221192   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:31.279839   71929 cri.go:89] found id: ""
	I0717 01:57:31.279867   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.279876   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:31.279884   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:31.279943   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:31.359764   71929 cri.go:89] found id: ""
	I0717 01:57:31.359796   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.359807   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:31.359814   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:31.359873   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:31.397045   71929 cri.go:89] found id: ""
	I0717 01:57:31.397077   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.397087   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:31.397094   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:31.397157   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:31.441356   71929 cri.go:89] found id: ""
	I0717 01:57:31.441388   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.441397   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:31.441404   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:31.441459   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:31.484014   71929 cri.go:89] found id: ""
	I0717 01:57:31.484040   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.484053   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:31.484060   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:31.484124   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:31.520686   71929 cri.go:89] found id: ""
	I0717 01:57:31.520714   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.520725   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:31.520733   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:31.520792   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:31.557300   71929 cri.go:89] found id: ""
	I0717 01:57:31.557326   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.557334   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:31.557339   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:31.557387   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:31.597753   71929 cri.go:89] found id: ""
	I0717 01:57:31.597782   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.597792   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:31.597804   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:31.597818   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:31.656796   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:31.656837   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:31.671287   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:31.671311   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:31.742752   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:31.742772   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:31.742784   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:31.828154   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:31.828186   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:29.303279   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:31.303332   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:31.434410   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:33.932319   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:31.778853   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:33.780535   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:34.368947   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:34.384323   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:34.384402   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:34.421138   71929 cri.go:89] found id: ""
	I0717 01:57:34.421171   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.421182   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:34.421190   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:34.421263   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:34.459077   71929 cri.go:89] found id: ""
	I0717 01:57:34.459105   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.459116   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:34.459123   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:34.459180   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:34.492987   71929 cri.go:89] found id: ""
	I0717 01:57:34.493016   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.493027   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:34.493038   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:34.493098   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:34.527801   71929 cri.go:89] found id: ""
	I0717 01:57:34.527827   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.527836   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:34.527841   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:34.527890   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:34.562877   71929 cri.go:89] found id: ""
	I0717 01:57:34.562904   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.562914   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:34.562921   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:34.562981   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:34.599387   71929 cri.go:89] found id: ""
	I0717 01:57:34.599409   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.599417   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:34.599423   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:34.599479   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:34.636087   71929 cri.go:89] found id: ""
	I0717 01:57:34.636118   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.636126   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:34.636132   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:34.636194   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:34.673168   71929 cri.go:89] found id: ""
	I0717 01:57:34.673196   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.673206   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:34.673214   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:34.673226   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:34.712833   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:34.712864   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:34.765926   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:34.765959   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:34.780024   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:34.780049   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:34.863080   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:34.863106   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:34.863122   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:33.803621   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:36.306114   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:35.933050   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:38.432520   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:36.280143   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:38.779168   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:37.446644   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:37.463015   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:37.463090   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:37.499563   71929 cri.go:89] found id: ""
	I0717 01:57:37.499592   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.499601   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:37.499607   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:37.499663   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:37.538516   71929 cri.go:89] found id: ""
	I0717 01:57:37.538543   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.538572   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:37.538579   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:37.538638   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:37.577032   71929 cri.go:89] found id: ""
	I0717 01:57:37.577061   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.577068   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:37.577074   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:37.577129   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:37.613534   71929 cri.go:89] found id: ""
	I0717 01:57:37.613563   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.613574   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:37.613582   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:37.613646   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:37.651346   71929 cri.go:89] found id: ""
	I0717 01:57:37.651370   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.651381   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:37.651389   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:37.651451   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:37.685949   71929 cri.go:89] found id: ""
	I0717 01:57:37.685989   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.686001   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:37.686008   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:37.686068   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:37.721706   71929 cri.go:89] found id: ""
	I0717 01:57:37.721744   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.721752   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:37.721759   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:37.721812   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:37.758948   71929 cri.go:89] found id: ""
	I0717 01:57:37.758976   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.758985   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:37.758994   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:37.759005   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:37.835305   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:37.835334   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:37.835349   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:37.916627   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:37.916660   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:37.956819   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:37.956851   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:38.007596   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:38.007641   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:40.522573   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:40.536850   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:40.536924   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:40.576172   71929 cri.go:89] found id: ""
	I0717 01:57:40.576200   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.576211   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:40.576218   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:40.576277   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:40.611926   71929 cri.go:89] found id: ""
	I0717 01:57:40.611958   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.611969   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:40.611976   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:40.612039   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:40.647225   71929 cri.go:89] found id: ""
	I0717 01:57:40.647251   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.647259   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:40.647265   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:40.647315   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:40.683871   71929 cri.go:89] found id: ""
	I0717 01:57:40.683902   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.683917   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:40.683925   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:40.683999   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:40.720941   71929 cri.go:89] found id: ""
	I0717 01:57:40.720971   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.720982   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:40.720989   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:40.721053   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:40.756695   71929 cri.go:89] found id: ""
	I0717 01:57:40.756728   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.756739   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:40.756746   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:40.756801   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:40.794181   71929 cri.go:89] found id: ""
	I0717 01:57:40.794214   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.794221   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:40.794226   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:40.794281   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:40.830361   71929 cri.go:89] found id: ""
	I0717 01:57:40.830396   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.830407   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:40.830417   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:40.830436   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:40.844827   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:40.844849   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:40.913003   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:40.913021   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:40.913035   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:40.996314   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:40.996348   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:41.041120   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:41.041151   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:38.801850   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:40.802727   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:42.802814   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:40.934130   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:43.432799   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:40.780350   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:43.279200   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:45.279971   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:43.593226   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:43.606395   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:43.606461   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:43.646260   71929 cri.go:89] found id: ""
	I0717 01:57:43.646290   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.646302   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:43.646310   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:43.646368   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:43.681148   71929 cri.go:89] found id: ""
	I0717 01:57:43.681174   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.681182   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:43.681189   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:43.681250   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:43.716568   71929 cri.go:89] found id: ""
	I0717 01:57:43.716595   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.716606   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:43.716613   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:43.716675   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:43.750507   71929 cri.go:89] found id: ""
	I0717 01:57:43.750536   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.750558   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:43.750566   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:43.750627   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:43.787207   71929 cri.go:89] found id: ""
	I0717 01:57:43.787234   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.787244   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:43.787251   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:43.787311   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:43.822997   71929 cri.go:89] found id: ""
	I0717 01:57:43.823034   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.823045   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:43.823052   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:43.823118   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:43.860605   71929 cri.go:89] found id: ""
	I0717 01:57:43.860632   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.860640   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:43.860646   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:43.860702   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:43.897419   71929 cri.go:89] found id: ""
	I0717 01:57:43.897451   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.897463   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:43.897473   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:43.897492   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:43.956361   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:43.956393   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:43.971077   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:43.971104   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:44.045234   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:44.045258   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:44.045275   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:44.122508   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:44.122544   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:46.660516   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:46.675555   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:46.675651   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:46.709264   71929 cri.go:89] found id: ""
	I0717 01:57:46.709291   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.709300   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:46.709306   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:46.709362   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:46.744865   71929 cri.go:89] found id: ""
	I0717 01:57:46.744898   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.744908   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:46.744915   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:46.744971   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:46.785837   71929 cri.go:89] found id: ""
	I0717 01:57:46.785860   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.785870   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:46.785878   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:46.785932   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:46.828801   71929 cri.go:89] found id: ""
	I0717 01:57:46.828832   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.828842   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:46.828849   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:46.828907   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:46.863122   71929 cri.go:89] found id: ""
	I0717 01:57:46.863151   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.863162   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:46.863175   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:46.863232   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:46.900705   71929 cri.go:89] found id: ""
	I0717 01:57:46.900731   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.900739   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:46.900744   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:46.900790   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:46.935774   71929 cri.go:89] found id: ""
	I0717 01:57:46.935816   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.935829   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:46.935840   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:46.935895   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:46.969274   71929 cri.go:89] found id: ""
	I0717 01:57:46.969304   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.969315   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:46.969325   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:46.969339   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:47.040318   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:47.040343   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:47.040358   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:47.119920   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:47.119954   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:47.168818   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:47.168847   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:47.221983   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:47.222034   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:45.303812   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:47.304051   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:45.433020   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:47.932755   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:49.936075   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:47.780328   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:49.781850   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:49.736564   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:49.749966   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:49.750025   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:49.788294   71929 cri.go:89] found id: ""
	I0717 01:57:49.788321   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.788332   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:49.788339   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:49.788396   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:49.826406   71929 cri.go:89] found id: ""
	I0717 01:57:49.826431   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.826440   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:49.826445   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:49.826491   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:49.864978   71929 cri.go:89] found id: ""
	I0717 01:57:49.865005   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.865015   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:49.865020   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:49.865074   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:49.901238   71929 cri.go:89] found id: ""
	I0717 01:57:49.901270   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.901281   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:49.901300   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:49.901366   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:49.937035   71929 cri.go:89] found id: ""
	I0717 01:57:49.937058   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.937065   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:49.937070   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:49.937207   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:49.977793   71929 cri.go:89] found id: ""
	I0717 01:57:49.977816   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.977823   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:49.977828   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:49.977873   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:50.012915   71929 cri.go:89] found id: ""
	I0717 01:57:50.012942   71929 logs.go:276] 0 containers: []
	W0717 01:57:50.012952   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:50.012959   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:50.013025   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:50.049085   71929 cri.go:89] found id: ""
	I0717 01:57:50.049115   71929 logs.go:276] 0 containers: []
	W0717 01:57:50.049127   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:50.049138   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:50.049156   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:50.087521   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:50.087549   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:50.140934   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:50.140978   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:50.156001   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:50.156033   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:50.231780   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:50.231811   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:50.231835   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:49.802916   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:51.803036   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:52.432307   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:54.432384   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:52.278585   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:54.279641   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:52.810064   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:52.823442   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:52.823508   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:52.860753   71929 cri.go:89] found id: ""
	I0717 01:57:52.860778   71929 logs.go:276] 0 containers: []
	W0717 01:57:52.860789   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:52.860797   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:52.860852   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:52.896264   71929 cri.go:89] found id: ""
	I0717 01:57:52.896289   71929 logs.go:276] 0 containers: []
	W0717 01:57:52.896297   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:52.896303   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:52.896349   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:52.932613   71929 cri.go:89] found id: ""
	I0717 01:57:52.932640   71929 logs.go:276] 0 containers: []
	W0717 01:57:52.932649   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:52.932657   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:52.932722   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:52.969691   71929 cri.go:89] found id: ""
	I0717 01:57:52.969720   71929 logs.go:276] 0 containers: []
	W0717 01:57:52.969728   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:52.969734   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:52.969788   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:53.007039   71929 cri.go:89] found id: ""
	I0717 01:57:53.007067   71929 logs.go:276] 0 containers: []
	W0717 01:57:53.007075   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:53.007081   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:53.007135   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:53.047736   71929 cri.go:89] found id: ""
	I0717 01:57:53.047762   71929 logs.go:276] 0 containers: []
	W0717 01:57:53.047772   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:53.047778   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:53.047838   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:53.083192   71929 cri.go:89] found id: ""
	I0717 01:57:53.083216   71929 logs.go:276] 0 containers: []
	W0717 01:57:53.083225   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:53.083230   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:53.083276   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:53.118509   71929 cri.go:89] found id: ""
	I0717 01:57:53.118536   71929 logs.go:276] 0 containers: []
	W0717 01:57:53.118545   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:53.118564   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:53.118589   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:53.203003   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:53.203039   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:53.244602   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:53.244627   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:53.295180   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:53.295216   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:53.310777   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:53.310805   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:53.389412   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:55.890450   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:55.903768   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:55.903843   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:55.944148   71929 cri.go:89] found id: ""
	I0717 01:57:55.944171   71929 logs.go:276] 0 containers: []
	W0717 01:57:55.944179   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:55.944185   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:55.944231   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:55.979945   71929 cri.go:89] found id: ""
	I0717 01:57:55.979970   71929 logs.go:276] 0 containers: []
	W0717 01:57:55.979980   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:55.979987   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:55.980045   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:56.019057   71929 cri.go:89] found id: ""
	I0717 01:57:56.019089   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.019100   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:56.019107   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:56.019162   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:56.054343   71929 cri.go:89] found id: ""
	I0717 01:57:56.054369   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.054378   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:56.054383   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:56.054434   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:56.091150   71929 cri.go:89] found id: ""
	I0717 01:57:56.091179   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.091189   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:56.091197   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:56.091256   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:56.127502   71929 cri.go:89] found id: ""
	I0717 01:57:56.127528   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.127538   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:56.127547   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:56.127602   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:56.167935   71929 cri.go:89] found id: ""
	I0717 01:57:56.167961   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.167972   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:56.167979   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:56.168048   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:56.209501   71929 cri.go:89] found id: ""
	I0717 01:57:56.209527   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.209537   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:56.209547   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:56.209561   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:56.257989   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:56.258023   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:56.272491   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:56.272519   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:56.361622   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:56.361653   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:56.361668   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:56.442953   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:56.442992   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:54.302376   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:56.303297   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:56.933123   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:58.933242   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:56.280399   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:58.779285   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:58.983914   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:58.997215   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:58.997292   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:59.032937   71929 cri.go:89] found id: ""
	I0717 01:57:59.032964   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.032980   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:59.032996   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:59.033057   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:59.067790   71929 cri.go:89] found id: ""
	I0717 01:57:59.067811   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.067819   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:59.067825   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:59.067881   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:59.107659   71929 cri.go:89] found id: ""
	I0717 01:57:59.107689   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.107699   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:59.107705   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:59.107754   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:59.150134   71929 cri.go:89] found id: ""
	I0717 01:57:59.150158   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.150168   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:59.150175   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:59.150235   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:59.192351   71929 cri.go:89] found id: ""
	I0717 01:57:59.192381   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.192391   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:59.192398   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:59.192460   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:59.228177   71929 cri.go:89] found id: ""
	I0717 01:57:59.228202   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.228209   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:59.228215   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:59.228261   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:59.267016   71929 cri.go:89] found id: ""
	I0717 01:57:59.267043   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.267052   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:59.267058   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:59.267109   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:59.302235   71929 cri.go:89] found id: ""
	I0717 01:57:59.302257   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.302263   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:59.302273   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:59.302285   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:59.368453   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:59.368492   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:59.383375   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:59.383399   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:59.454946   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:59.454975   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:59.454992   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:59.539576   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:59.539609   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:02.085516   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:02.099848   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:02.099909   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:02.136835   71929 cri.go:89] found id: ""
	I0717 01:58:02.136859   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.136867   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:02.136872   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:02.136928   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:02.175304   71929 cri.go:89] found id: ""
	I0717 01:58:02.175331   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.175338   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:02.175344   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:02.175389   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:02.210922   71929 cri.go:89] found id: ""
	I0717 01:58:02.210947   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.210955   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:02.210961   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:02.211018   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:02.246952   71929 cri.go:89] found id: ""
	I0717 01:58:02.246983   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.246992   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:02.246999   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:02.247053   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:02.284857   71929 cri.go:89] found id: ""
	I0717 01:58:02.284883   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.284892   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:02.284897   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:02.284944   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:02.322941   71929 cri.go:89] found id: ""
	I0717 01:58:02.322978   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.322999   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:02.323007   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:02.323065   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:02.357904   71929 cri.go:89] found id: ""
	I0717 01:58:02.357932   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.357943   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:02.357950   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:02.358012   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:02.392291   71929 cri.go:89] found id: ""
	I0717 01:58:02.392315   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.392322   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:02.392331   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:02.392346   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:58.802622   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:01.303663   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:01.433212   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:03.433962   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:00.779479   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:02.779619   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:05.279590   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:02.447670   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:02.447704   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:02.462259   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:02.462284   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:02.534304   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:02.534332   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:02.534347   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:02.612757   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:02.612799   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:05.153573   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:05.166702   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:05.166775   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:05.205213   71929 cri.go:89] found id: ""
	I0717 01:58:05.205238   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.205247   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:05.205252   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:05.205305   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:05.242021   71929 cri.go:89] found id: ""
	I0717 01:58:05.242048   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.242057   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:05.242063   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:05.242118   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:05.281862   71929 cri.go:89] found id: ""
	I0717 01:58:05.281889   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.281900   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:05.281908   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:05.281967   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:05.318125   71929 cri.go:89] found id: ""
	I0717 01:58:05.318157   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.318169   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:05.318177   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:05.318244   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:05.352470   71929 cri.go:89] found id: ""
	I0717 01:58:05.352504   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.352516   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:05.352524   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:05.352595   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:05.386692   71929 cri.go:89] found id: ""
	I0717 01:58:05.386722   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.386733   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:05.386741   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:05.386803   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:05.426676   71929 cri.go:89] found id: ""
	I0717 01:58:05.426731   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.426744   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:05.426751   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:05.426811   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:05.467974   71929 cri.go:89] found id: ""
	I0717 01:58:05.468000   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.468010   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:05.468020   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:05.468036   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:05.506769   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:05.506797   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:05.561745   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:05.561782   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:05.576743   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:05.576775   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:05.652856   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:05.652887   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:05.652903   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:03.304109   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:05.803632   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:05.434411   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:07.931796   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:09.932902   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:07.779196   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:09.779591   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:08.244185   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:08.257343   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:08.257420   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:08.297136   71929 cri.go:89] found id: ""
	I0717 01:58:08.297163   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.297174   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:08.297181   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:08.297237   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:08.336099   71929 cri.go:89] found id: ""
	I0717 01:58:08.336121   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.336129   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:08.336135   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:08.336185   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:08.369668   71929 cri.go:89] found id: ""
	I0717 01:58:08.369690   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.369698   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:08.369706   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:08.369756   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:08.405140   71929 cri.go:89] found id: ""
	I0717 01:58:08.405171   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.405179   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:08.405186   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:08.405249   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:08.446296   71929 cri.go:89] found id: ""
	I0717 01:58:08.446319   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.446326   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:08.446331   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:08.446377   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:08.483004   71929 cri.go:89] found id: ""
	I0717 01:58:08.483042   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.483062   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:08.483070   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:08.483139   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:08.520668   71929 cri.go:89] found id: ""
	I0717 01:58:08.520699   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.520710   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:08.520717   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:08.520776   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:08.554711   71929 cri.go:89] found id: ""
	I0717 01:58:08.554734   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.554744   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:08.554752   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:08.554763   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:08.606972   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:08.607004   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:08.621102   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:08.621134   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:08.690424   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:08.690443   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:08.690454   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:08.775151   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:08.775193   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:11.318471   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:11.331875   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:11.331954   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:11.375766   71929 cri.go:89] found id: ""
	I0717 01:58:11.375787   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.375795   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:11.375801   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:11.375863   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:11.417043   71929 cri.go:89] found id: ""
	I0717 01:58:11.417080   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.417103   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:11.417111   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:11.417169   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:11.459462   71929 cri.go:89] found id: ""
	I0717 01:58:11.459487   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.459495   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:11.459500   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:11.459551   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:11.516500   71929 cri.go:89] found id: ""
	I0717 01:58:11.516525   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.516533   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:11.516539   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:11.516590   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:11.573916   71929 cri.go:89] found id: ""
	I0717 01:58:11.573961   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.575159   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:11.575201   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:11.575275   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:11.619446   71929 cri.go:89] found id: ""
	I0717 01:58:11.619477   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.619489   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:11.619497   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:11.619558   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:11.654766   71929 cri.go:89] found id: ""
	I0717 01:58:11.654793   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.654802   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:11.654807   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:11.654859   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:11.690306   71929 cri.go:89] found id: ""
	I0717 01:58:11.690335   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.690346   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:11.690354   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:11.690366   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:11.744470   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:11.744516   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:11.758824   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:11.758856   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:11.841028   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:11.841058   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:11.841076   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:11.923299   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:11.923351   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:08.303010   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:10.303678   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:12.803090   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:11.933148   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:14.433109   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:12.280292   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:14.281580   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:14.466666   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:14.479676   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:14.479740   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:14.517890   71929 cri.go:89] found id: ""
	I0717 01:58:14.517919   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.517931   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:14.517938   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:14.517998   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:14.552891   71929 cri.go:89] found id: ""
	I0717 01:58:14.552918   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.552926   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:14.552931   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:14.552992   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:14.593571   71929 cri.go:89] found id: ""
	I0717 01:58:14.593596   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.593604   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:14.593609   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:14.593662   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:14.628869   71929 cri.go:89] found id: ""
	I0717 01:58:14.628897   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.628907   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:14.628913   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:14.628972   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:14.663558   71929 cri.go:89] found id: ""
	I0717 01:58:14.663586   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.663593   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:14.663599   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:14.663644   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:14.700788   71929 cri.go:89] found id: ""
	I0717 01:58:14.700824   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.700834   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:14.700843   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:14.700903   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:14.737975   71929 cri.go:89] found id: ""
	I0717 01:58:14.738014   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.738025   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:14.738032   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:14.738091   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:14.775419   71929 cri.go:89] found id: ""
	I0717 01:58:14.775443   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.775453   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:14.775465   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:14.775479   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:14.817635   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:14.817661   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:14.870667   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:14.870705   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:14.885208   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:14.885235   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:14.962286   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:14.962318   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:14.962334   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:14.803624   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:17.303944   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:16.434108   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:18.934577   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:16.779538   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:18.780694   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:17.537546   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:17.550258   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:17.550322   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:17.586251   71929 cri.go:89] found id: ""
	I0717 01:58:17.586278   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.586286   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:17.586292   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:17.586348   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:17.620903   71929 cri.go:89] found id: ""
	I0717 01:58:17.620927   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.620935   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:17.620941   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:17.620992   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:17.659292   71929 cri.go:89] found id: ""
	I0717 01:58:17.659319   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.659328   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:17.659334   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:17.659384   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:17.695603   71929 cri.go:89] found id: ""
	I0717 01:58:17.695632   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.695642   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:17.695650   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:17.695711   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:17.731943   71929 cri.go:89] found id: ""
	I0717 01:58:17.731970   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.731978   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:17.731984   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:17.732041   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:17.767257   71929 cri.go:89] found id: ""
	I0717 01:58:17.767284   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.767293   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:17.767299   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:17.767357   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:17.802455   71929 cri.go:89] found id: ""
	I0717 01:58:17.802495   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.802508   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:17.802516   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:17.802602   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:17.839321   71929 cri.go:89] found id: ""
	I0717 01:58:17.839351   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.839362   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:17.839374   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:17.839391   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:17.912269   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:17.912295   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:17.912311   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:17.990005   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:17.990038   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:18.029933   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:18.029960   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:18.081941   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:18.081977   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:20.597325   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:20.611835   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:20.611901   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:20.647899   71929 cri.go:89] found id: ""
	I0717 01:58:20.647922   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.647931   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:20.647936   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:20.647984   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:20.683783   71929 cri.go:89] found id: ""
	I0717 01:58:20.683816   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.683827   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:20.683834   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:20.683892   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:20.721803   71929 cri.go:89] found id: ""
	I0717 01:58:20.721833   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.721844   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:20.721851   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:20.721910   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:20.756148   71929 cri.go:89] found id: ""
	I0717 01:58:20.756177   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.756189   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:20.756196   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:20.756259   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:20.795976   71929 cri.go:89] found id: ""
	I0717 01:58:20.796014   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.796028   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:20.796036   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:20.796095   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:20.833775   71929 cri.go:89] found id: ""
	I0717 01:58:20.833805   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.833816   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:20.833824   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:20.833891   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:20.869138   71929 cri.go:89] found id: ""
	I0717 01:58:20.869163   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.869173   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:20.869180   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:20.869237   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:20.904865   71929 cri.go:89] found id: ""
	I0717 01:58:20.904893   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.904901   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:20.904910   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:20.904920   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:20.947268   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:20.947294   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:20.998541   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:20.998582   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:21.013797   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:21.013828   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:21.085101   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:21.085127   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:21.085141   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:19.804949   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:22.304273   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:21.436176   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:23.933548   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:21.279177   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:23.279599   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:25.279899   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:23.667361   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:23.681768   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:23.681828   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:23.717721   71929 cri.go:89] found id: ""
	I0717 01:58:23.717748   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.717757   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:23.717763   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:23.717827   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:23.752699   71929 cri.go:89] found id: ""
	I0717 01:58:23.752728   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.752738   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:23.752745   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:23.752809   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:23.790914   71929 cri.go:89] found id: ""
	I0717 01:58:23.790944   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.790955   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:23.790962   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:23.791021   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:23.827253   71929 cri.go:89] found id: ""
	I0717 01:58:23.827276   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.827285   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:23.827338   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:23.827392   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:23.864466   71929 cri.go:89] found id: ""
	I0717 01:58:23.864510   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.864520   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:23.864527   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:23.864577   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:23.900734   71929 cri.go:89] found id: ""
	I0717 01:58:23.900775   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.900786   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:23.900794   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:23.900855   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:23.937212   71929 cri.go:89] found id: ""
	I0717 01:58:23.937236   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.937243   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:23.937249   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:23.937304   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:23.973730   71929 cri.go:89] found id: ""
	I0717 01:58:23.973755   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.973764   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:23.973774   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:23.973786   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:24.026122   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:24.026163   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:24.040755   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:24.040784   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:24.112224   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:24.112254   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:24.112277   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:24.195247   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:24.195281   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:26.738042   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:26.751545   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:26.751602   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:26.786778   71929 cri.go:89] found id: ""
	I0717 01:58:26.786813   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.786824   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:26.786831   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:26.786889   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:26.828776   71929 cri.go:89] found id: ""
	I0717 01:58:26.828806   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.828818   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:26.828825   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:26.828887   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:26.868439   71929 cri.go:89] found id: ""
	I0717 01:58:26.868468   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.868479   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:26.868486   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:26.868546   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:26.900249   71929 cri.go:89] found id: ""
	I0717 01:58:26.900282   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.900292   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:26.900297   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:26.900344   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:26.933763   71929 cri.go:89] found id: ""
	I0717 01:58:26.933798   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.933808   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:26.933816   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:26.933882   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:26.968681   71929 cri.go:89] found id: ""
	I0717 01:58:26.968712   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.968722   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:26.968729   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:26.968788   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:27.002081   71929 cri.go:89] found id: ""
	I0717 01:58:27.002113   71929 logs.go:276] 0 containers: []
	W0717 01:58:27.002128   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:27.002135   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:27.002196   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:27.035138   71929 cri.go:89] found id: ""
	I0717 01:58:27.035161   71929 logs.go:276] 0 containers: []
	W0717 01:58:27.035170   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:27.035177   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:27.035189   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:27.091207   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:27.091244   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:27.105765   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:27.105793   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:27.175533   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:27.175563   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:27.175580   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:27.260903   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:27.260951   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:24.802002   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:26.803330   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:26.432259   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:28.433226   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:27.280206   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:29.781139   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:29.802451   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:29.816503   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:29.816573   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:29.854887   71929 cri.go:89] found id: ""
	I0717 01:58:29.854921   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.854931   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:29.854936   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:29.854983   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:29.887529   71929 cri.go:89] found id: ""
	I0717 01:58:29.887559   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.887570   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:29.887577   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:29.887638   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:29.924995   71929 cri.go:89] found id: ""
	I0717 01:58:29.925020   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.925028   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:29.925034   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:29.925091   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:29.960064   71929 cri.go:89] found id: ""
	I0717 01:58:29.960092   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.960104   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:29.960111   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:29.960178   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:29.995408   71929 cri.go:89] found id: ""
	I0717 01:58:29.995431   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.995438   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:29.995443   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:29.995494   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:30.028219   71929 cri.go:89] found id: ""
	I0717 01:58:30.028247   71929 logs.go:276] 0 containers: []
	W0717 01:58:30.028254   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:30.028260   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:30.028309   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:30.062529   71929 cri.go:89] found id: ""
	I0717 01:58:30.062576   71929 logs.go:276] 0 containers: []
	W0717 01:58:30.062589   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:30.062597   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:30.062664   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:30.095854   71929 cri.go:89] found id: ""
	I0717 01:58:30.095882   71929 logs.go:276] 0 containers: []
	W0717 01:58:30.095893   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:30.095904   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:30.095919   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:30.148083   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:30.148114   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:30.161861   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:30.161892   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:30.236474   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:30.236503   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:30.236519   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:30.319691   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:30.319720   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:28.804656   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:31.302637   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:30.932659   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:32.934225   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:32.279141   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:34.279312   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:32.867821   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:32.881480   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:32.881541   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:32.918289   71929 cri.go:89] found id: ""
	I0717 01:58:32.918316   71929 logs.go:276] 0 containers: []
	W0717 01:58:32.918327   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:32.918335   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:32.918396   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:32.955383   71929 cri.go:89] found id: ""
	I0717 01:58:32.955417   71929 logs.go:276] 0 containers: []
	W0717 01:58:32.955426   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:32.955433   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:32.955498   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:32.990432   71929 cri.go:89] found id: ""
	I0717 01:58:32.990460   71929 logs.go:276] 0 containers: []
	W0717 01:58:32.990467   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:32.990472   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:32.990531   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:33.034653   71929 cri.go:89] found id: ""
	I0717 01:58:33.034685   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.034697   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:33.034703   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:33.034763   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:33.077875   71929 cri.go:89] found id: ""
	I0717 01:58:33.077911   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.077919   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:33.077926   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:33.077988   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:33.114800   71929 cri.go:89] found id: ""
	I0717 01:58:33.114840   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.114852   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:33.114864   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:33.114946   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:33.151095   71929 cri.go:89] found id: ""
	I0717 01:58:33.151229   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.151242   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:33.151249   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:33.151324   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:33.190100   71929 cri.go:89] found id: ""
	I0717 01:58:33.190128   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.190138   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:33.190149   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:33.190163   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:33.271195   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:33.271231   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:33.317539   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:33.317569   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:33.370188   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:33.370224   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:33.385016   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:33.385045   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:33.460017   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:35.960499   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:35.974504   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:35.974583   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:36.008652   71929 cri.go:89] found id: ""
	I0717 01:58:36.008696   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.008704   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:36.008710   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:36.008770   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:36.044068   71929 cri.go:89] found id: ""
	I0717 01:58:36.044097   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.044106   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:36.044113   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:36.044174   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:36.083572   71929 cri.go:89] found id: ""
	I0717 01:58:36.083602   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.083610   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:36.083616   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:36.083682   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:36.116716   71929 cri.go:89] found id: ""
	I0717 01:58:36.116744   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.116753   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:36.116761   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:36.116820   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:36.156042   71929 cri.go:89] found id: ""
	I0717 01:58:36.156069   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.156080   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:36.156087   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:36.156148   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:36.192005   71929 cri.go:89] found id: ""
	I0717 01:58:36.192033   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.192045   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:36.192055   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:36.192116   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:36.228720   71929 cri.go:89] found id: ""
	I0717 01:58:36.228751   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.228763   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:36.228769   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:36.228817   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:36.263835   71929 cri.go:89] found id: ""
	I0717 01:58:36.263862   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.263872   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:36.263882   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:36.263897   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:36.278545   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:36.278609   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:36.361182   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:36.361208   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:36.361225   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:36.447797   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:36.447832   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:36.492167   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:36.492196   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:33.304750   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:35.803867   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:35.432659   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:37.433360   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:39.433481   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:36.282525   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:38.779592   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:39.045613   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:39.058615   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:39.058688   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:39.094625   71929 cri.go:89] found id: ""
	I0717 01:58:39.094672   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.094684   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:39.094692   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:39.094755   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:39.132856   71929 cri.go:89] found id: ""
	I0717 01:58:39.132887   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.132898   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:39.132905   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:39.132966   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:39.171017   71929 cri.go:89] found id: ""
	I0717 01:58:39.171037   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.171044   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:39.171051   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:39.171112   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:39.210146   71929 cri.go:89] found id: ""
	I0717 01:58:39.210176   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.210186   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:39.210193   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:39.210269   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:39.244307   71929 cri.go:89] found id: ""
	I0717 01:58:39.244332   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.244342   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:39.244349   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:39.244411   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:39.279649   71929 cri.go:89] found id: ""
	I0717 01:58:39.279675   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.279682   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:39.279688   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:39.279755   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:39.317699   71929 cri.go:89] found id: ""
	I0717 01:58:39.317726   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.317735   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:39.317742   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:39.317789   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:39.352319   71929 cri.go:89] found id: ""
	I0717 01:58:39.352351   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.352365   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:39.352377   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:39.352392   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:39.404153   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:39.404188   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:39.419796   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:39.419828   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:39.495463   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:39.495485   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:39.495499   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:39.576742   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:39.576795   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:42.132481   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:42.145588   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:42.145658   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:42.181231   71929 cri.go:89] found id: ""
	I0717 01:58:42.181257   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.181265   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:42.181270   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:42.181321   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:42.216876   71929 cri.go:89] found id: ""
	I0717 01:58:42.216905   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.216917   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:42.216923   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:42.216984   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:42.256918   71929 cri.go:89] found id: ""
	I0717 01:58:42.256948   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.256959   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:42.256967   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:42.257022   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:42.291930   71929 cri.go:89] found id: ""
	I0717 01:58:42.291957   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.291964   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:42.291975   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:42.292035   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:42.329927   71929 cri.go:89] found id: ""
	I0717 01:58:42.329954   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.329964   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:42.329970   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:42.330014   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:42.364041   71929 cri.go:89] found id: ""
	I0717 01:58:42.364072   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.364085   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:42.364093   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:42.364150   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:38.302060   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:40.302711   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:42.303560   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:41.437100   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:43.932845   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:40.780109   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:43.280118   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:42.400751   71929 cri.go:89] found id: ""
	I0717 01:58:42.400775   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.400784   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:42.400790   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:42.400840   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:42.438200   71929 cri.go:89] found id: ""
	I0717 01:58:42.438228   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.438240   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:42.438251   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:42.438265   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:42.455268   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:42.455303   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:42.537344   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:42.537368   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:42.537381   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:42.618487   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:42.618522   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:42.661273   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:42.661299   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:45.212631   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:45.226247   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:45.226330   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:45.263067   71929 cri.go:89] found id: ""
	I0717 01:58:45.263098   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.263110   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:45.263117   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:45.263177   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:45.299025   71929 cri.go:89] found id: ""
	I0717 01:58:45.299056   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.299067   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:45.299074   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:45.299137   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:45.346828   71929 cri.go:89] found id: ""
	I0717 01:58:45.346858   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.346868   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:45.346876   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:45.346938   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:45.390879   71929 cri.go:89] found id: ""
	I0717 01:58:45.390905   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.390913   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:45.390918   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:45.390966   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:45.426794   71929 cri.go:89] found id: ""
	I0717 01:58:45.426823   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.426834   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:45.426841   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:45.426902   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:45.463834   71929 cri.go:89] found id: ""
	I0717 01:58:45.463863   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.463873   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:45.463880   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:45.463942   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:45.500660   71929 cri.go:89] found id: ""
	I0717 01:58:45.500689   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.500701   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:45.500708   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:45.500766   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:45.537332   71929 cri.go:89] found id: ""
	I0717 01:58:45.537356   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.537364   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:45.537373   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:45.537388   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:45.551194   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:45.551222   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:45.623863   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:45.623892   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:45.623906   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:45.699740   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:45.699782   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:45.739580   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:45.739613   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:44.803138   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:47.302471   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:46.434311   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:48.933004   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:45.779778   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:48.279595   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:48.300789   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:48.315608   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:48.315667   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:48.353050   71929 cri.go:89] found id: ""
	I0717 01:58:48.353076   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.353084   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:48.353089   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:48.353133   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:48.394789   71929 cri.go:89] found id: ""
	I0717 01:58:48.394817   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.394829   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:48.394837   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:48.394900   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:48.433430   71929 cri.go:89] found id: ""
	I0717 01:58:48.433457   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.433468   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:48.433475   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:48.433530   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:48.467215   71929 cri.go:89] found id: ""
	I0717 01:58:48.467243   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.467254   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:48.467262   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:48.467318   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:48.501087   71929 cri.go:89] found id: ""
	I0717 01:58:48.501120   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.501131   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:48.501138   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:48.501204   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:48.538648   71929 cri.go:89] found id: ""
	I0717 01:58:48.538683   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.538696   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:48.538706   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:48.538762   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:48.573006   71929 cri.go:89] found id: ""
	I0717 01:58:48.573030   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.573040   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:48.573047   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:48.573106   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:48.608779   71929 cri.go:89] found id: ""
	I0717 01:58:48.608803   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.608813   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:48.608824   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:48.608837   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:48.659250   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:48.659290   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:48.673418   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:48.673449   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:48.748175   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:48.748196   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:48.748207   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:48.824238   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:48.824274   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:51.367155   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:51.382458   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:51.382527   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:51.424005   71929 cri.go:89] found id: ""
	I0717 01:58:51.424040   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.424051   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:51.424059   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:51.424117   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:51.463318   71929 cri.go:89] found id: ""
	I0717 01:58:51.463348   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.463357   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:51.463363   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:51.463414   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:51.502261   71929 cri.go:89] found id: ""
	I0717 01:58:51.502290   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.502301   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:51.502309   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:51.502362   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:51.536277   71929 cri.go:89] found id: ""
	I0717 01:58:51.536308   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.536319   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:51.536327   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:51.536392   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:51.580598   71929 cri.go:89] found id: ""
	I0717 01:58:51.580629   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.580640   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:51.580648   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:51.580726   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:51.618666   71929 cri.go:89] found id: ""
	I0717 01:58:51.618690   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.618697   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:51.618702   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:51.618747   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:51.654742   71929 cri.go:89] found id: ""
	I0717 01:58:51.654777   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.654790   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:51.654799   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:51.654863   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:51.698006   71929 cri.go:89] found id: ""
	I0717 01:58:51.698034   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.698043   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:51.698051   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:51.698062   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:51.754812   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:51.754852   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:51.771887   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:51.771919   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:51.859627   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:51.859657   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:51.859675   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:51.946633   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:51.946673   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:49.302540   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:51.803884   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:51.433981   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:53.933306   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:50.781428   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:53.279780   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:54.494188   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:54.509111   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:54.509190   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:54.546424   71929 cri.go:89] found id: ""
	I0717 01:58:54.546454   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.546464   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:54.546471   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:54.546532   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:54.586811   71929 cri.go:89] found id: ""
	I0717 01:58:54.586841   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.586853   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:54.586860   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:54.586918   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:54.627350   71929 cri.go:89] found id: ""
	I0717 01:58:54.627375   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.627383   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:54.627388   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:54.627438   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:54.665901   71929 cri.go:89] found id: ""
	I0717 01:58:54.665941   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.665954   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:54.665974   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:54.666041   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:54.702921   71929 cri.go:89] found id: ""
	I0717 01:58:54.702948   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.702958   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:54.702965   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:54.703027   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:54.737378   71929 cri.go:89] found id: ""
	I0717 01:58:54.737406   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.737414   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:54.737421   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:54.737469   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:54.771924   71929 cri.go:89] found id: ""
	I0717 01:58:54.771954   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.771964   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:54.771971   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:54.772055   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:54.812939   71929 cri.go:89] found id: ""
	I0717 01:58:54.812972   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.812983   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:54.812995   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:54.813010   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:54.862979   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:54.863013   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:54.877467   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:54.877504   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:54.953924   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:54.953950   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:54.953963   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:55.032019   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:55.032052   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:54.302727   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:56.311656   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:55.933968   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:58.432611   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:55.778263   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:57.781311   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:00.278937   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:57.573130   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:57.591689   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:57.591762   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:57.626444   71929 cri.go:89] found id: ""
	I0717 01:58:57.626469   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.626479   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:57.626486   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:57.626570   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:57.661280   71929 cri.go:89] found id: ""
	I0717 01:58:57.661305   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.661314   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:57.661321   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:57.661376   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:57.695678   71929 cri.go:89] found id: ""
	I0717 01:58:57.695703   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.695711   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:57.695717   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:57.695762   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:57.729705   71929 cri.go:89] found id: ""
	I0717 01:58:57.729734   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.729742   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:57.729748   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:57.729804   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:57.763338   71929 cri.go:89] found id: ""
	I0717 01:58:57.763365   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.763373   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:57.763387   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:57.763433   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:57.800576   71929 cri.go:89] found id: ""
	I0717 01:58:57.800600   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.800608   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:57.800615   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:57.800701   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:57.842401   71929 cri.go:89] found id: ""
	I0717 01:58:57.842428   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.842439   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:57.842446   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:57.842503   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:57.880355   71929 cri.go:89] found id: ""
	I0717 01:58:57.880379   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.880387   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:57.880395   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:57.880412   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:57.938215   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:57.938252   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:57.952835   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:57.952876   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:58.027203   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:58.027231   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:58.027246   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:58.108442   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:58.108483   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:00.648580   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:00.662596   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:00.662667   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:00.696315   71929 cri.go:89] found id: ""
	I0717 01:59:00.696342   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.696351   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:00.696356   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:00.696411   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:00.732117   71929 cri.go:89] found id: ""
	I0717 01:59:00.732147   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.732158   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:00.732164   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:00.732212   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:00.768747   71929 cri.go:89] found id: ""
	I0717 01:59:00.768779   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.768790   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:00.768797   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:00.768856   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:00.807557   71929 cri.go:89] found id: ""
	I0717 01:59:00.807585   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.807592   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:00.807598   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:00.807651   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:00.844127   71929 cri.go:89] found id: ""
	I0717 01:59:00.844152   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.844161   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:00.844166   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:00.844222   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:00.879565   71929 cri.go:89] found id: ""
	I0717 01:59:00.879590   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.879597   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:00.879613   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:00.879684   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:00.917352   71929 cri.go:89] found id: ""
	I0717 01:59:00.917379   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.917387   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:00.917393   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:00.917440   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:00.952603   71929 cri.go:89] found id: ""
	I0717 01:59:00.952630   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.952637   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:00.952647   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:00.952688   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:01.007203   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:01.007242   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:01.021476   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:01.021512   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:01.102283   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:01.102306   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:01.102320   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:01.175736   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:01.175771   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:58.803034   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:00.803718   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:00.932781   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:03.433188   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:02.281269   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:04.779257   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:03.717612   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:03.732446   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:03.732511   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:03.767485   71929 cri.go:89] found id: ""
	I0717 01:59:03.767519   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.767533   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:03.767542   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:03.767607   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:03.803961   71929 cri.go:89] found id: ""
	I0717 01:59:03.803989   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.804000   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:03.804007   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:03.804074   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:03.842734   71929 cri.go:89] found id: ""
	I0717 01:59:03.842768   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.842780   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:03.842788   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:03.842915   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:03.883571   71929 cri.go:89] found id: ""
	I0717 01:59:03.883598   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.883608   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:03.883616   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:03.883682   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:03.922037   71929 cri.go:89] found id: ""
	I0717 01:59:03.922065   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.922076   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:03.922084   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:03.922143   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:03.961135   71929 cri.go:89] found id: ""
	I0717 01:59:03.961165   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.961176   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:03.961183   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:03.961244   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:03.995542   71929 cri.go:89] found id: ""
	I0717 01:59:03.995570   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.995580   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:03.995589   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:03.995647   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:04.030142   71929 cri.go:89] found id: ""
	I0717 01:59:04.030170   71929 logs.go:276] 0 containers: []
	W0717 01:59:04.030178   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:04.030187   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:04.030198   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:04.110329   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:04.110366   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:04.152194   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:04.152224   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:04.204012   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:04.204048   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:04.218261   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:04.218291   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:04.290786   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:06.791166   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:06.806662   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:06.806722   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:06.841447   71929 cri.go:89] found id: ""
	I0717 01:59:06.841476   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.841486   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:06.841494   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:06.841554   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:06.879920   71929 cri.go:89] found id: ""
	I0717 01:59:06.879956   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.879971   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:06.879976   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:06.880033   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:06.914436   71929 cri.go:89] found id: ""
	I0717 01:59:06.914465   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.914476   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:06.914484   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:06.914566   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:06.952098   71929 cri.go:89] found id: ""
	I0717 01:59:06.952127   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.952135   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:06.952141   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:06.952187   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:06.988054   71929 cri.go:89] found id: ""
	I0717 01:59:06.988085   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.988096   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:06.988103   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:06.988168   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:07.026633   71929 cri.go:89] found id: ""
	I0717 01:59:07.026658   71929 logs.go:276] 0 containers: []
	W0717 01:59:07.026670   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:07.026676   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:07.026732   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:07.064433   71929 cri.go:89] found id: ""
	I0717 01:59:07.064454   71929 logs.go:276] 0 containers: []
	W0717 01:59:07.064463   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:07.064468   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:07.064545   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:07.108352   71929 cri.go:89] found id: ""
	I0717 01:59:07.108385   71929 logs.go:276] 0 containers: []
	W0717 01:59:07.108396   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:07.108410   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:07.108428   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:07.163554   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:07.163593   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:07.177221   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:07.177249   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:07.249712   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:07.249735   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:07.249748   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:07.333011   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:07.333044   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:03.303048   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:05.304001   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:07.314317   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:05.932370   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:07.933031   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:09.933728   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:06.780342   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:09.279683   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:09.873187   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:09.887579   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:09.887658   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:09.923675   71929 cri.go:89] found id: ""
	I0717 01:59:09.923706   71929 logs.go:276] 0 containers: []
	W0717 01:59:09.923716   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:09.923724   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:09.923789   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:09.961248   71929 cri.go:89] found id: ""
	I0717 01:59:09.961278   71929 logs.go:276] 0 containers: []
	W0717 01:59:09.961288   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:09.961296   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:09.961354   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:10.000069   71929 cri.go:89] found id: ""
	I0717 01:59:10.000094   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.000101   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:10.000107   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:10.000157   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:10.036784   71929 cri.go:89] found id: ""
	I0717 01:59:10.036808   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.036815   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:10.036820   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:10.036869   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:10.072746   71929 cri.go:89] found id: ""
	I0717 01:59:10.072778   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.072789   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:10.072796   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:10.072856   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:10.109520   71929 cri.go:89] found id: ""
	I0717 01:59:10.109544   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.109552   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:10.109557   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:10.109608   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:10.142521   71929 cri.go:89] found id: ""
	I0717 01:59:10.142565   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.142576   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:10.142584   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:10.142647   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:10.175772   71929 cri.go:89] found id: ""
	I0717 01:59:10.175800   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.175812   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:10.175823   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:10.175837   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:10.213534   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:10.213561   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:10.266449   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:10.266485   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:10.282204   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:10.282234   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:10.353974   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:10.354004   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:10.354017   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:09.802047   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:11.802200   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:12.433722   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:14.932285   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:11.780394   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:13.781691   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:12.936509   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:12.951547   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:12.951616   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:12.987833   71929 cri.go:89] found id: ""
	I0717 01:59:12.987860   71929 logs.go:276] 0 containers: []
	W0717 01:59:12.987868   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:12.987873   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:12.987922   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:13.026500   71929 cri.go:89] found id: ""
	I0717 01:59:13.026529   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.026539   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:13.026546   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:13.026625   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:13.061631   71929 cri.go:89] found id: ""
	I0717 01:59:13.061664   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.061674   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:13.061682   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:13.061745   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:13.099449   71929 cri.go:89] found id: ""
	I0717 01:59:13.099476   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.099487   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:13.099494   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:13.099565   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:13.137271   71929 cri.go:89] found id: ""
	I0717 01:59:13.137299   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.137309   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:13.137317   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:13.137384   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:13.174432   71929 cri.go:89] found id: ""
	I0717 01:59:13.174462   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.174472   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:13.174478   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:13.174539   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:13.212820   71929 cri.go:89] found id: ""
	I0717 01:59:13.212845   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.212855   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:13.212865   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:13.212930   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:13.245961   71929 cri.go:89] found id: ""
	I0717 01:59:13.245993   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.246004   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:13.246014   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:13.246028   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:13.284801   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:13.284828   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:13.338476   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:13.338511   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:13.352751   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:13.352777   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:13.434001   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:13.434035   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:13.434050   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:16.022525   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:16.036863   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:16.036941   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:16.074370   71929 cri.go:89] found id: ""
	I0717 01:59:16.074398   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.074409   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:16.074416   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:16.074476   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:16.112239   71929 cri.go:89] found id: ""
	I0717 01:59:16.112267   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.112276   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:16.112281   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:16.112329   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:16.147398   71929 cri.go:89] found id: ""
	I0717 01:59:16.147422   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.147429   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:16.147435   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:16.147490   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:16.182112   71929 cri.go:89] found id: ""
	I0717 01:59:16.182141   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.182149   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:16.182155   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:16.182203   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:16.219134   71929 cri.go:89] found id: ""
	I0717 01:59:16.219163   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.219174   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:16.219182   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:16.219238   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:16.255892   71929 cri.go:89] found id: ""
	I0717 01:59:16.255924   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.255934   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:16.255943   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:16.256003   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:16.291202   71929 cri.go:89] found id: ""
	I0717 01:59:16.291228   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.291238   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:16.291245   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:16.291304   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:16.330748   71929 cri.go:89] found id: ""
	I0717 01:59:16.330779   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.330790   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:16.330801   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:16.330815   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:16.344628   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:16.344668   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:16.415735   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:16.415761   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:16.415775   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:16.499411   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:16.499449   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:16.541244   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:16.541270   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:13.802477   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:16.311229   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:16.933493   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:18.934299   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:16.279421   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:18.778998   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:19.095060   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:19.107920   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:19.107976   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:19.143446   71929 cri.go:89] found id: ""
	I0717 01:59:19.143476   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.143485   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:19.143490   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:19.143550   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:19.179216   71929 cri.go:89] found id: ""
	I0717 01:59:19.179247   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.179259   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:19.179266   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:19.179317   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:19.212468   71929 cri.go:89] found id: ""
	I0717 01:59:19.212498   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.212508   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:19.212516   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:19.212574   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:19.245019   71929 cri.go:89] found id: ""
	I0717 01:59:19.245047   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.245058   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:19.245065   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:19.245123   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:19.278430   71929 cri.go:89] found id: ""
	I0717 01:59:19.278457   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.278467   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:19.278474   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:19.278530   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:19.317685   71929 cri.go:89] found id: ""
	I0717 01:59:19.317714   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.317722   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:19.317729   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:19.317783   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:19.352938   71929 cri.go:89] found id: ""
	I0717 01:59:19.352974   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.352986   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:19.353000   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:19.353052   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:19.387238   71929 cri.go:89] found id: ""
	I0717 01:59:19.387272   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.387283   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:19.387295   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:19.387314   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:19.440138   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:19.440171   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:19.456372   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:19.456402   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:19.527881   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:19.527906   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:19.527921   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:19.611903   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:19.611937   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:22.160422   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:22.172802   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:22.172862   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:22.209283   71929 cri.go:89] found id: ""
	I0717 01:59:22.209315   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.209327   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:22.209335   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:22.209396   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:22.243927   71929 cri.go:89] found id: ""
	I0717 01:59:22.243955   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.243965   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:22.243972   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:22.244022   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:22.276730   71929 cri.go:89] found id: ""
	I0717 01:59:22.276754   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.276761   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:22.276767   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:22.276814   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:22.319378   71929 cri.go:89] found id: ""
	I0717 01:59:22.319407   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.319418   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:22.319425   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:22.319482   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:22.358272   71929 cri.go:89] found id: ""
	I0717 01:59:22.358298   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.358307   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:22.358312   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:22.358362   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:22.395358   71929 cri.go:89] found id: ""
	I0717 01:59:22.395393   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.395405   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:22.395414   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:22.395477   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:18.802881   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:21.303532   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:21.433636   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:23.932345   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:21.279596   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:23.279700   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:25.280649   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:22.435158   71929 cri.go:89] found id: ""
	I0717 01:59:22.435184   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.435194   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:22.435201   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:22.435248   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:22.471553   71929 cri.go:89] found id: ""
	I0717 01:59:22.471588   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.471595   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:22.471604   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:22.471616   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:22.523133   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:22.523169   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:22.539212   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:22.539246   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:22.615707   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:22.615729   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:22.615744   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:22.696758   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:22.696795   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:25.238496   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:25.252882   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:25.252946   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:25.290173   71929 cri.go:89] found id: ""
	I0717 01:59:25.290197   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.290205   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:25.290210   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:25.290263   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:25.325926   71929 cri.go:89] found id: ""
	I0717 01:59:25.325968   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.325979   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:25.325985   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:25.326032   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:25.358310   71929 cri.go:89] found id: ""
	I0717 01:59:25.358362   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.358371   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:25.358377   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:25.358426   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:25.393575   71929 cri.go:89] found id: ""
	I0717 01:59:25.393605   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.393615   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:25.393622   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:25.393684   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:25.429357   71929 cri.go:89] found id: ""
	I0717 01:59:25.429448   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.429466   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:25.429474   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:25.429546   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:25.466992   71929 cri.go:89] found id: ""
	I0717 01:59:25.467020   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.467028   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:25.467034   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:25.467080   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:25.503545   71929 cri.go:89] found id: ""
	I0717 01:59:25.503575   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.503587   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:25.503594   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:25.503643   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:25.542957   71929 cri.go:89] found id: ""
	I0717 01:59:25.542983   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.542993   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:25.543003   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:25.543015   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:25.598813   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:25.598852   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:25.618060   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:25.618098   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:25.690079   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:25.690105   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:25.690119   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:25.765956   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:25.765994   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:23.803366   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:25.804525   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:25.932447   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:27.933276   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:29.933461   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:27.286160   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:29.781318   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:28.311715   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:28.325493   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:28.325554   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:28.365783   71929 cri.go:89] found id: ""
	I0717 01:59:28.365810   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.365821   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:28.365829   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:28.365885   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:28.401847   71929 cri.go:89] found id: ""
	I0717 01:59:28.401875   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.401883   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:28.401895   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:28.401954   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:28.442236   71929 cri.go:89] found id: ""
	I0717 01:59:28.442261   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.442272   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:28.442278   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:28.442340   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:28.476832   71929 cri.go:89] found id: ""
	I0717 01:59:28.476857   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.476866   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:28.476873   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:28.476928   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:28.512040   71929 cri.go:89] found id: ""
	I0717 01:59:28.512068   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.512075   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:28.512081   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:28.512136   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:28.547516   71929 cri.go:89] found id: ""
	I0717 01:59:28.547547   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.547558   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:28.547566   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:28.547625   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:28.580380   71929 cri.go:89] found id: ""
	I0717 01:59:28.580406   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.580417   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:28.580427   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:28.580485   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:28.616029   71929 cri.go:89] found id: ""
	I0717 01:59:28.616059   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.616069   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:28.616080   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:28.616095   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:28.670188   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:28.670230   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:28.687315   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:28.687355   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:28.763591   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:28.763612   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:28.763627   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:28.848925   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:28.848959   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:31.388294   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:31.404748   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:31.404814   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:31.446437   71929 cri.go:89] found id: ""
	I0717 01:59:31.446468   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.446478   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:31.446484   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:31.446531   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:31.487797   71929 cri.go:89] found id: ""
	I0717 01:59:31.487828   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.487840   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:31.487847   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:31.487895   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:31.525327   71929 cri.go:89] found id: ""
	I0717 01:59:31.525354   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.525368   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:31.525375   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:31.525436   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:31.564106   71929 cri.go:89] found id: ""
	I0717 01:59:31.564154   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.564166   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:31.564173   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:31.564234   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:31.603345   71929 cri.go:89] found id: ""
	I0717 01:59:31.603374   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.603385   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:31.603393   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:31.603456   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:31.641727   71929 cri.go:89] found id: ""
	I0717 01:59:31.641753   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.641769   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:31.641776   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:31.641837   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:31.680825   71929 cri.go:89] found id: ""
	I0717 01:59:31.680856   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.680866   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:31.680873   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:31.680930   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:31.714325   71929 cri.go:89] found id: ""
	I0717 01:59:31.714348   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.714355   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:31.714363   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:31.714374   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:31.765899   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:31.765934   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:31.781417   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:31.781447   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:31.857586   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:31.857607   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:31.857622   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:31.937171   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:31.937197   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:28.304014   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:30.802684   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:32.803604   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:31.933945   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:34.435259   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:31.785641   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:34.279814   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:34.478176   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:34.492153   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:34.492223   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:34.526959   71929 cri.go:89] found id: ""
	I0717 01:59:34.526984   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.526998   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:34.527006   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:34.527064   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:34.564485   71929 cri.go:89] found id: ""
	I0717 01:59:34.564534   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.564546   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:34.564591   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:34.564706   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:34.604611   71929 cri.go:89] found id: ""
	I0717 01:59:34.604637   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.604649   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:34.604657   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:34.604718   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:34.640851   71929 cri.go:89] found id: ""
	I0717 01:59:34.640882   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.640892   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:34.640897   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:34.640956   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:34.675828   71929 cri.go:89] found id: ""
	I0717 01:59:34.675856   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.675863   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:34.675869   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:34.675918   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:34.710468   71929 cri.go:89] found id: ""
	I0717 01:59:34.710496   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.710506   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:34.710514   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:34.710595   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:34.749218   71929 cri.go:89] found id: ""
	I0717 01:59:34.749249   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.749260   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:34.749267   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:34.749328   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:34.784934   71929 cri.go:89] found id: ""
	I0717 01:59:34.784969   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.784979   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:34.784990   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:34.785006   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:34.799836   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:34.799870   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:34.870218   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:34.870239   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:34.870254   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:34.948782   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:34.948817   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:34.992295   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:34.992324   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:34.803649   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:37.304530   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:36.933199   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:39.432504   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:36.280185   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:38.280499   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:37.545759   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:37.559648   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:37.559724   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:37.596642   71929 cri.go:89] found id: ""
	I0717 01:59:37.596696   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.596707   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:37.596715   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:37.596770   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:37.637251   71929 cri.go:89] found id: ""
	I0717 01:59:37.637283   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.637312   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:37.637318   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:37.637372   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:37.672811   71929 cri.go:89] found id: ""
	I0717 01:59:37.672839   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.672847   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:37.672852   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:37.672909   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:37.706864   71929 cri.go:89] found id: ""
	I0717 01:59:37.706904   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.706916   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:37.706923   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:37.706975   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:37.747539   71929 cri.go:89] found id: ""
	I0717 01:59:37.747567   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.747576   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:37.747581   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:37.747630   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:37.785229   71929 cri.go:89] found id: ""
	I0717 01:59:37.785251   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.785260   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:37.785268   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:37.785333   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:37.840428   71929 cri.go:89] found id: ""
	I0717 01:59:37.840460   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.840471   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:37.840477   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:37.840533   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:37.876888   71929 cri.go:89] found id: ""
	I0717 01:59:37.876916   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.876924   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:37.876932   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:37.876942   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:37.926161   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:37.926197   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:37.940857   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:37.940885   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:38.019210   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:38.019232   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:38.019245   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:38.112428   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:38.112471   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:40.657215   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:40.670824   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:40.670900   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:40.704008   71929 cri.go:89] found id: ""
	I0717 01:59:40.704030   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.704040   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:40.704048   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:40.704102   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:40.739544   71929 cri.go:89] found id: ""
	I0717 01:59:40.739576   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.739587   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:40.739595   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:40.739664   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:40.773132   71929 cri.go:89] found id: ""
	I0717 01:59:40.773159   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.773169   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:40.773177   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:40.773239   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:40.810162   71929 cri.go:89] found id: ""
	I0717 01:59:40.810183   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.810193   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:40.810200   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:40.810256   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:40.844797   71929 cri.go:89] found id: ""
	I0717 01:59:40.844829   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.844840   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:40.844847   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:40.844918   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:40.884444   71929 cri.go:89] found id: ""
	I0717 01:59:40.884468   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.884476   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:40.884482   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:40.884544   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:40.919413   71929 cri.go:89] found id: ""
	I0717 01:59:40.919437   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.919445   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:40.919451   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:40.919505   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:40.961870   71929 cri.go:89] found id: ""
	I0717 01:59:40.961894   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.961902   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:40.961910   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:40.961921   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:41.010600   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:41.010638   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:41.025557   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:41.025589   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:41.100100   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:41.100123   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:41.100135   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:41.185809   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:41.185850   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:39.802297   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:41.802803   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:41.432998   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:43.433412   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:40.779796   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:42.781981   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:43.279014   71522 pod_ready.go:81] duration metric: took 4m0.006085275s for pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace to be "Ready" ...
	E0717 01:59:43.279043   71522 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 01:59:43.279053   71522 pod_ready.go:38] duration metric: took 4m2.008175999s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
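At this point the run with PID 71522 gives up after roughly four minutes of waiting for the metrics-server pod to report Ready and moves on to waiting for the apiserver process. A hedged sketch of an equivalent readiness poll done by hand is below; the k8s-app=metrics-server label selector, the jsonpath expression and the 2-second interval are illustrative assumptions, not minikube's actual pod_ready.go logic:

    # poll the Ready condition of the metrics-server pod with a 4-minute deadline
    deadline=$((SECONDS + 240))
    while [ $SECONDS -lt $deadline ]; do
      status=$(kubectl -n kube-system get pod -l k8s-app=metrics-server \
        -o jsonpath='{.items[0].status.conditions[?(@.type=="Ready")].status}')
      [ "$status" = "True" ] && echo "metrics-server is Ready" && exit 0
      sleep 2
    done
    echo "timed out waiting for metrics-server to be Ready" >&2
    exit 1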
	I0717 01:59:43.279073   71522 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:59:43.279105   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:43.279162   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:43.327674   71522 cri.go:89] found id: "3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:43.327725   71522 cri.go:89] found id: ""
	I0717 01:59:43.327734   71522 logs.go:276] 1 containers: [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82]
	I0717 01:59:43.327801   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.332247   71522 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:43.332303   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:43.371598   71522 cri.go:89] found id: "5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:43.371627   71522 cri.go:89] found id: ""
	I0717 01:59:43.371635   71522 logs.go:276] 1 containers: [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9]
	I0717 01:59:43.371683   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.377203   71522 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:43.377265   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:43.416351   71522 cri.go:89] found id: "92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:43.416374   71522 cri.go:89] found id: "4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:43.416380   71522 cri.go:89] found id: ""
	I0717 01:59:43.416389   71522 logs.go:276] 2 containers: [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013]
	I0717 01:59:43.416448   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.420909   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.425228   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:43.425278   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:43.472117   71522 cri.go:89] found id: "1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:43.472139   71522 cri.go:89] found id: ""
	I0717 01:59:43.472147   71522 logs.go:276] 1 containers: [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8]
	I0717 01:59:43.472194   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.476632   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:43.476698   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:43.517337   71522 cri.go:89] found id: "6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:43.517360   71522 cri.go:89] found id: ""
	I0717 01:59:43.517369   71522 logs.go:276] 1 containers: [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a]
	I0717 01:59:43.517430   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.522437   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:43.522519   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:43.564511   71522 cri.go:89] found id: "e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:43.564530   71522 cri.go:89] found id: ""
	I0717 01:59:43.564537   71522 logs.go:276] 1 containers: [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3]
	I0717 01:59:43.564595   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.570357   71522 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:43.570440   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:43.615389   71522 cri.go:89] found id: ""
	I0717 01:59:43.615418   71522 logs.go:276] 0 containers: []
	W0717 01:59:43.615427   71522 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:43.615433   71522 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:59:43.615543   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:59:43.652739   71522 cri.go:89] found id: "e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:43.652764   71522 cri.go:89] found id: "abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:43.652769   71522 cri.go:89] found id: ""
	I0717 01:59:43.652777   71522 logs.go:276] 2 containers: [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299]
	I0717 01:59:43.652835   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.657323   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.661682   71522 logs.go:123] Gathering logs for kube-controller-manager [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3] ...
	I0717 01:59:43.661702   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:43.714396   71522 logs.go:123] Gathering logs for container status ...
	I0717 01:59:43.714434   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:43.761072   71522 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:43.761110   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:43.825934   71522 logs.go:123] Gathering logs for coredns [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7] ...
	I0717 01:59:43.825963   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:43.871287   71522 logs.go:123] Gathering logs for coredns [4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013] ...
	I0717 01:59:43.871316   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:43.907488   71522 logs.go:123] Gathering logs for kube-scheduler [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8] ...
	I0717 01:59:43.907517   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:43.949876   71522 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:43.949903   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:59:44.093084   71522 logs.go:123] Gathering logs for kube-apiserver [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82] ...
	I0717 01:59:44.093116   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:44.153161   71522 logs.go:123] Gathering logs for kube-proxy [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a] ...
	I0717 01:59:44.153206   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:44.197219   71522 logs.go:123] Gathering logs for storage-provisioner [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77] ...
	I0717 01:59:44.197249   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:44.242441   71522 logs.go:123] Gathering logs for storage-provisioner [abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299] ...
	I0717 01:59:44.242478   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:44.288622   71522 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:44.288646   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:44.839680   71522 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:44.839712   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:44.854119   71522 logs.go:123] Gathering logs for etcd [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9] ...
	I0717 01:59:44.854145   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:43.725542   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:43.739304   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:43.739379   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:43.776754   71929 cri.go:89] found id: ""
	I0717 01:59:43.776783   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.776795   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:43.776802   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:43.776863   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:43.819729   71929 cri.go:89] found id: ""
	I0717 01:59:43.819756   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.819767   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:43.819774   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:43.819828   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:43.860283   71929 cri.go:89] found id: ""
	I0717 01:59:43.860311   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.860322   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:43.860329   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:43.860391   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:43.898684   71929 cri.go:89] found id: ""
	I0717 01:59:43.898712   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.898722   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:43.898729   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:43.898788   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:43.942996   71929 cri.go:89] found id: ""
	I0717 01:59:43.943019   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.943026   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:43.943031   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:43.943077   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:43.981799   71929 cri.go:89] found id: ""
	I0717 01:59:43.981828   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.981839   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:43.981846   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:43.981903   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:44.018222   71929 cri.go:89] found id: ""
	I0717 01:59:44.018252   71929 logs.go:276] 0 containers: []
	W0717 01:59:44.018262   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:44.018267   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:44.018326   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:44.056264   71929 cri.go:89] found id: ""
	I0717 01:59:44.056293   71929 logs.go:276] 0 containers: []
	W0717 01:59:44.056304   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:44.056315   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:44.056334   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:44.172061   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:44.172108   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:44.219597   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:44.219627   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:44.272299   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:44.272334   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:44.287811   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:44.287848   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:44.379183   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
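Every "describe nodes" attempt in the v1.20.0 run above fails with "The connection to the server localhost:8443 was refused", i.e. nothing is serving the apiserver port yet. A quick way to confirm that state from inside the node is sketched below; the /healthz path is the conventional apiserver health endpoint and is an assumption here, since the log never gets far enough to hit it:

    # is anything listening on the apiserver port?
    sudo ss -ltnp | grep 8443 || echo "nothing is listening on 8443"
    # probe the health endpoint directly; a refused connection matches the errors in the log
    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"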
	I0717 01:59:46.879529   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:46.893142   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:46.893207   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:46.929073   71929 cri.go:89] found id: ""
	I0717 01:59:46.929101   71929 logs.go:276] 0 containers: []
	W0717 01:59:46.929113   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:46.929121   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:46.929173   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:46.963697   71929 cri.go:89] found id: ""
	I0717 01:59:46.963725   71929 logs.go:276] 0 containers: []
	W0717 01:59:46.963733   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:46.963739   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:46.963798   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:47.000697   71929 cri.go:89] found id: ""
	I0717 01:59:47.000730   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.000747   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:47.000752   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:47.000804   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:47.037270   71929 cri.go:89] found id: ""
	I0717 01:59:47.037304   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.037316   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:47.037323   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:47.037382   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:47.072210   71929 cri.go:89] found id: ""
	I0717 01:59:47.072238   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.072249   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:47.072256   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:47.072321   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:47.108404   71929 cri.go:89] found id: ""
	I0717 01:59:47.108432   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.108443   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:47.108451   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:47.108535   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:47.146122   71929 cri.go:89] found id: ""
	I0717 01:59:47.146151   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.146162   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:47.146169   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:47.146225   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:47.187418   71929 cri.go:89] found id: ""
	I0717 01:59:47.187446   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.187455   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:47.187466   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:47.187481   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:47.201023   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:47.201053   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:47.269851   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:47.269878   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:47.269894   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:47.356417   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:47.356456   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:43.803326   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:46.302939   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:45.433688   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:47.933271   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:49.934222   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:47.403005   71522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:47.420984   71522 api_server.go:72] duration metric: took 4m13.369710312s to wait for apiserver process to appear ...
	I0717 01:59:47.421011   71522 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:59:47.421065   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:47.421128   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:47.465800   71522 cri.go:89] found id: "3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:47.465830   71522 cri.go:89] found id: ""
	I0717 01:59:47.465838   71522 logs.go:276] 1 containers: [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82]
	I0717 01:59:47.465890   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.470561   71522 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:47.470628   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:47.513302   71522 cri.go:89] found id: "5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:47.513321   71522 cri.go:89] found id: ""
	I0717 01:59:47.513328   71522 logs.go:276] 1 containers: [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9]
	I0717 01:59:47.513373   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.517668   71522 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:47.517720   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:47.563466   71522 cri.go:89] found id: "92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:47.563491   71522 cri.go:89] found id: "4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:47.563495   71522 cri.go:89] found id: ""
	I0717 01:59:47.563502   71522 logs.go:276] 2 containers: [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013]
	I0717 01:59:47.563563   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.568058   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.572381   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:47.572432   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:47.618919   71522 cri.go:89] found id: "1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:47.618944   71522 cri.go:89] found id: ""
	I0717 01:59:47.618953   71522 logs.go:276] 1 containers: [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8]
	I0717 01:59:47.619014   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.623475   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:47.623525   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:47.662294   71522 cri.go:89] found id: "6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:47.662321   71522 cri.go:89] found id: ""
	I0717 01:59:47.662329   71522 logs.go:276] 1 containers: [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a]
	I0717 01:59:47.662384   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.666740   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:47.666806   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:47.708962   71522 cri.go:89] found id: "e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:47.708990   71522 cri.go:89] found id: ""
	I0717 01:59:47.708999   71522 logs.go:276] 1 containers: [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3]
	I0717 01:59:47.709058   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.713551   71522 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:47.713628   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:47.750766   71522 cri.go:89] found id: ""
	I0717 01:59:47.750797   71522 logs.go:276] 0 containers: []
	W0717 01:59:47.750807   71522 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:47.750814   71522 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:59:47.750878   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:59:47.786664   71522 cri.go:89] found id: "e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:47.786687   71522 cri.go:89] found id: "abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:47.786692   71522 cri.go:89] found id: ""
	I0717 01:59:47.786699   71522 logs.go:276] 2 containers: [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299]
	I0717 01:59:47.786761   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.791460   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.795553   71522 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:47.795576   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:48.298229   71522 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:48.298271   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:48.313542   71522 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:48.313573   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:59:48.429625   71522 logs.go:123] Gathering logs for coredns [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7] ...
	I0717 01:59:48.429663   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:48.475651   71522 logs.go:123] Gathering logs for kube-proxy [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a] ...
	I0717 01:59:48.475677   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:48.514075   71522 logs.go:123] Gathering logs for coredns [4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013] ...
	I0717 01:59:48.514101   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:48.550152   71522 logs.go:123] Gathering logs for kube-scheduler [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8] ...
	I0717 01:59:48.550182   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:48.592743   71522 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:48.592771   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:48.652433   71522 logs.go:123] Gathering logs for storage-provisioner [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77] ...
	I0717 01:59:48.652464   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:48.699763   71522 logs.go:123] Gathering logs for storage-provisioner [abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299] ...
	I0717 01:59:48.699796   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:48.737467   71522 logs.go:123] Gathering logs for kube-apiserver [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82] ...
	I0717 01:59:48.737504   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:48.788389   71522 logs.go:123] Gathering logs for etcd [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9] ...
	I0717 01:59:48.788425   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:48.842323   71522 logs.go:123] Gathering logs for kube-controller-manager [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3] ...
	I0717 01:59:48.842357   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:48.900716   71522 logs.go:123] Gathering logs for container status ...
	I0717 01:59:48.900746   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
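Unlike the v1.20.0 run, the run with PID 71522 has a working control plane, so each component log is pulled individually with crictl logs against the container IDs discovered earlier. A minimal sketch of doing the same by hand, using kube-apiserver as the example component (the ID comes from crictl ps, as in the log):

    # find the kube-apiserver container and dump its recent log
    id=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
    sudo /usr/bin/crictl logs --tail 400 "$id"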
	I0717 01:59:47.397763   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:47.397791   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:49.954670   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:49.968840   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:49.968898   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:50.003598   71929 cri.go:89] found id: ""
	I0717 01:59:50.003635   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.003646   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:50.003654   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:50.003714   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:50.040494   71929 cri.go:89] found id: ""
	I0717 01:59:50.040546   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.040558   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:50.040564   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:50.040624   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:50.074921   71929 cri.go:89] found id: ""
	I0717 01:59:50.074950   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.074959   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:50.074965   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:50.075015   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:50.117002   71929 cri.go:89] found id: ""
	I0717 01:59:50.117030   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.117041   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:50.117049   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:50.117106   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:50.163026   71929 cri.go:89] found id: ""
	I0717 01:59:50.163052   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.163063   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:50.163071   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:50.163129   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:50.197709   71929 cri.go:89] found id: ""
	I0717 01:59:50.197738   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.197749   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:50.197757   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:50.197838   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:50.237776   71929 cri.go:89] found id: ""
	I0717 01:59:50.237808   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.237819   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:50.237827   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:50.237886   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:50.275147   71929 cri.go:89] found id: ""
	I0717 01:59:50.275179   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.275189   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:50.275201   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:50.275215   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:50.329025   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:50.329057   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:50.342745   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:50.342777   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:50.417792   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:50.417817   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:50.417829   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:50.495288   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:50.495322   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
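The repeated "listing CRI containers" / "Gathering logs for ..." lines above follow one pattern per component: run "sudo crictl ps -a --quiet --name=<component>" to collect container IDs, then tail each container's log with "crictl logs --tail 400", with "docker ps -a" as the fallback for the overall container status. The following is only an illustrative sketch of that loop, not minikube's implementation (which issues these commands over SSH via ssh_runner), and it assumes crictl is available locally:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherComponentLogs mirrors the per-component pattern seen in the log:
// list matching container IDs, then tail each container's logs.
func gatherComponentLogs(component string) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		fmt.Printf("listing %s containers: %v\n", component, err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Printf("No container was found matching %q\n", component)
		return
	}
	for _, id := range ids {
		logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		fmt.Printf("=== %s [%s] ===\n%s\n", component, id, logs)
	}
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
		gatherComponentLogs(c)
	}
}
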
	I0717 01:59:48.306102   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:50.804255   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:52.433248   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:54.931595   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:51.447495   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:59:51.452186   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 200:
	ok
	I0717 01:59:51.453112   71522 api_server.go:141] control plane version: v1.30.2
	I0717 01:59:51.453137   71522 api_server.go:131] duration metric: took 4.032118004s to wait for apiserver health ...
	I0717 01:59:51.453146   71522 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:59:51.453170   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:51.453215   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:51.491272   71522 cri.go:89] found id: "3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:51.491297   71522 cri.go:89] found id: ""
	I0717 01:59:51.491305   71522 logs.go:276] 1 containers: [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82]
	I0717 01:59:51.491365   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.495747   71522 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:51.495795   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:51.538807   71522 cri.go:89] found id: "5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:51.538830   71522 cri.go:89] found id: ""
	I0717 01:59:51.538838   71522 logs.go:276] 1 containers: [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9]
	I0717 01:59:51.538891   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.543454   71522 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:51.543512   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:51.586258   71522 cri.go:89] found id: "92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:51.586292   71522 cri.go:89] found id: "4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:51.586296   71522 cri.go:89] found id: ""
	I0717 01:59:51.586306   71522 logs.go:276] 2 containers: [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013]
	I0717 01:59:51.586360   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.590446   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.594867   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:51.594936   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:51.636079   71522 cri.go:89] found id: "1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:51.636101   71522 cri.go:89] found id: ""
	I0717 01:59:51.636108   71522 logs.go:276] 1 containers: [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8]
	I0717 01:59:51.636159   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.640225   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:51.640283   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:51.676395   71522 cri.go:89] found id: "6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:51.676422   71522 cri.go:89] found id: ""
	I0717 01:59:51.676432   71522 logs.go:276] 1 containers: [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a]
	I0717 01:59:51.676496   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.680974   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:51.681043   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:51.720449   71522 cri.go:89] found id: "e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:51.720476   71522 cri.go:89] found id: ""
	I0717 01:59:51.720483   71522 logs.go:276] 1 containers: [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3]
	I0717 01:59:51.720527   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.724704   71522 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:51.724779   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:51.762892   71522 cri.go:89] found id: ""
	I0717 01:59:51.762923   71522 logs.go:276] 0 containers: []
	W0717 01:59:51.762932   71522 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:51.762939   71522 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:59:51.762986   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:59:51.803675   71522 cri.go:89] found id: "e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:51.803702   71522 cri.go:89] found id: "abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:51.803707   71522 cri.go:89] found id: ""
	I0717 01:59:51.803714   71522 logs.go:276] 2 containers: [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299]
	I0717 01:59:51.803807   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.808188   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.812046   71522 logs.go:123] Gathering logs for container status ...
	I0717 01:59:51.812065   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:51.855800   71522 logs.go:123] Gathering logs for kube-controller-manager [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3] ...
	I0717 01:59:51.855832   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:51.917804   71522 logs.go:123] Gathering logs for storage-provisioner [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77] ...
	I0717 01:59:51.917833   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:51.958797   71522 logs.go:123] Gathering logs for storage-provisioner [abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299] ...
	I0717 01:59:51.958827   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:51.997003   71522 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:51.997034   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:59:52.118345   71522 logs.go:123] Gathering logs for kube-apiserver [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82] ...
	I0717 01:59:52.118381   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:52.174308   71522 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:52.174344   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:52.578823   71522 logs.go:123] Gathering logs for coredns [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7] ...
	I0717 01:59:52.578857   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:52.619962   71522 logs.go:123] Gathering logs for coredns [4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013] ...
	I0717 01:59:52.619994   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:52.667564   71522 logs.go:123] Gathering logs for kube-proxy [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a] ...
	I0717 01:59:52.667593   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:52.714716   71522 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:52.714747   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:52.774123   71522 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:52.774171   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:52.788399   71522 logs.go:123] Gathering logs for etcd [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9] ...
	I0717 01:59:52.788432   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:52.839796   71522 logs.go:123] Gathering logs for kube-scheduler [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8] ...
	I0717 01:59:52.839828   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:55.388404   71522 system_pods.go:59] 9 kube-system pods found
	I0717 01:59:55.388441   71522 system_pods.go:61] "coredns-7db6d8ff4d-9w26c" [530f4d52-5fdc-47c4-8919-44430bf71e05] Running
	I0717 01:59:55.388448   71522 system_pods.go:61] "coredns-7db6d8ff4d-js7sn" [fe3951c5-d98d-4221-b71c-fc4f548b31d8] Running
	I0717 01:59:55.388453   71522 system_pods.go:61] "etcd-default-k8s-diff-port-738184" [a08737dd-a140-4c63-bf0f-9d3527d49de0] Running
	I0717 01:59:55.388458   71522 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-738184" [b24a4ca2-48f9-4603-b6e4-e3fb1ca58e40] Running
	I0717 01:59:55.388465   71522 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-738184" [dd62e27b-a3f1-4da6-b968-6be40743a8fd] Running
	I0717 01:59:55.388469   71522 system_pods.go:61] "kube-proxy-c4n94" [97eee4e8-4f36-412f-9064-57515ab6e932] Running
	I0717 01:59:55.388473   71522 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-738184" [eb87eec9-7533-4704-bd5a-7075d9d8c2f5] Running
	I0717 01:59:55.388484   71522 system_pods.go:61] "metrics-server-569cc877fc-gcjkt" [1859140e-a901-43c2-8c04-b4f8eb63e774] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:59:55.388491   71522 system_pods.go:61] "storage-provisioner" [b36904ec-ef3f-4aee-9276-fe1285e10876] Running
	I0717 01:59:55.388499   71522 system_pods.go:74] duration metric: took 3.93534618s to wait for pod list to return data ...
	I0717 01:59:55.388509   71522 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:59:55.390798   71522 default_sa.go:45] found service account: "default"
	I0717 01:59:55.390829   71522 default_sa.go:55] duration metric: took 2.313714ms for default service account to be created ...
	I0717 01:59:55.390840   71522 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:59:55.399028   71522 system_pods.go:86] 9 kube-system pods found
	I0717 01:59:55.399049   71522 system_pods.go:89] "coredns-7db6d8ff4d-9w26c" [530f4d52-5fdc-47c4-8919-44430bf71e05] Running
	I0717 01:59:55.399054   71522 system_pods.go:89] "coredns-7db6d8ff4d-js7sn" [fe3951c5-d98d-4221-b71c-fc4f548b31d8] Running
	I0717 01:59:55.399059   71522 system_pods.go:89] "etcd-default-k8s-diff-port-738184" [a08737dd-a140-4c63-bf0f-9d3527d49de0] Running
	I0717 01:59:55.399063   71522 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-738184" [b24a4ca2-48f9-4603-b6e4-e3fb1ca58e40] Running
	I0717 01:59:55.399068   71522 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-738184" [dd62e27b-a3f1-4da6-b968-6be40743a8fd] Running
	I0717 01:59:55.399072   71522 system_pods.go:89] "kube-proxy-c4n94" [97eee4e8-4f36-412f-9064-57515ab6e932] Running
	I0717 01:59:55.399076   71522 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-738184" [eb87eec9-7533-4704-bd5a-7075d9d8c2f5] Running
	I0717 01:59:55.399083   71522 system_pods.go:89] "metrics-server-569cc877fc-gcjkt" [1859140e-a901-43c2-8c04-b4f8eb63e774] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:59:55.399090   71522 system_pods.go:89] "storage-provisioner" [b36904ec-ef3f-4aee-9276-fe1285e10876] Running
	I0717 01:59:55.399099   71522 system_pods.go:126] duration metric: took 8.253468ms to wait for k8s-apps to be running ...
	I0717 01:59:55.399108   71522 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:59:55.399152   71522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:59:55.417081   71522 system_svc.go:56] duration metric: took 17.965716ms WaitForService to wait for kubelet
	I0717 01:59:55.417109   71522 kubeadm.go:582] duration metric: took 4m21.36584166s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:59:55.417130   71522 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:59:55.420078   71522 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:59:55.420099   71522 node_conditions.go:123] node cpu capacity is 2
	I0717 01:59:55.420109   71522 node_conditions.go:105] duration metric: took 2.974324ms to run NodePressure ...
	I0717 01:59:55.420119   71522 start.go:241] waiting for startup goroutines ...
	I0717 01:59:55.420126   71522 start.go:246] waiting for cluster config update ...
	I0717 01:59:55.420136   71522 start.go:255] writing updated cluster config ...
	I0717 01:59:55.420406   71522 ssh_runner.go:195] Run: rm -f paused
	I0717 01:59:55.470793   71522 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 01:59:55.472960   71522 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-738184" cluster and "default" namespace by default
	I0717 01:59:53.036151   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:53.049820   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:53.049879   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:53.087144   71929 cri.go:89] found id: ""
	I0717 01:59:53.087175   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.087189   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:53.087195   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:53.087253   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:53.123135   71929 cri.go:89] found id: ""
	I0717 01:59:53.123164   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.123175   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:53.123191   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:53.123254   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:53.157887   71929 cri.go:89] found id: ""
	I0717 01:59:53.157912   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.157922   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:53.157927   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:53.158004   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:53.201002   71929 cri.go:89] found id: ""
	I0717 01:59:53.201033   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.201045   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:53.201054   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:53.201115   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:53.236159   71929 cri.go:89] found id: ""
	I0717 01:59:53.236188   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.236198   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:53.236204   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:53.236258   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:53.277585   71929 cri.go:89] found id: ""
	I0717 01:59:53.277616   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.277627   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:53.277634   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:53.277694   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:53.322722   71929 cri.go:89] found id: ""
	I0717 01:59:53.322747   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.322758   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:53.322765   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:53.322824   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:53.364112   71929 cri.go:89] found id: ""
	I0717 01:59:53.364138   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.364149   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:53.364159   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:53.364172   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:53.418701   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:53.418739   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:53.435004   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:53.435030   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:53.511254   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:53.511274   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:53.511287   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:53.587967   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:53.588003   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:56.130773   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:56.144742   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:56.144811   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:56.180267   71929 cri.go:89] found id: ""
	I0717 01:59:56.180295   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.180306   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:56.180313   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:56.180373   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:56.217223   71929 cri.go:89] found id: ""
	I0717 01:59:56.217252   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.217263   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:56.217269   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:56.217334   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:56.251714   71929 cri.go:89] found id: ""
	I0717 01:59:56.251738   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.251745   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:56.251752   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:56.251805   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:56.292557   71929 cri.go:89] found id: ""
	I0717 01:59:56.292589   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.292597   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:56.292603   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:56.292653   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:56.332463   71929 cri.go:89] found id: ""
	I0717 01:59:56.332491   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.332501   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:56.332508   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:56.332562   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:56.372155   71929 cri.go:89] found id: ""
	I0717 01:59:56.372180   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.372189   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:56.372197   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:56.372255   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:56.415768   71929 cri.go:89] found id: ""
	I0717 01:59:56.415794   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.415806   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:56.415813   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:56.415871   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:56.456920   71929 cri.go:89] found id: ""
	I0717 01:59:56.456951   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.456959   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:56.456968   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:56.456978   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:56.508932   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:56.508965   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:56.522496   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:56.522531   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:56.596839   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:56.596857   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:56.596870   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:56.679237   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:56.679271   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:53.303565   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:55.803725   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:57.806129   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:56.933245   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:59.432536   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:59.220084   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:59.233108   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:59.233182   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:59.266796   71929 cri.go:89] found id: ""
	I0717 01:59:59.266827   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.266838   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:59.266845   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:59.266909   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:59.297992   71929 cri.go:89] found id: ""
	I0717 01:59:59.298017   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.298026   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:59.298032   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:59.298087   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:59.331953   71929 cri.go:89] found id: ""
	I0717 01:59:59.331982   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.331993   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:59.331999   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:59.332069   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:59.368912   71929 cri.go:89] found id: ""
	I0717 01:59:59.368939   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.368948   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:59.368954   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:59.369002   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:59.402886   71929 cri.go:89] found id: ""
	I0717 01:59:59.402911   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.402920   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:59.402926   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:59.402982   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:59.441227   71929 cri.go:89] found id: ""
	I0717 01:59:59.441249   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.441257   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:59.441263   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:59.441322   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:59.479154   71929 cri.go:89] found id: ""
	I0717 01:59:59.479191   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.479213   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:59.479222   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:59.479286   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:59.516259   71929 cri.go:89] found id: ""
	I0717 01:59:59.516299   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.516309   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:59.516319   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:59.516332   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:59.596352   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:59.596385   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:59.639712   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:59.639744   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:59.691399   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:59.691444   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:59.706618   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:59.706648   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:59.778875   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:00:02.279246   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:02.293212   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:02.293284   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:02.330759   71929 cri.go:89] found id: ""
	I0717 02:00:02.330786   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.330795   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 02:00:02.330800   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:02.330848   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:02.366257   71929 cri.go:89] found id: ""
	I0717 02:00:02.366287   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.366298   71929 logs.go:278] No container was found matching "etcd"
	I0717 02:00:02.366305   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:02.366368   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:00.303868   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:02.311063   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:01.432671   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:03.433059   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:02.404321   71929 cri.go:89] found id: ""
	I0717 02:00:02.404348   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.404358   71929 logs.go:278] No container was found matching "coredns"
	I0717 02:00:02.404364   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:02.404432   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:02.444297   71929 cri.go:89] found id: ""
	I0717 02:00:02.444326   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.444342   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 02:00:02.444349   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:02.444406   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:02.478433   71929 cri.go:89] found id: ""
	I0717 02:00:02.478466   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.478477   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 02:00:02.478483   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:02.478530   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:02.515519   71929 cri.go:89] found id: ""
	I0717 02:00:02.515551   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.515560   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 02:00:02.515566   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:02.515618   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:02.551006   71929 cri.go:89] found id: ""
	I0717 02:00:02.551030   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.551038   71929 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:02.551044   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 02:00:02.551110   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 02:00:02.588312   71929 cri.go:89] found id: ""
	I0717 02:00:02.588345   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.588356   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 02:00:02.588367   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:02.588381   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:02.641900   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:02.641932   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:02.656851   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:02.656896   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 02:00:02.728286   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:00:02.728315   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:02.728327   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:02.806807   71929 logs.go:123] Gathering logs for container status ...
	I0717 02:00:02.806847   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:05.355196   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:05.369148   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:05.369231   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:05.405012   71929 cri.go:89] found id: ""
	I0717 02:00:05.405045   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.405057   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 02:00:05.405068   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:05.405132   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:05.450524   71929 cri.go:89] found id: ""
	I0717 02:00:05.450564   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.450575   71929 logs.go:278] No container was found matching "etcd"
	I0717 02:00:05.450582   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:05.450637   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:05.487503   71929 cri.go:89] found id: ""
	I0717 02:00:05.487533   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.487544   71929 logs.go:278] No container was found matching "coredns"
	I0717 02:00:05.487553   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:05.487634   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:05.522607   71929 cri.go:89] found id: ""
	I0717 02:00:05.522635   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.522650   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 02:00:05.522656   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:05.522703   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:05.558091   71929 cri.go:89] found id: ""
	I0717 02:00:05.558120   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.558131   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 02:00:05.558138   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:05.558192   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:05.594540   71929 cri.go:89] found id: ""
	I0717 02:00:05.594587   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.594598   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 02:00:05.594605   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:05.594668   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:05.631783   71929 cri.go:89] found id: ""
	I0717 02:00:05.631807   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.631818   71929 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:05.631825   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 02:00:05.631886   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 02:00:05.667494   71929 cri.go:89] found id: ""
	I0717 02:00:05.667523   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.667532   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 02:00:05.667543   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:05.667559   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:05.681348   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:05.681373   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 02:00:05.747143   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:00:05.747165   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:05.747176   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:05.829639   71929 logs.go:123] Gathering logs for container status ...
	I0717 02:00:05.829674   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:05.881984   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:05.882013   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:04.803913   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:07.302068   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:05.434869   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:07.435174   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:09.931879   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:08.435873   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:08.449840   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:08.449901   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:08.489613   71929 cri.go:89] found id: ""
	I0717 02:00:08.489663   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.489675   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 02:00:08.489684   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:08.489751   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:08.526604   71929 cri.go:89] found id: ""
	I0717 02:00:08.526635   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.526645   71929 logs.go:278] No container was found matching "etcd"
	I0717 02:00:08.526660   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:08.526717   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:08.563202   71929 cri.go:89] found id: ""
	I0717 02:00:08.563227   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.563234   71929 logs.go:278] No container was found matching "coredns"
	I0717 02:00:08.563240   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:08.563299   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:08.598336   71929 cri.go:89] found id: ""
	I0717 02:00:08.598365   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.598376   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 02:00:08.598383   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:08.598441   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:08.632626   71929 cri.go:89] found id: ""
	I0717 02:00:08.632660   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.632671   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 02:00:08.632678   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:08.632739   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:08.667951   71929 cri.go:89] found id: ""
	I0717 02:00:08.667977   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.667993   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 02:00:08.668001   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:08.668059   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:08.702106   71929 cri.go:89] found id: ""
	I0717 02:00:08.702135   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.702146   71929 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:08.702153   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 02:00:08.702212   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 02:00:08.733469   71929 cri.go:89] found id: ""
	I0717 02:00:08.733491   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.733499   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 02:00:08.733508   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:08.733518   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:08.787930   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:08.787966   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:08.802761   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:08.802795   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 02:00:08.878115   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:00:08.878138   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:08.878149   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:08.962509   71929 logs.go:123] Gathering logs for container status ...
	I0717 02:00:08.962543   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:11.503151   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:11.518019   71929 kubeadm.go:597] duration metric: took 4m3.576613508s to restartPrimaryControlPlane
	W0717 02:00:11.518087   71929 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 02:00:11.518113   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 02:00:11.970514   71929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:00:11.986794   71929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 02:00:11.997382   71929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 02:00:12.006789   71929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 02:00:12.006816   71929 kubeadm.go:157] found existing configuration files:
	
	I0717 02:00:12.006867   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 02:00:12.015864   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 02:00:12.015921   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 02:00:12.025239   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 02:00:12.034315   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 02:00:12.034373   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 02:00:12.043533   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 02:00:12.052344   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 02:00:12.052393   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 02:00:12.061290   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 02:00:12.070311   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 02:00:12.070375   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
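The sequence above is the stale-kubeconfig cleanup: each file under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the check fails, so the subsequent kubeadm init can regenerate it. Below is a minimal standalone sketch of that pattern in Go; it is illustrative only, not the actual kubeadm.go code, and it treats missing files the same way the log's rm -f does.

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeStaleKubeconfigs deletes any of the given kubeconfig files that do not
// reference the expected control-plane endpoint, mirroring the grep/rm sequence
// in the log above. Missing or unreadable files are treated as stale.
func removeStaleKubeconfigs(endpoint string, paths []string) error {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // file already points at the expected endpoint
		}
		if rmErr := os.Remove(p); rmErr != nil && !os.IsNotExist(rmErr) {
			return fmt.Errorf("removing %s: %w", p, rmErr)
		}
	}
	return nil
}

func main() {
	paths := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	if err := removeStaleKubeconfigs("https://control-plane.minikube.internal:8443", paths); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}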
	I0717 02:00:12.080404   71929 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 02:00:12.318084   71929 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 02:00:09.303502   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:11.803893   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:11.933539   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:14.433949   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:13.804007   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:16.303079   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:16.932416   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:18.932721   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:18.303306   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:20.306811   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:22.803374   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:21.433157   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:23.433283   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:24.805822   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:27.301985   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:25.931740   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:27.934346   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:29.302199   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:31.302607   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:30.433033   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:32.434743   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:34.933166   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:33.802140   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:35.803338   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:36.933672   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:39.432879   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:38.302050   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:40.803322   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:41.932491   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:44.436201   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:43.302028   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:45.801979   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:47.303644   71146 pod_ready.go:81] duration metric: took 4m0.007411484s for pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace to be "Ready" ...
	E0717 02:00:47.303668   71146 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 02:00:47.303678   71146 pod_ready.go:38] duration metric: took 4m7.053721739s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
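The repeated pod_ready.go:102 entries above are a poll loop that rechecks the pod's Ready condition until a 4m0s deadline; once the deadline passes the wait is abandoned (the "context deadline exceeded" error) and startup continues. A minimal sketch of that polling shape using only the Go standard library follows; checkReady is a hypothetical stand-in, and the short demo timeout is chosen so the example terminates quickly rather than matching minikube's 4m0s.

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitPodReady polls checkReady every interval until it reports true or ctx
// expires, mirroring the "has status Ready: False" loop in the log above.
func waitPodReady(ctx context.Context, interval time.Duration, checkReady func() (bool, error)) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		ready, err := checkReady()
		if err != nil {
			return err
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("waitPodCondition: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	// Demo timeout; the log above waits 4m0s before giving up.
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	err := waitPodReady(ctx, 500*time.Millisecond, func() (bool, error) {
		return false, nil // stand-in: a real check would read the pod's Ready condition
	})
	if errors.Is(err, context.DeadlineExceeded) {
		fmt.Println("gave up waiting:", err)
	}
}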
	I0717 02:00:47.303694   71146 api_server.go:52] waiting for apiserver process to appear ...
	I0717 02:00:47.303725   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:47.303791   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:47.365247   71146 cri.go:89] found id: "ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:47.365272   71146 cri.go:89] found id: ""
	I0717 02:00:47.365279   71146 logs.go:276] 1 containers: [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509]
	I0717 02:00:47.365339   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.370201   71146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:47.370268   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:47.416627   71146 cri.go:89] found id: "b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:47.416652   71146 cri.go:89] found id: ""
	I0717 02:00:47.416663   71146 logs.go:276] 1 containers: [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787]
	I0717 02:00:47.416731   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.421295   71146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:47.421454   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:47.463532   71146 cri.go:89] found id: "110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:47.463556   71146 cri.go:89] found id: ""
	I0717 02:00:47.463564   71146 logs.go:276] 1 containers: [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783]
	I0717 02:00:47.463626   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.468291   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:47.468414   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:47.504328   71146 cri.go:89] found id: "211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:47.504354   71146 cri.go:89] found id: ""
	I0717 02:00:47.504362   71146 logs.go:276] 1 containers: [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745]
	I0717 02:00:47.504445   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.508821   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:47.508880   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:47.550970   71146 cri.go:89] found id: "0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:47.550996   71146 cri.go:89] found id: ""
	I0717 02:00:47.551006   71146 logs.go:276] 1 containers: [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de]
	I0717 02:00:47.551069   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.555974   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:47.556045   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:47.609884   71146 cri.go:89] found id: "5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:47.609903   71146 cri.go:89] found id: ""
	I0717 02:00:47.609910   71146 logs.go:276] 1 containers: [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060]
	I0717 02:00:47.609968   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.615544   71146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:47.615603   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:47.653071   71146 cri.go:89] found id: ""
	I0717 02:00:47.653099   71146 logs.go:276] 0 containers: []
	W0717 02:00:47.653110   71146 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:47.653117   71146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 02:00:47.653163   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 02:00:47.690462   71146 cri.go:89] found id: "7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:47.690485   71146 cri.go:89] found id: "51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:47.690490   71146 cri.go:89] found id: ""
	I0717 02:00:47.690498   71146 logs.go:276] 2 containers: [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20]
	I0717 02:00:47.690545   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.695196   71146 ssh_runner.go:195] Run: which crictl
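Each "listing CRI containers" step above shells out to crictl to resolve container IDs by name before their logs are collected. A rough standalone equivalent of that lookup is sketched below; it requires crictl on the host and is an approximation, not minikube's cri.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or exited) whose name
// matches the given filter, the same query the log issues via
// "sudo crictl ps -a --quiet --name=<name>".
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps: %w", err)
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "storage-provisioner"} {
		ids, err := containerIDs(name)
		if err != nil {
			fmt.Println(name, "error:", err)
			continue
		}
		fmt.Printf("%s: %d container(s) %v\n", name, len(ids), ids)
	}
}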
	I0717 02:00:47.699099   71146 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:47.699117   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 02:00:47.816750   71146 logs.go:123] Gathering logs for kube-apiserver [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509] ...
	I0717 02:00:47.816782   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:46.932764   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:49.432402   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:47.869306   71146 logs.go:123] Gathering logs for coredns [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783] ...
	I0717 02:00:47.869341   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:47.906717   71146 logs.go:123] Gathering logs for kube-proxy [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de] ...
	I0717 02:00:47.906755   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:47.944125   71146 logs.go:123] Gathering logs for storage-provisioner [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465] ...
	I0717 02:00:47.944152   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:47.978632   71146 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:47.978664   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:48.482628   71146 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:48.482660   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:48.538252   71146 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:48.538300   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:48.553011   71146 logs.go:123] Gathering logs for kube-controller-manager [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060] ...
	I0717 02:00:48.553038   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:48.607632   71146 logs.go:123] Gathering logs for storage-provisioner [51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20] ...
	I0717 02:00:48.607666   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:48.646122   71146 logs.go:123] Gathering logs for container status ...
	I0717 02:00:48.646151   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:48.689948   71146 logs.go:123] Gathering logs for etcd [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787] ...
	I0717 02:00:48.689980   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:48.738285   71146 logs.go:123] Gathering logs for kube-scheduler [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745] ...
	I0717 02:00:48.738334   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:51.290996   71146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:51.308850   71146 api_server.go:72] duration metric: took 4m18.27461618s to wait for apiserver process to appear ...
	I0717 02:00:51.308873   71146 api_server.go:88] waiting for apiserver healthz status ...
	I0717 02:00:51.308907   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:51.308967   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:51.350827   71146 cri.go:89] found id: "ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:51.350857   71146 cri.go:89] found id: ""
	I0717 02:00:51.350866   71146 logs.go:276] 1 containers: [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509]
	I0717 02:00:51.350930   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.355308   71146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:51.355369   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:51.393804   71146 cri.go:89] found id: "b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:51.393831   71146 cri.go:89] found id: ""
	I0717 02:00:51.393840   71146 logs.go:276] 1 containers: [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787]
	I0717 02:00:51.393897   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.398144   71146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:51.398201   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:51.437974   71146 cri.go:89] found id: "110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:51.437991   71146 cri.go:89] found id: ""
	I0717 02:00:51.437998   71146 logs.go:276] 1 containers: [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783]
	I0717 02:00:51.438044   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.442318   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:51.442382   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:51.478462   71146 cri.go:89] found id: "211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:51.478481   71146 cri.go:89] found id: ""
	I0717 02:00:51.478489   71146 logs.go:276] 1 containers: [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745]
	I0717 02:00:51.478534   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.482624   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:51.482672   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:51.526089   71146 cri.go:89] found id: "0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:51.526114   71146 cri.go:89] found id: ""
	I0717 02:00:51.526123   71146 logs.go:276] 1 containers: [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de]
	I0717 02:00:51.526170   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.530855   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:51.530923   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:51.568875   71146 cri.go:89] found id: "5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:51.568899   71146 cri.go:89] found id: ""
	I0717 02:00:51.568908   71146 logs.go:276] 1 containers: [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060]
	I0717 02:00:51.568972   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.573300   71146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:51.573369   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:51.615775   71146 cri.go:89] found id: ""
	I0717 02:00:51.615800   71146 logs.go:276] 0 containers: []
	W0717 02:00:51.615809   71146 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:51.615815   71146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 02:00:51.615876   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 02:00:51.658100   71146 cri.go:89] found id: "7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:51.658124   71146 cri.go:89] found id: "51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:51.658130   71146 cri.go:89] found id: ""
	I0717 02:00:51.658138   71146 logs.go:276] 2 containers: [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20]
	I0717 02:00:51.658183   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.663030   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.667348   71146 logs.go:123] Gathering logs for kube-apiserver [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509] ...
	I0717 02:00:51.667372   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:51.715502   71146 logs.go:123] Gathering logs for etcd [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787] ...
	I0717 02:00:51.715534   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:51.763431   71146 logs.go:123] Gathering logs for kube-proxy [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de] ...
	I0717 02:00:51.763457   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:51.805523   71146 logs.go:123] Gathering logs for kube-controller-manager [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060] ...
	I0717 02:00:51.805553   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:51.859660   71146 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:51.859692   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 02:00:51.963831   71146 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:51.963858   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:51.978152   71146 logs.go:123] Gathering logs for coredns [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783] ...
	I0717 02:00:51.978179   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:52.023897   71146 logs.go:123] Gathering logs for kube-scheduler [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745] ...
	I0717 02:00:52.023926   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:52.062193   71146 logs.go:123] Gathering logs for storage-provisioner [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465] ...
	I0717 02:00:52.062218   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:52.098487   71146 logs.go:123] Gathering logs for storage-provisioner [51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20] ...
	I0717 02:00:52.098518   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:52.135733   71146 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:52.135758   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:52.562245   71146 logs.go:123] Gathering logs for container status ...
	I0717 02:00:52.562279   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:52.624258   71146 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:52.624288   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:51.434060   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:53.933730   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:55.176270   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 02:00:55.180760   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 200:
	ok
	I0717 02:00:55.181928   71146 api_server.go:141] control plane version: v1.30.2
	I0717 02:00:55.181947   71146 api_server.go:131] duration metric: took 3.873068874s to wait for apiserver health ...
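The healthz check above is an HTTPS GET against the apiserver that succeeds once the endpoint answers 200 with "ok". A minimal sketch of such a probe follows; the URL is the one from the log, while skipping TLS verification is an assumption made to keep the example self-contained, not necessarily how minikube authenticates.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// healthz issues a single GET against the apiserver healthz endpoint and
// reports whether it answered 200.
func healthz(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip verification for a test cluster's self-signed certificate.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := healthz("https://192.168.72.225:8443/healthz")
	fmt.Println("healthy:", ok, "err:", err)
}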
	I0717 02:00:55.181955   71146 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 02:00:55.181975   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:55.182017   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:55.218028   71146 cri.go:89] found id: "ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:55.218059   71146 cri.go:89] found id: ""
	I0717 02:00:55.218068   71146 logs.go:276] 1 containers: [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509]
	I0717 02:00:55.218125   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.222841   71146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:55.222911   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:55.265613   71146 cri.go:89] found id: "b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:55.265638   71146 cri.go:89] found id: ""
	I0717 02:00:55.265647   71146 logs.go:276] 1 containers: [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787]
	I0717 02:00:55.265699   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.269866   71146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:55.269923   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:55.306363   71146 cri.go:89] found id: "110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:55.306390   71146 cri.go:89] found id: ""
	I0717 02:00:55.306400   71146 logs.go:276] 1 containers: [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783]
	I0717 02:00:55.306457   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.310843   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:55.310901   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:55.354417   71146 cri.go:89] found id: "211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:55.354439   71146 cri.go:89] found id: ""
	I0717 02:00:55.354449   71146 logs.go:276] 1 containers: [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745]
	I0717 02:00:55.354503   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.358988   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:55.359038   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:55.396457   71146 cri.go:89] found id: "0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:55.396480   71146 cri.go:89] found id: ""
	I0717 02:00:55.396488   71146 logs.go:276] 1 containers: [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de]
	I0717 02:00:55.396532   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.401185   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:55.401244   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:55.438249   71146 cri.go:89] found id: "5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:55.438276   71146 cri.go:89] found id: ""
	I0717 02:00:55.438286   71146 logs.go:276] 1 containers: [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060]
	I0717 02:00:55.438344   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.442967   71146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:55.443048   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:55.484173   71146 cri.go:89] found id: ""
	I0717 02:00:55.484197   71146 logs.go:276] 0 containers: []
	W0717 02:00:55.484205   71146 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:55.484210   71146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 02:00:55.484288   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 02:00:55.525757   71146 cri.go:89] found id: "7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:55.525780   71146 cri.go:89] found id: "51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:55.525784   71146 cri.go:89] found id: ""
	I0717 02:00:55.525790   71146 logs.go:276] 2 containers: [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20]
	I0717 02:00:55.525842   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.530253   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.534253   71146 logs.go:123] Gathering logs for etcd [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787] ...
	I0717 02:00:55.534275   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:55.578993   71146 logs.go:123] Gathering logs for kube-proxy [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de] ...
	I0717 02:00:55.579018   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:55.622746   71146 logs.go:123] Gathering logs for storage-provisioner [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465] ...
	I0717 02:00:55.622771   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:55.660900   71146 logs.go:123] Gathering logs for storage-provisioner [51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20] ...
	I0717 02:00:55.660931   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:55.709803   71146 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:55.709833   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:56.092339   71146 logs.go:123] Gathering logs for kube-scheduler [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745] ...
	I0717 02:00:56.092390   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:56.130951   71146 logs.go:123] Gathering logs for kube-controller-manager [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060] ...
	I0717 02:00:56.130976   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:56.186113   71146 logs.go:123] Gathering logs for container status ...
	I0717 02:00:56.186152   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:56.229794   71146 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:56.229839   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:56.285798   71146 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:56.285845   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:56.300391   71146 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:56.300421   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 02:00:56.425621   71146 logs.go:123] Gathering logs for kube-apiserver [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509] ...
	I0717 02:00:56.425653   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:56.478853   71146 logs.go:123] Gathering logs for coredns [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783] ...
	I0717 02:00:56.478882   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:59.026000   71146 system_pods.go:59] 8 kube-system pods found
	I0717 02:00:59.026028   71146 system_pods.go:61] "coredns-7db6d8ff4d-wcw97" [0dd50538-f54d-43f1-bd8a-b9d3131c13f7] Running
	I0717 02:00:59.026033   71146 system_pods.go:61] "etcd-embed-certs-940222" [80a0a87b-4b27-4940-b86b-6fda4a9c5168] Running
	I0717 02:00:59.026036   71146 system_pods.go:61] "kube-apiserver-embed-certs-940222" [566417b4-efef-46b2-8826-e15b8559e35f] Running
	I0717 02:00:59.026039   71146 system_pods.go:61] "kube-controller-manager-embed-certs-940222" [0ec7f574-5ec3-401c-85a9-b9a9f6b7b979] Running
	I0717 02:00:59.026042   71146 system_pods.go:61] "kube-proxy-l58xk" [feae4e89-4900-4399-bd06-7d179280667d] Running
	I0717 02:00:59.026045   71146 system_pods.go:61] "kube-scheduler-embed-certs-940222" [d0e73061-6d3b-41fc-b2ed-e8c45e204d6a] Running
	I0717 02:00:59.026051   71146 system_pods.go:61] "metrics-server-569cc877fc-rhp7b" [07ffb1fa-240e-4c40-9ce4-93a1b51e179b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 02:00:59.026054   71146 system_pods.go:61] "storage-provisioner" [35aab5a5-6e1b-4572-aabe-a73fb1632252] Running
	I0717 02:00:59.026062   71146 system_pods.go:74] duration metric: took 3.844102201s to wait for pod list to return data ...
	I0717 02:00:59.026069   71146 default_sa.go:34] waiting for default service account to be created ...
	I0717 02:00:59.028810   71146 default_sa.go:45] found service account: "default"
	I0717 02:00:59.028831   71146 default_sa.go:55] duration metric: took 2.756364ms for default service account to be created ...
	I0717 02:00:59.028838   71146 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 02:00:59.036427   71146 system_pods.go:86] 8 kube-system pods found
	I0717 02:00:59.036457   71146 system_pods.go:89] "coredns-7db6d8ff4d-wcw97" [0dd50538-f54d-43f1-bd8a-b9d3131c13f7] Running
	I0717 02:00:59.036466   71146 system_pods.go:89] "etcd-embed-certs-940222" [80a0a87b-4b27-4940-b86b-6fda4a9c5168] Running
	I0717 02:00:59.036474   71146 system_pods.go:89] "kube-apiserver-embed-certs-940222" [566417b4-efef-46b2-8826-e15b8559e35f] Running
	I0717 02:00:59.036482   71146 system_pods.go:89] "kube-controller-manager-embed-certs-940222" [0ec7f574-5ec3-401c-85a9-b9a9f6b7b979] Running
	I0717 02:00:59.036489   71146 system_pods.go:89] "kube-proxy-l58xk" [feae4e89-4900-4399-bd06-7d179280667d] Running
	I0717 02:00:59.036499   71146 system_pods.go:89] "kube-scheduler-embed-certs-940222" [d0e73061-6d3b-41fc-b2ed-e8c45e204d6a] Running
	I0717 02:00:59.036509   71146 system_pods.go:89] "metrics-server-569cc877fc-rhp7b" [07ffb1fa-240e-4c40-9ce4-93a1b51e179b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 02:00:59.036519   71146 system_pods.go:89] "storage-provisioner" [35aab5a5-6e1b-4572-aabe-a73fb1632252] Running
	I0717 02:00:59.036532   71146 system_pods.go:126] duration metric: took 7.688074ms to wait for k8s-apps to be running ...
	I0717 02:00:59.036542   71146 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 02:00:59.036594   71146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:00:59.052023   71146 system_svc.go:56] duration metric: took 15.474441ms WaitForService to wait for kubelet
	I0717 02:00:59.052049   71146 kubeadm.go:582] duration metric: took 4m26.017816269s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 02:00:59.052073   71146 node_conditions.go:102] verifying NodePressure condition ...
	I0717 02:00:59.054763   71146 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 02:00:59.054784   71146 node_conditions.go:123] node cpu capacity is 2
	I0717 02:00:59.054795   71146 node_conditions.go:105] duration metric: took 2.714349ms to run NodePressure ...
	I0717 02:00:59.054805   71146 start.go:241] waiting for startup goroutines ...
	I0717 02:00:59.054811   71146 start.go:246] waiting for cluster config update ...
	I0717 02:00:59.054824   71146 start.go:255] writing updated cluster config ...
	I0717 02:00:59.055069   71146 ssh_runner.go:195] Run: rm -f paused
	I0717 02:00:59.101243   71146 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 02:00:59.103341   71146 out.go:177] * Done! kubectl is now configured to use "embed-certs-940222" cluster and "default" namespace by default
	I0717 02:00:56.432853   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:58.433589   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:00.932978   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:02.933289   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:05.433003   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:07.433470   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:09.433795   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:11.933112   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:14.433274   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:16.932102   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:18.932904   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:20.933023   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:23.433153   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:24.926132   71603 pod_ready.go:81] duration metric: took 4m0.000155151s for pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace to be "Ready" ...
	E0717 02:01:24.926165   71603 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 02:01:24.926185   71603 pod_ready.go:38] duration metric: took 4m39.916322674s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 02:01:24.926214   71603 kubeadm.go:597] duration metric: took 5m27.432375382s to restartPrimaryControlPlane
	W0717 02:01:24.926303   71603 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 02:01:24.926339   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 02:01:51.790820   71603 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.86445583s)
	I0717 02:01:51.790901   71603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:01:51.812968   71603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 02:01:51.835689   71603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 02:01:51.848832   71603 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 02:01:51.848859   71603 kubeadm.go:157] found existing configuration files:
	
	I0717 02:01:51.848911   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 02:01:51.876554   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 02:01:51.876620   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 02:01:51.891580   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 02:01:51.901279   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 02:01:51.901328   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 02:01:51.910994   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 02:01:51.920959   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 02:01:51.921020   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 02:01:51.931039   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 02:01:51.940496   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 02:01:51.940549   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 02:01:51.950455   71603 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 02:01:51.999712   71603 kubeadm.go:310] W0717 02:01:51.966911    3034 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 02:01:52.000573   71603 kubeadm.go:310] W0717 02:01:51.967749    3034 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 02:01:52.132406   71603 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 02:02:01.065590   71603 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0717 02:02:01.065670   71603 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 02:02:01.065761   71603 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 02:02:01.065909   71603 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 02:02:01.066049   71603 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0717 02:02:01.066124   71603 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 02:02:01.067867   71603 out.go:204]   - Generating certificates and keys ...
	I0717 02:02:01.067966   71603 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 02:02:01.068043   71603 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 02:02:01.068139   71603 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 02:02:01.068210   71603 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 02:02:01.068310   71603 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 02:02:01.068391   71603 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 02:02:01.068471   71603 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 02:02:01.068523   71603 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 02:02:01.068585   71603 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 02:02:01.068650   71603 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 02:02:01.068683   71603 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 02:02:01.068752   71603 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 02:02:01.068822   71603 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 02:02:01.068906   71603 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 02:02:01.068970   71603 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 02:02:01.069057   71603 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 02:02:01.069157   71603 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 02:02:01.069271   71603 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 02:02:01.069369   71603 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 02:02:01.070772   71603 out.go:204]   - Booting up control plane ...
	I0717 02:02:01.070883   71603 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 02:02:01.070981   71603 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 02:02:01.071088   71603 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 02:02:01.071206   71603 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 02:02:01.071311   71603 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 02:02:01.071365   71603 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 02:02:01.071497   71603 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 02:02:01.071557   71603 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 02:02:01.071608   71603 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.044041ms
	I0717 02:02:01.071663   71603 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 02:02:01.071725   71603 kubeadm.go:310] [api-check] The API server is healthy after 5.501034024s
	I0717 02:02:01.071823   71603 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 02:02:01.071926   71603 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 02:02:01.071975   71603 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 02:02:01.072168   71603 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-391501 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 02:02:01.072238   71603 kubeadm.go:310] [bootstrap-token] Using token: jhnlja.0tmcz1ce1lkie6op
	I0717 02:02:01.073965   71603 out.go:204]   - Configuring RBAC rules ...
	I0717 02:02:01.074091   71603 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 02:02:01.074223   71603 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 02:02:01.074390   71603 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 02:02:01.074597   71603 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 02:02:01.074766   71603 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 02:02:01.074887   71603 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 02:02:01.075058   71603 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 02:02:01.075126   71603 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 02:02:01.075195   71603 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 02:02:01.075204   71603 kubeadm.go:310] 
	I0717 02:02:01.075255   71603 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 02:02:01.075262   71603 kubeadm.go:310] 
	I0717 02:02:01.075372   71603 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 02:02:01.075386   71603 kubeadm.go:310] 
	I0717 02:02:01.075419   71603 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 02:02:01.075498   71603 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 02:02:01.075582   71603 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 02:02:01.075604   71603 kubeadm.go:310] 
	I0717 02:02:01.075687   71603 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 02:02:01.075697   71603 kubeadm.go:310] 
	I0717 02:02:01.075759   71603 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 02:02:01.075771   71603 kubeadm.go:310] 
	I0717 02:02:01.075834   71603 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 02:02:01.075936   71603 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 02:02:01.076034   71603 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 02:02:01.076043   71603 kubeadm.go:310] 
	I0717 02:02:01.076142   71603 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 02:02:01.076248   71603 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 02:02:01.076256   71603 kubeadm.go:310] 
	I0717 02:02:01.076379   71603 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jhnlja.0tmcz1ce1lkie6op \
	I0717 02:02:01.076541   71603 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c106fa53fe39228066a4d6d0a3d1523262a277fcc4b4de2aef480ed92843f134 \
	I0717 02:02:01.076582   71603 kubeadm.go:310] 	--control-plane 
	I0717 02:02:01.076600   71603 kubeadm.go:310] 
	I0717 02:02:01.076708   71603 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 02:02:01.076719   71603 kubeadm.go:310] 
	I0717 02:02:01.076819   71603 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jhnlja.0tmcz1ce1lkie6op \
	I0717 02:02:01.076955   71603 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c106fa53fe39228066a4d6d0a3d1523262a277fcc4b4de2aef480ed92843f134 
	I0717 02:02:01.076972   71603 cni.go:84] Creating CNI manager for ""
	I0717 02:02:01.076981   71603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 02:02:01.078801   71603 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 02:02:01.080151   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 02:02:01.093210   71603 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
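The "Configuring bridge CNI" step copies a small conflist into /etc/cni/net.d so the kubelet can set up pod networking with the bridge plugin. The sketch below writes a hypothetical minimal bridge conflist of the same shape; the actual 496-byte file minikube generates is not shown in the log, so the contents here are assumptions.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// A hypothetical minimal bridge CNI conflist; not the file minikube actually writes.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Writing under /etc requires root; the log does this step over SSH as root.
	dir := "/etc/cni/net.d"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	path := filepath.Join(dir, "1-k8s.conflist")
	if err := os.WriteFile(path, []byte(conflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("wrote", path, len(conflist), "bytes")
}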
	I0717 02:02:01.116656   71603 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 02:02:01.116712   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:01.116756   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-391501 minikube.k8s.io/updated_at=2024_07_17T02_02_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185 minikube.k8s.io/name=no-preload-391501 minikube.k8s.io/primary=true
	I0717 02:02:01.314407   71603 ops.go:34] apiserver oom_adj: -16
	I0717 02:02:01.314467   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:01.814693   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:02.315439   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:02.814676   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:03.314734   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:03.814702   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:04.315450   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:04.815112   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:05.315144   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:05.814712   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:05.921356   71603 kubeadm.go:1113] duration metric: took 4.80469441s to wait for elevateKubeSystemPrivileges
	I0717 02:02:05.921398   71603 kubeadm.go:394] duration metric: took 6m8.48278775s to StartCluster
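Note: the elevateKubeSystemPrivileges wait measured above is the retry loop of "kubectl get sa default" calls seen earlier; it simply polls until kube-controller-manager has created the "default" ServiceAccount. Roughly the same thing as a standalone loop, with the binary and kubeconfig paths taken from the log:

  # Poll until the default ServiceAccount exists in the new cluster
  until sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
    sleep 0.5
  done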
	I0717 02:02:05.921420   71603 settings.go:142] acquiring lock: {Name:mk284e98550e98148329d8d8958a44beebf5635c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 02:02:05.921508   71603 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 02:02:05.923844   71603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 02:02:05.924156   71603 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 02:02:05.924254   71603 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 02:02:05.924328   71603 addons.go:69] Setting storage-provisioner=true in profile "no-preload-391501"
	I0717 02:02:05.924357   71603 addons.go:234] Setting addon storage-provisioner=true in "no-preload-391501"
	I0717 02:02:05.924355   71603 addons.go:69] Setting default-storageclass=true in profile "no-preload-391501"
	I0717 02:02:05.924364   71603 addons.go:69] Setting metrics-server=true in profile "no-preload-391501"
	I0717 02:02:05.924391   71603 config.go:182] Loaded profile config "no-preload-391501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 02:02:05.924398   71603 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-391501"
	I0717 02:02:05.924404   71603 addons.go:234] Setting addon metrics-server=true in "no-preload-391501"
	W0717 02:02:05.924414   71603 addons.go:243] addon metrics-server should already be in state true
	W0717 02:02:05.924368   71603 addons.go:243] addon storage-provisioner should already be in state true
	I0717 02:02:05.924447   71603 host.go:66] Checking if "no-preload-391501" exists ...
	I0717 02:02:05.924460   71603 host.go:66] Checking if "no-preload-391501" exists ...
	I0717 02:02:05.924801   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.924827   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.924834   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.924850   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.924874   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.924912   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.926034   71603 out.go:177] * Verifying Kubernetes components...
	I0717 02:02:05.927316   71603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 02:02:05.941502   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43181
	I0717 02:02:05.941716   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37257
	I0717 02:02:05.941969   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.942299   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.942492   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.942509   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.942873   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.942902   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.942933   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.943175   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 02:02:05.943250   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.943555   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43291
	I0717 02:02:05.943829   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.943862   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.943996   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.944648   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.944672   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.945037   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.945577   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.945613   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.947058   71603 addons.go:234] Setting addon default-storageclass=true in "no-preload-391501"
	W0717 02:02:05.947076   71603 addons.go:243] addon default-storageclass should already be in state true
	I0717 02:02:05.947103   71603 host.go:66] Checking if "no-preload-391501" exists ...
	I0717 02:02:05.947419   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.947447   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.960183   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44589
	I0717 02:02:05.960662   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.961220   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.961249   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.961532   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.961777   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 02:02:05.962531   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40785
	I0717 02:02:05.963063   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.964115   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 02:02:05.964120   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.964146   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.965195   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.965777   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42381
	I0717 02:02:05.965802   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.965845   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.966114   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.966615   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.966635   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.966706   71603 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 02:02:05.967037   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.967228   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 02:02:05.968069   71603 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 02:02:05.968101   71603 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 02:02:05.968121   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 02:02:05.969421   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 02:02:05.971055   71603 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 02:02:05.972019   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.972494   71603 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 02:02:05.972515   71603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 02:02:05.972533   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 02:02:05.972622   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 02:02:05.972646   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.973122   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 02:02:05.973289   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 02:02:05.973415   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 02:02:05.973638   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 02:02:05.975702   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.976091   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 02:02:05.976110   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.976376   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 02:02:05.976553   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 02:02:05.976717   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 02:02:05.976866   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 02:02:05.983061   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44967
	I0717 02:02:05.983397   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.983851   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.983867   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.984150   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.984319   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 02:02:05.985757   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 02:02:05.985973   71603 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 02:02:05.985985   71603 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 02:02:05.986000   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 02:02:05.989238   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.989627   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 02:02:05.989647   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.989890   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 02:02:05.990056   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 02:02:05.990212   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 02:02:05.990412   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 02:02:06.238449   71603 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 02:02:06.272217   71603 node_ready.go:35] waiting up to 6m0s for node "no-preload-391501" to be "Ready" ...
	I0717 02:02:06.281012   71603 node_ready.go:49] node "no-preload-391501" has status "Ready":"True"
	I0717 02:02:06.281031   71603 node_ready.go:38] duration metric: took 8.787329ms for node "no-preload-391501" to be "Ready" ...
	I0717 02:02:06.281040   71603 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 02:02:06.297250   71603 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace to be "Ready" ...
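Note: from here the test polls each system-critical pod for a Ready condition. A hedged kubectl equivalent of the CoreDNS wait (label and timeout taken from the log; the context name matches the profile minikube configures):

  kubectl --context no-preload-391501 -n kube-system wait \
    --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m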
	I0717 02:02:06.386971   71603 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 02:02:06.386995   71603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 02:02:06.439822   71603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 02:02:06.460362   71603 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 02:02:06.460391   71603 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 02:02:06.468640   71603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 02:02:06.551454   71603 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 02:02:06.551482   71603 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 02:02:06.727518   71603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 02:02:07.338701   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.338778   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.338874   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.338900   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.339119   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.339217   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.339230   71603 main.go:141] libmachine: (no-preload-391501) DBG | Closing plugin on server side
	I0717 02:02:07.339273   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.339291   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.339301   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.339314   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.339240   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.339386   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.339575   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.339592   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.339648   71603 main.go:141] libmachine: (no-preload-391501) DBG | Closing plugin on server side
	I0717 02:02:07.339711   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.339736   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.357948   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.357966   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.358197   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.358212   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.694612   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.694690   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.695028   71603 main.go:141] libmachine: (no-preload-391501) DBG | Closing plugin on server side
	I0717 02:02:07.695109   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.695122   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.695148   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.695160   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.695404   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.695421   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.695432   71603 addons.go:475] Verifying addon metrics-server=true in "no-preload-391501"
	I0717 02:02:07.698298   71603 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
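Note: with the three addons enabled, the metrics-server addon registers an APIService in addition to its Deployment. An illustrative post-check, not part of this run:

  kubectl --context no-preload-391501 -n kube-system get deployment metrics-server
  kubectl --context no-preload-391501 get apiservice v1beta1.metrics.k8s.io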
	I0717 02:02:08.622411   71929 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 02:02:08.622531   71929 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 02:02:08.624111   71929 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 02:02:08.624168   71929 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 02:02:08.624265   71929 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 02:02:08.624391   71929 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 02:02:08.624526   71929 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 02:02:08.624604   71929 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 02:02:08.626394   71929 out.go:204]   - Generating certificates and keys ...
	I0717 02:02:08.626478   71929 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 02:02:08.626574   71929 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 02:02:08.626657   71929 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 02:02:08.626735   71929 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 02:02:08.626830   71929 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 02:02:08.626909   71929 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 02:02:08.627001   71929 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 02:02:08.627095   71929 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 02:02:08.627203   71929 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 02:02:08.627325   71929 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 02:02:08.627392   71929 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 02:02:08.627469   71929 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 02:02:08.627573   71929 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 02:02:08.627663   71929 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 02:02:08.627753   71929 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 02:02:08.627836   71929 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 02:02:08.627997   71929 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 02:02:08.628107   71929 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 02:02:08.628179   71929 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 02:02:08.628272   71929 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 02:02:08.630262   71929 out.go:204]   - Booting up control plane ...
	I0717 02:02:08.630372   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 02:02:08.630477   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 02:02:08.630594   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 02:02:08.630729   71929 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 02:02:08.630960   71929 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 02:02:08.631020   71929 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 02:02:08.631099   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.631293   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.631394   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.631648   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.631748   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.631925   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.632050   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.632253   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.632327   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.632528   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.632546   71929 kubeadm.go:310] 
	I0717 02:02:08.632611   71929 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 02:02:08.632671   71929 kubeadm.go:310] 		timed out waiting for the condition
	I0717 02:02:08.632689   71929 kubeadm.go:310] 
	I0717 02:02:08.632729   71929 kubeadm.go:310] 	This error is likely caused by:
	I0717 02:02:08.632772   71929 kubeadm.go:310] 		- The kubelet is not running
	I0717 02:02:08.632902   71929 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 02:02:08.632914   71929 kubeadm.go:310] 
	I0717 02:02:08.633001   71929 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 02:02:08.633030   71929 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 02:02:08.633075   71929 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 02:02:08.633092   71929 kubeadm.go:310] 
	I0717 02:02:08.633204   71929 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 02:02:08.633281   71929 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 02:02:08.633306   71929 kubeadm.go:310] 
	I0717 02:02:08.633450   71929 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 02:02:08.633535   71929 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 02:02:08.633597   71929 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 02:02:08.633668   71929 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 02:02:08.633697   71929 kubeadm.go:310] 
	W0717 02:02:08.633780   71929 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
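Note: this failed init is reset and retried below. The diagnostics suggested in the error text above, collected into one runnable snippet (CRI-O socket path as shown in the log):

  sudo systemctl status kubelet --no-pager
  sudo journalctl -xeu kubelet --no-pager | tail -n 50
  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause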
	
	I0717 02:02:08.633821   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 02:02:09.101394   71929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:02:09.119918   71929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 02:02:09.130974   71929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 02:02:09.131002   71929 kubeadm.go:157] found existing configuration files:
	
	I0717 02:02:09.131046   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 02:02:09.142720   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 02:02:09.142790   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 02:02:09.154990   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 02:02:09.166317   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 02:02:09.166379   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 02:02:09.176756   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 02:02:09.186639   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 02:02:09.186697   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 02:02:09.196778   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 02:02:09.206420   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 02:02:09.206469   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
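Note: the four grep/rm pairs above are minikube's stale-config cleanup: any /etc/kubernetes/*.conf that does not point at the expected control-plane endpoint is removed before kubeadm init is retried. The same pattern written as one loop (illustrative; endpoint and paths taken from the log):

  for f in admin kubelet controller-manager scheduler; do
    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f.conf" \
      || sudo rm -f "/etc/kubernetes/$f.conf"
  done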
	I0717 02:02:09.216325   71929 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 02:02:09.293311   71929 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 02:02:09.293457   71929 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 02:02:09.442386   71929 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 02:02:09.442594   71929 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 02:02:09.442736   71929 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 02:02:09.618387   71929 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 02:02:07.699645   71603 addons.go:510] duration metric: took 1.775390854s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 02:02:08.305410   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace has status "Ready":"False"
	I0717 02:02:09.620394   71929 out.go:204]   - Generating certificates and keys ...
	I0717 02:02:09.620496   71929 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 02:02:09.620593   71929 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 02:02:09.620691   71929 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 02:02:09.620791   71929 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 02:02:09.620909   71929 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 02:02:09.621004   71929 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 02:02:09.621117   71929 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 02:02:09.621364   71929 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 02:02:09.621778   71929 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 02:02:09.622072   71929 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 02:02:09.622135   71929 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 02:02:09.622225   71929 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 02:02:09.990964   71929 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 02:02:10.434990   71929 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 02:02:10.579785   71929 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 02:02:10.723319   71929 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 02:02:10.746923   71929 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 02:02:10.748370   71929 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 02:02:10.748460   71929 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 02:02:10.888855   71929 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 02:02:10.890727   71929 out.go:204]   - Booting up control plane ...
	I0717 02:02:10.890860   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 02:02:10.893530   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 02:02:10.894934   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 02:02:10.896825   71929 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 02:02:10.899127   71929 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 02:02:10.806868   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace has status "Ready":"False"
	I0717 02:02:12.804727   71603 pod_ready.go:92] pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:12.804754   71603 pod_ready.go:81] duration metric: took 6.507471417s for pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:12.804763   71603 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-tn5jv" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:12.812383   71603 pod_ready.go:92] pod "coredns-5cfdc65f69-tn5jv" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:12.812408   71603 pod_ready.go:81] duration metric: took 7.638012ms for pod "coredns-5cfdc65f69-tn5jv" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:12.812420   71603 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.320241   71603 pod_ready.go:92] pod "etcd-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:13.320263   71603 pod_ready.go:81] duration metric: took 507.836128ms for pod "etcd-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.320285   71603 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.326308   71603 pod_ready.go:92] pod "kube-apiserver-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:13.326332   71603 pod_ready.go:81] duration metric: took 6.041207ms for pod "kube-apiserver-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.326341   71603 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.331310   71603 pod_ready.go:92] pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:13.331338   71603 pod_ready.go:81] duration metric: took 4.988207ms for pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.331360   71603 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gl7th" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.602634   71603 pod_ready.go:92] pod "kube-proxy-gl7th" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:13.602677   71603 pod_ready.go:81] duration metric: took 271.310877ms for pod "kube-proxy-gl7th" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.602687   71603 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:14.002256   71603 pod_ready.go:92] pod "kube-scheduler-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:14.002282   71603 pod_ready.go:81] duration metric: took 399.588324ms for pod "kube-scheduler-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:14.002290   71603 pod_ready.go:38] duration metric: took 7.721240931s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 02:02:14.002306   71603 api_server.go:52] waiting for apiserver process to appear ...
	I0717 02:02:14.002355   71603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:02:14.016981   71603 api_server.go:72] duration metric: took 8.092789001s to wait for apiserver process to appear ...
	I0717 02:02:14.017007   71603 api_server.go:88] waiting for apiserver healthz status ...
	I0717 02:02:14.017026   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 02:02:14.022008   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 200:
	ok
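Note: the 200/ok above comes from the apiserver's health endpoint, which is readable without credentials on a default configuration. It can be probed by hand the same way (IP and port from the log; -k skips CA verification):

  curl -k https://192.168.61.174:8443/healthz
  curl -k "https://192.168.61.174:8443/readyz?verbose"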
	I0717 02:02:14.022992   71603 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 02:02:14.023010   71603 api_server.go:131] duration metric: took 5.997297ms to wait for apiserver health ...
	I0717 02:02:14.023016   71603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 02:02:14.204777   71603 system_pods.go:59] 9 kube-system pods found
	I0717 02:02:14.204806   71603 system_pods.go:61] "coredns-5cfdc65f69-5lstd" [71b74210-7395-4a48-8e1b-b49fb2faea43] Running
	I0717 02:02:14.204811   71603 system_pods.go:61] "coredns-5cfdc65f69-tn5jv" [482276d3-bfe2-4538-9dfe-a2a87a02182c] Running
	I0717 02:02:14.204816   71603 system_pods.go:61] "etcd-no-preload-391501" [c13d6752-3152-45e7-b2b9-a5275a4b42c5] Running
	I0717 02:02:14.204819   71603 system_pods.go:61] "kube-apiserver-no-preload-391501" [ba1d9920-dcaa-48d2-887b-f476d874d9ea] Running
	I0717 02:02:14.204823   71603 system_pods.go:61] "kube-controller-manager-no-preload-391501" [5e1e6aec-31b9-4b7c-a59b-f39a73b2e9a3] Running
	I0717 02:02:14.204826   71603 system_pods.go:61] "kube-proxy-gl7th" [320d9fae-f5b8-47bd-afc0-88e07e23157a] Running
	I0717 02:02:14.204829   71603 system_pods.go:61] "kube-scheduler-no-preload-391501" [a091b866-df88-4b9b-8893-bc6022704680] Running
	I0717 02:02:14.204836   71603 system_pods.go:61] "metrics-server-78fcd8795b-tnrht" [af70d47e-8e45-4e5d-bceb-e01a6c1851ff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 02:02:14.204839   71603 system_pods.go:61] "storage-provisioner" [742baa9b-d48e-4be9-8c33-64d42e1ff169] Running
	I0717 02:02:14.204847   71603 system_pods.go:74] duration metric: took 181.825073ms to wait for pod list to return data ...
	I0717 02:02:14.204854   71603 default_sa.go:34] waiting for default service account to be created ...
	I0717 02:02:14.402964   71603 default_sa.go:45] found service account: "default"
	I0717 02:02:14.402992   71603 default_sa.go:55] duration metric: took 198.131224ms for default service account to be created ...
	I0717 02:02:14.403005   71603 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 02:02:14.606371   71603 system_pods.go:86] 9 kube-system pods found
	I0717 02:02:14.606408   71603 system_pods.go:89] "coredns-5cfdc65f69-5lstd" [71b74210-7395-4a48-8e1b-b49fb2faea43] Running
	I0717 02:02:14.606418   71603 system_pods.go:89] "coredns-5cfdc65f69-tn5jv" [482276d3-bfe2-4538-9dfe-a2a87a02182c] Running
	I0717 02:02:14.606424   71603 system_pods.go:89] "etcd-no-preload-391501" [c13d6752-3152-45e7-b2b9-a5275a4b42c5] Running
	I0717 02:02:14.606430   71603 system_pods.go:89] "kube-apiserver-no-preload-391501" [ba1d9920-dcaa-48d2-887b-f476d874d9ea] Running
	I0717 02:02:14.606438   71603 system_pods.go:89] "kube-controller-manager-no-preload-391501" [5e1e6aec-31b9-4b7c-a59b-f39a73b2e9a3] Running
	I0717 02:02:14.606444   71603 system_pods.go:89] "kube-proxy-gl7th" [320d9fae-f5b8-47bd-afc0-88e07e23157a] Running
	I0717 02:02:14.606450   71603 system_pods.go:89] "kube-scheduler-no-preload-391501" [a091b866-df88-4b9b-8893-bc6022704680] Running
	I0717 02:02:14.606461   71603 system_pods.go:89] "metrics-server-78fcd8795b-tnrht" [af70d47e-8e45-4e5d-bceb-e01a6c1851ff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 02:02:14.606474   71603 system_pods.go:89] "storage-provisioner" [742baa9b-d48e-4be9-8c33-64d42e1ff169] Running
	I0717 02:02:14.606486   71603 system_pods.go:126] duration metric: took 203.473728ms to wait for k8s-apps to be running ...
	I0717 02:02:14.606497   71603 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 02:02:14.606568   71603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:02:14.622178   71603 system_svc.go:56] duration metric: took 15.671962ms WaitForService to wait for kubelet
	I0717 02:02:14.622211   71603 kubeadm.go:582] duration metric: took 8.698021688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 02:02:14.622234   71603 node_conditions.go:102] verifying NodePressure condition ...
	I0717 02:02:14.802282   71603 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 02:02:14.802309   71603 node_conditions.go:123] node cpu capacity is 2
	I0717 02:02:14.802319   71603 node_conditions.go:105] duration metric: took 180.080727ms to run NodePressure ...
	I0717 02:02:14.802330   71603 start.go:241] waiting for startup goroutines ...
	I0717 02:02:14.802337   71603 start.go:246] waiting for cluster config update ...
	I0717 02:02:14.802345   71603 start.go:255] writing updated cluster config ...
	I0717 02:02:14.802613   71603 ssh_runner.go:195] Run: rm -f paused
	I0717 02:02:14.848725   71603 start.go:600] kubectl: 1.30.2, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0717 02:02:14.850965   71603 out.go:177] * Done! kubectl is now configured to use "no-preload-391501" cluster and "default" namespace by default
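Note: the "minor skew: 1" reported above refers to the one-minor-version difference between the host kubectl (1.30.2) and the cluster (1.31.0-beta.0), which is within kubectl's supported +/-1 minor-version skew. A quick confirmation from the host:

  kubectl --context no-preload-391501 version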
	I0717 02:02:50.900829   71929 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 02:02:50.901350   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:50.901626   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:55.902558   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:55.902805   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:03:05.903753   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:03:05.904033   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:03:25.905383   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:03:25.905597   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:04:05.906576   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:04:05.906960   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:04:05.906992   71929 kubeadm.go:310] 
	I0717 02:04:05.907049   71929 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 02:04:05.907133   71929 kubeadm.go:310] 		timed out waiting for the condition
	I0717 02:04:05.907182   71929 kubeadm.go:310] 
	I0717 02:04:05.907252   71929 kubeadm.go:310] 	This error is likely caused by:
	I0717 02:04:05.907339   71929 kubeadm.go:310] 		- The kubelet is not running
	I0717 02:04:05.907516   71929 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 02:04:05.907529   71929 kubeadm.go:310] 
	I0717 02:04:05.907661   71929 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 02:04:05.907699   71929 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 02:04:05.907743   71929 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 02:04:05.907751   71929 kubeadm.go:310] 
	I0717 02:04:05.907907   71929 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 02:04:05.908043   71929 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 02:04:05.908053   71929 kubeadm.go:310] 
	I0717 02:04:05.908221   71929 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 02:04:05.908435   71929 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 02:04:05.908619   71929 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 02:04:05.908738   71929 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 02:04:05.908788   71929 kubeadm.go:310] 
	I0717 02:04:05.909079   71929 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 02:04:05.909286   71929 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 02:04:05.909452   71929 kubeadm.go:394] duration metric: took 7m58.01930975s to StartCluster
	I0717 02:04:05.909455   71929 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 02:04:05.909494   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:04:05.909552   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:04:05.952911   71929 cri.go:89] found id: ""
	I0717 02:04:05.952937   71929 logs.go:276] 0 containers: []
	W0717 02:04:05.952949   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 02:04:05.952957   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:04:05.953026   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:04:05.988490   71929 cri.go:89] found id: ""
	I0717 02:04:05.988518   71929 logs.go:276] 0 containers: []
	W0717 02:04:05.988529   71929 logs.go:278] No container was found matching "etcd"
	I0717 02:04:05.988537   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:04:05.988593   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:04:06.025228   71929 cri.go:89] found id: ""
	I0717 02:04:06.025259   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.025269   71929 logs.go:278] No container was found matching "coredns"
	I0717 02:04:06.025277   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:04:06.025342   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:04:06.060563   71929 cri.go:89] found id: ""
	I0717 02:04:06.060589   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.060599   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 02:04:06.060604   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:04:06.060660   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:04:06.095051   71929 cri.go:89] found id: ""
	I0717 02:04:06.095079   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.095091   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 02:04:06.095099   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:04:06.095150   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:04:06.131892   71929 cri.go:89] found id: ""
	I0717 02:04:06.131914   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.131921   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 02:04:06.131927   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:04:06.131973   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:04:06.168893   71929 cri.go:89] found id: ""
	I0717 02:04:06.168919   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.168930   71929 logs.go:278] No container was found matching "kindnet"
	I0717 02:04:06.168937   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 02:04:06.168995   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 02:04:06.206635   71929 cri.go:89] found id: ""
	I0717 02:04:06.206658   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.206668   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 02:04:06.206679   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:04:06.206693   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 02:04:06.308601   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:04:06.308624   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:04:06.308637   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:04:06.422081   71929 logs.go:123] Gathering logs for container status ...
	I0717 02:04:06.422116   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:04:06.467466   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 02:04:06.467496   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:04:06.521420   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 02:04:06.521457   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0717 02:04:06.535167   71929 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 02:04:06.535211   71929 out.go:239] * 
	W0717 02:04:06.535263   71929 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 02:04:06.535292   71929 out.go:239] * 
	W0717 02:04:06.536098   71929 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 02:04:06.539314   71929 out.go:177] 
	W0717 02:04:06.540504   71929 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 02:04:06.540557   71929 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 02:04:06.540579   71929 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 02:04:06.541888   71929 out.go:177] 
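	(Editor's note, not part of the recorded run: the suggestion above points at the kubelet cgroup driver. The hedged sketch below simply replays the commands already quoted in the kubeadm output and the minikube suggestion; the profile name default-k8s-diff-port-738184 is assumed from the node name in the CRI-O section that follows, and no other start flags from the original invocation are reproduced.)

	# Inspect the kubelet on the guest, as the kubeadm output suggests:
	minikube ssh -p default-k8s-diff-port-738184 "sudo systemctl status kubelet"
	minikube ssh -p default-k8s-diff-port-738184 "sudo journalctl -xeu kubelet"

	# List control-plane containers with crictl, as the kubeadm output suggests:
	minikube ssh -p default-k8s-diff-port-738184 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# Retry with the cgroup-driver override named in the suggestion:
	minikube start -p default-k8s-diff-port-738184 --extra-config=kubelet.cgroup-driver=systemd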
	
	
	==> CRI-O <==
	Jul 17 02:08:57 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:08:57.498714663Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182137498686165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e34ba707-efb1-418b-9f51-8930a85d808f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:08:57 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:08:57.499443353Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=233412e2-0a88-47a2-af59-eaf26fdf8425 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:08:57 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:08:57.499500191Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=233412e2-0a88-47a2-af59-eaf26fdf8425 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:08:57 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:08:57.499712728Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77,PodSandboxId:0c52e1c863ab787323d368c446319f3da163e86b52560c7ff6fa52e5afb4a4d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721181362216896995,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36904ec-ef3f-4aee-9276-fe1285e10876,},Annotations:map[string]string{io.kubernetes.container.hash: d368cf8d,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0be2dc67deee2847d02f59a5746918c97701700b6b27134a02a269cac1586bbf,PodSandboxId:d06c9b928557d4f3a4ca039be6b890e245c00b01b020554341b42c8239092606,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721181344267252116,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 593e2c6d-7dfd-4341-8cd6-a6555c12c9bb,},Annotations:map[string]string{io.kubernetes.container.hash: d8259a70,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7,PodSandboxId:59bd5ed033be981dc6c17d90ad14d07ac1cc3e31305111865bf170d7fe9a8ca0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181339228693585,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9w26c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 530f4d52-5fdc-47c4-8919-44430bf71e05,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd2b4ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013,PodSandboxId:6136ee902a2ec034a8aa4e7d8a8de84dfd8d0b1a2028d50f0efa468da89169c9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181339152224261,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-js7sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe3951c5
-d98d-4221-b71c-fc4f548b31d8,},Annotations:map[string]string{io.kubernetes.container.hash: d1a00847,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a,PodSandboxId:e32a832bea1dbd830204601544d55df47444e723a004aff84dddf1a3c6d36bb2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:172118
1331378855399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c4n94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97eee4e8-4f36-412f-9064-57515ab6e932,},Annotations:map[string]string{io.kubernetes.container.hash: bfdc42ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299,PodSandboxId:0c52e1c863ab787323d368c446319f3da163e86b52560c7ff6fa52e5afb4a4d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721181331344560383,Labels:map[
string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36904ec-ef3f-4aee-9276-fe1285e10876,},Annotations:map[string]string{io.kubernetes.container.hash: d368cf8d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3,PodSandboxId:428c5ac8a796afe703f57aa8f82b79783581de24e6b7d887b059fe7a9f899b4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721181327691830390,Labels:map[string]s
tring{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39962f541570a252c45496cd3715709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9,PodSandboxId:81b79012396cb1c0be9793763c4d7e2ed7856b09af28b1973e3a50079138b7e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721181327626095989,Labels:map[st
ring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3649afd50bb96c296085b2238c924507,},Annotations:map[string]string{io.kubernetes.container.hash: 912f1836,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82,PodSandboxId:8f45848e9ebeec988c932147cd63dd1dd530ad5cb1b5124794a956075fac8995,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721181327660667355,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 932f794983dfeb2dd7ccb21ae9543905,},Annotations:map[string]string{io.kubernetes.container.hash: ac850e24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8,PodSandboxId:a4cd441196dd3b633680466f7e7129bc15786c47db97d1de90a11a73f0582b8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721181327637093645,Labels:map[string]string{io.kubernete
s.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc195369d4468cfffebee038dc12bf0e,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=233412e2-0a88-47a2-af59-eaf26fdf8425 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:08:57 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:08:57.537775382Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8ad6e3f8-7d44-4d1f-851e-6d60b0dd89ce name=/runtime.v1.RuntimeService/Version
	Jul 17 02:08:57 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:08:57.537864378Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8ad6e3f8-7d44-4d1f-851e-6d60b0dd89ce name=/runtime.v1.RuntimeService/Version
	Jul 17 02:08:57 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:08:57.539205215Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=efcf2162-b0c9-4dbf-9e48-90134131e220 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:08:57 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:08:57.540300287Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182137540221059,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=efcf2162-b0c9-4dbf-9e48-90134131e220 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:08:57 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:08:57.540901678Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2f02194f-575b-4894-9e8e-598d1572dd01 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:08:57 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:08:57.540974522Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2f02194f-575b-4894-9e8e-598d1572dd01 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:08:57 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:08:57.541313546Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77,PodSandboxId:0c52e1c863ab787323d368c446319f3da163e86b52560c7ff6fa52e5afb4a4d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721181362216896995,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36904ec-ef3f-4aee-9276-fe1285e10876,},Annotations:map[string]string{io.kubernetes.container.hash: d368cf8d,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0be2dc67deee2847d02f59a5746918c97701700b6b27134a02a269cac1586bbf,PodSandboxId:d06c9b928557d4f3a4ca039be6b890e245c00b01b020554341b42c8239092606,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721181344267252116,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 593e2c6d-7dfd-4341-8cd6-a6555c12c9bb,},Annotations:map[string]string{io.kubernetes.container.hash: d8259a70,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7,PodSandboxId:59bd5ed033be981dc6c17d90ad14d07ac1cc3e31305111865bf170d7fe9a8ca0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181339228693585,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9w26c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 530f4d52-5fdc-47c4-8919-44430bf71e05,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd2b4ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013,PodSandboxId:6136ee902a2ec034a8aa4e7d8a8de84dfd8d0b1a2028d50f0efa468da89169c9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181339152224261,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-js7sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe3951c5
-d98d-4221-b71c-fc4f548b31d8,},Annotations:map[string]string{io.kubernetes.container.hash: d1a00847,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a,PodSandboxId:e32a832bea1dbd830204601544d55df47444e723a004aff84dddf1a3c6d36bb2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:172118
1331378855399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c4n94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97eee4e8-4f36-412f-9064-57515ab6e932,},Annotations:map[string]string{io.kubernetes.container.hash: bfdc42ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299,PodSandboxId:0c52e1c863ab787323d368c446319f3da163e86b52560c7ff6fa52e5afb4a4d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721181331344560383,Labels:map[
string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36904ec-ef3f-4aee-9276-fe1285e10876,},Annotations:map[string]string{io.kubernetes.container.hash: d368cf8d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3,PodSandboxId:428c5ac8a796afe703f57aa8f82b79783581de24e6b7d887b059fe7a9f899b4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721181327691830390,Labels:map[string]s
tring{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39962f541570a252c45496cd3715709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9,PodSandboxId:81b79012396cb1c0be9793763c4d7e2ed7856b09af28b1973e3a50079138b7e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721181327626095989,Labels:map[st
ring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3649afd50bb96c296085b2238c924507,},Annotations:map[string]string{io.kubernetes.container.hash: 912f1836,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82,PodSandboxId:8f45848e9ebeec988c932147cd63dd1dd530ad5cb1b5124794a956075fac8995,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721181327660667355,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 932f794983dfeb2dd7ccb21ae9543905,},Annotations:map[string]string{io.kubernetes.container.hash: ac850e24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8,PodSandboxId:a4cd441196dd3b633680466f7e7129bc15786c47db97d1de90a11a73f0582b8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721181327637093645,Labels:map[string]string{io.kubernete
s.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc195369d4468cfffebee038dc12bf0e,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2f02194f-575b-4894-9e8e-598d1572dd01 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:08:57 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:08:57.578725655Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5386469e-69a4-4ac4-8827-c00ef145883e name=/runtime.v1.RuntimeService/Version
	Jul 17 02:08:57 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:08:57.578819505Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5386469e-69a4-4ac4-8827-c00ef145883e name=/runtime.v1.RuntimeService/Version
	Jul 17 02:08:57 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:08:57.580063273Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d98218b0-bc67-41dd-a3e2-a5e70f490bfa name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:08:57 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:08:57.580608285Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182137580580766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d98218b0-bc67-41dd-a3e2-a5e70f490bfa name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:08:57 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:08:57.581224249Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c72752d-5568-4b05-838c-5f6ccf028e4d name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:08:57 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:08:57.581281805Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c72752d-5568-4b05-838c-5f6ccf028e4d name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:08:57 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:08:57.581541272Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77,PodSandboxId:0c52e1c863ab787323d368c446319f3da163e86b52560c7ff6fa52e5afb4a4d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721181362216896995,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36904ec-ef3f-4aee-9276-fe1285e10876,},Annotations:map[string]string{io.kubernetes.container.hash: d368cf8d,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0be2dc67deee2847d02f59a5746918c97701700b6b27134a02a269cac1586bbf,PodSandboxId:d06c9b928557d4f3a4ca039be6b890e245c00b01b020554341b42c8239092606,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721181344267252116,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 593e2c6d-7dfd-4341-8cd6-a6555c12c9bb,},Annotations:map[string]string{io.kubernetes.container.hash: d8259a70,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7,PodSandboxId:59bd5ed033be981dc6c17d90ad14d07ac1cc3e31305111865bf170d7fe9a8ca0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181339228693585,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9w26c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 530f4d52-5fdc-47c4-8919-44430bf71e05,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd2b4ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013,PodSandboxId:6136ee902a2ec034a8aa4e7d8a8de84dfd8d0b1a2028d50f0efa468da89169c9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181339152224261,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-js7sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe3951c5
-d98d-4221-b71c-fc4f548b31d8,},Annotations:map[string]string{io.kubernetes.container.hash: d1a00847,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a,PodSandboxId:e32a832bea1dbd830204601544d55df47444e723a004aff84dddf1a3c6d36bb2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:172118
1331378855399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c4n94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97eee4e8-4f36-412f-9064-57515ab6e932,},Annotations:map[string]string{io.kubernetes.container.hash: bfdc42ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299,PodSandboxId:0c52e1c863ab787323d368c446319f3da163e86b52560c7ff6fa52e5afb4a4d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721181331344560383,Labels:map[
string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36904ec-ef3f-4aee-9276-fe1285e10876,},Annotations:map[string]string{io.kubernetes.container.hash: d368cf8d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3,PodSandboxId:428c5ac8a796afe703f57aa8f82b79783581de24e6b7d887b059fe7a9f899b4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721181327691830390,Labels:map[string]s
tring{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39962f541570a252c45496cd3715709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9,PodSandboxId:81b79012396cb1c0be9793763c4d7e2ed7856b09af28b1973e3a50079138b7e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721181327626095989,Labels:map[st
ring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3649afd50bb96c296085b2238c924507,},Annotations:map[string]string{io.kubernetes.container.hash: 912f1836,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82,PodSandboxId:8f45848e9ebeec988c932147cd63dd1dd530ad5cb1b5124794a956075fac8995,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721181327660667355,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 932f794983dfeb2dd7ccb21ae9543905,},Annotations:map[string]string{io.kubernetes.container.hash: ac850e24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8,PodSandboxId:a4cd441196dd3b633680466f7e7129bc15786c47db97d1de90a11a73f0582b8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721181327637093645,Labels:map[string]string{io.kubernete
s.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc195369d4468cfffebee038dc12bf0e,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c72752d-5568-4b05-838c-5f6ccf028e4d name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:08:57 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:08:57.613783135Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=43092aea-5f7c-49eb-8057-8d54991a242e name=/runtime.v1.RuntimeService/Version
	Jul 17 02:08:57 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:08:57.613882144Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=43092aea-5f7c-49eb-8057-8d54991a242e name=/runtime.v1.RuntimeService/Version
	Jul 17 02:08:57 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:08:57.615916759Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=137405cf-8531-47d0-af13-470fa0a5e594 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:08:57 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:08:57.616439868Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182137616345733,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=137405cf-8531-47d0-af13-470fa0a5e594 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:08:57 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:08:57.619560300Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd518102-d55d-48ec-9824-83a49b4a4ef8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:08:57 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:08:57.619615278Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd518102-d55d-48ec-9824-83a49b4a4ef8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:08:57 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:08:57.619795830Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77,PodSandboxId:0c52e1c863ab787323d368c446319f3da163e86b52560c7ff6fa52e5afb4a4d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721181362216896995,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36904ec-ef3f-4aee-9276-fe1285e10876,},Annotations:map[string]string{io.kubernetes.container.hash: d368cf8d,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0be2dc67deee2847d02f59a5746918c97701700b6b27134a02a269cac1586bbf,PodSandboxId:d06c9b928557d4f3a4ca039be6b890e245c00b01b020554341b42c8239092606,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721181344267252116,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 593e2c6d-7dfd-4341-8cd6-a6555c12c9bb,},Annotations:map[string]string{io.kubernetes.container.hash: d8259a70,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7,PodSandboxId:59bd5ed033be981dc6c17d90ad14d07ac1cc3e31305111865bf170d7fe9a8ca0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181339228693585,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9w26c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 530f4d52-5fdc-47c4-8919-44430bf71e05,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd2b4ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013,PodSandboxId:6136ee902a2ec034a8aa4e7d8a8de84dfd8d0b1a2028d50f0efa468da89169c9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181339152224261,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-js7sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe3951c5
-d98d-4221-b71c-fc4f548b31d8,},Annotations:map[string]string{io.kubernetes.container.hash: d1a00847,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a,PodSandboxId:e32a832bea1dbd830204601544d55df47444e723a004aff84dddf1a3c6d36bb2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:172118
1331378855399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c4n94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97eee4e8-4f36-412f-9064-57515ab6e932,},Annotations:map[string]string{io.kubernetes.container.hash: bfdc42ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299,PodSandboxId:0c52e1c863ab787323d368c446319f3da163e86b52560c7ff6fa52e5afb4a4d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721181331344560383,Labels:map[
string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36904ec-ef3f-4aee-9276-fe1285e10876,},Annotations:map[string]string{io.kubernetes.container.hash: d368cf8d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3,PodSandboxId:428c5ac8a796afe703f57aa8f82b79783581de24e6b7d887b059fe7a9f899b4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721181327691830390,Labels:map[string]s
tring{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39962f541570a252c45496cd3715709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9,PodSandboxId:81b79012396cb1c0be9793763c4d7e2ed7856b09af28b1973e3a50079138b7e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721181327626095989,Labels:map[st
ring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3649afd50bb96c296085b2238c924507,},Annotations:map[string]string{io.kubernetes.container.hash: 912f1836,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82,PodSandboxId:8f45848e9ebeec988c932147cd63dd1dd530ad5cb1b5124794a956075fac8995,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721181327660667355,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 932f794983dfeb2dd7ccb21ae9543905,},Annotations:map[string]string{io.kubernetes.container.hash: ac850e24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8,PodSandboxId:a4cd441196dd3b633680466f7e7129bc15786c47db97d1de90a11a73f0582b8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721181327637093645,Labels:map[string]string{io.kubernete
s.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc195369d4468cfffebee038dc12bf0e,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dd518102-d55d-48ec-9824-83a49b4a4ef8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e7c80efcec351       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   0c52e1c863ab7       storage-provisioner
	0be2dc67deee2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   d06c9b928557d       busybox
	92644b17d028a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   59bd5ed033be9       coredns-7db6d8ff4d-9w26c
	4d44ae996265f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   6136ee902a2ec       coredns-7db6d8ff4d-js7sn
	6945ab02cbf2a       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      13 minutes ago      Running             kube-proxy                1                   e32a832bea1db       kube-proxy-c4n94
	abd3156233dd7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   0c52e1c863ab7       storage-provisioner
	e6b826ba73561       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      13 minutes ago      Running             kube-controller-manager   1                   428c5ac8a796a       kube-controller-manager-default-k8s-diff-port-738184
	3d43ec5825cbc       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      13 minutes ago      Running             kube-apiserver            1                   8f45848e9ebee       kube-apiserver-default-k8s-diff-port-738184
	1a749b1143a7a       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      13 minutes ago      Running             kube-scheduler            1                   a4cd441196dd3       kube-scheduler-default-k8s-diff-port-738184
	5430044adf294       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   81b79012396cb       etcd-default-k8s-diff-port-738184
	
	
	==> coredns [4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:39032 - 9353 "HINFO IN 4281169462580780465.3513493968747018561. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.007734339s
	
	
	==> coredns [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:53035 - 43803 "HINFO IN 5403295143789589699.7859562178537526355. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009121686s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-738184
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-738184
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=default-k8s-diff-port-738184
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T01_48_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:48:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-738184
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 02:08:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 02:06:13 +0000   Wed, 17 Jul 2024 01:47:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 02:06:13 +0000   Wed, 17 Jul 2024 01:47:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 02:06:13 +0000   Wed, 17 Jul 2024 01:47:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 02:06:13 +0000   Wed, 17 Jul 2024 01:55:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.170
	  Hostname:    default-k8s-diff-port-738184
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 55e69da725794cb286fe7c1138b473a3
	  System UUID:                55e69da7-2579-4cb2-86fe-7c1138b473a3
	  Boot ID:                    2a8dc260-2c7c-4ff1-bdbd-266033bdf9b5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7db6d8ff4d-9w26c                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 coredns-7db6d8ff4d-js7sn                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-default-k8s-diff-port-738184                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kube-apiserver-default-k8s-diff-port-738184             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-738184    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-c4n94                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-default-k8s-diff-port-738184             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-569cc877fc-gcjkt                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     20m                kubelet          Node default-k8s-diff-port-738184 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m                kubelet          Node default-k8s-diff-port-738184 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m                kubelet          Node default-k8s-diff-port-738184 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeReady                20m                kubelet          Node default-k8s-diff-port-738184 status is now: NodeReady
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-738184 event: Registered Node default-k8s-diff-port-738184 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-738184 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-738184 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-738184 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-738184 event: Registered Node default-k8s-diff-port-738184 in Controller
	
	
	==> dmesg <==
	[Jul17 01:55] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051153] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039974] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.514788] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.344055] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.571382] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.378452] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.061314] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063869] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.185618] systemd-fstab-generator[691]: Ignoring "noauto" option for root device
	[  +0.148120] systemd-fstab-generator[703]: Ignoring "noauto" option for root device
	[  +0.323272] systemd-fstab-generator[733]: Ignoring "noauto" option for root device
	[  +4.440004] systemd-fstab-generator[832]: Ignoring "noauto" option for root device
	[  +0.058020] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.021000] systemd-fstab-generator[954]: Ignoring "noauto" option for root device
	[  +4.580152] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.841397] systemd-fstab-generator[1586]: Ignoring "noauto" option for root device
	[  +2.886818] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.888100] kauditd_printk_skb: 55 callbacks suppressed
	
	
	==> etcd [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9] <==
	{"level":"info","ts":"2024-07-17T01:56:25.701084Z","caller":"traceutil/trace.go:171","msg":"trace[703464827] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-569cc877fc-gcjkt.17e2dd4ec9fc3e86; range_end:; response_count:1; response_revision:605; }","duration":"714.036449ms","start":"2024-07-17T01:56:24.987034Z","end":"2024-07-17T01:56:25.70107Z","steps":["trace[703464827] 'agreement among raft nodes before linearized reading'  (duration: 712.962976ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T01:56:25.701128Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T01:56:24.98703Z","time spent":"714.084633ms","remote":"127.0.0.1:51456","response type":"/etcdserverpb.KV/Range","request count":0,"request size":79,"response count":1,"response size":870,"request content":"key:\"/registry/events/kube-system/metrics-server-569cc877fc-gcjkt.17e2dd4ec9fc3e86\" "}
	{"level":"warn","ts":"2024-07-17T01:56:25.700244Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"713.29361ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-gcjkt\" ","response":"range_response_count:1 size:4249"}
	{"level":"info","ts":"2024-07-17T01:56:25.701326Z","caller":"traceutil/trace.go:171","msg":"trace[795043851] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-gcjkt; range_end:; response_count:1; response_revision:605; }","duration":"714.388803ms","start":"2024-07-17T01:56:24.986923Z","end":"2024-07-17T01:56:25.701312Z","steps":["trace[795043851] 'agreement among raft nodes before linearized reading'  (duration: 713.245344ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T01:56:25.70145Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T01:56:24.986905Z","time spent":"714.532963ms","remote":"127.0.0.1:51546","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4271,"request content":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-gcjkt\" "}
	{"level":"warn","ts":"2024-07-17T01:56:25.700269Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"291.948975ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T01:56:25.70159Z","caller":"traceutil/trace.go:171","msg":"trace[78793458] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:605; }","duration":"293.284273ms","start":"2024-07-17T01:56:25.408295Z","end":"2024-07-17T01:56:25.701579Z","steps":["trace[78793458] 'agreement among raft nodes before linearized reading'  (duration: 291.961161ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T01:56:25.700317Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"434.616045ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-gcjkt\" ","response":"range_response_count:1 size:4249"}
	{"level":"info","ts":"2024-07-17T01:56:25.701754Z","caller":"traceutil/trace.go:171","msg":"trace[153314986] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-gcjkt; range_end:; response_count:1; response_revision:605; }","duration":"436.076532ms","start":"2024-07-17T01:56:25.265668Z","end":"2024-07-17T01:56:25.701745Z","steps":["trace[153314986] 'agreement among raft nodes before linearized reading'  (duration: 434.611698ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T01:56:25.701786Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T01:56:25.26565Z","time spent":"436.123945ms","remote":"127.0.0.1:51546","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4271,"request content":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-gcjkt\" "}
	{"level":"warn","ts":"2024-07-17T01:56:26.716347Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"695.396771ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8305641285244617179 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-569cc877fc-gcjkt.17e2dd4ec9fc3e86\" mod_revision:574 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-gcjkt.17e2dd4ec9fc3e86\" value_size:738 lease:8305641285244616606 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-gcjkt.17e2dd4ec9fc3e86\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-17T01:56:26.716501Z","caller":"traceutil/trace.go:171","msg":"trace[2086914611] linearizableReadLoop","detail":"{readStateIndex:651; appliedIndex:650; }","duration":"1.00942398s","start":"2024-07-17T01:56:25.707065Z","end":"2024-07-17T01:56:26.716489Z","steps":["trace[2086914611] 'read index received'  (duration: 313.824381ms)","trace[2086914611] 'applied index is now lower than readState.Index'  (duration: 695.598513ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T01:56:26.716556Z","caller":"traceutil/trace.go:171","msg":"trace[1610276483] transaction","detail":"{read_only:false; response_revision:606; number_of_response:1; }","duration":"1.010031762s","start":"2024-07-17T01:56:25.706518Z","end":"2024-07-17T01:56:26.71655Z","steps":["trace[1610276483] 'process raft request'  (duration: 314.360949ms)","trace[1610276483] 'compare'  (duration: 695.214667ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T01:56:26.716602Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T01:56:25.706497Z","time spent":"1.01007453s","remote":"127.0.0.1:51456","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":833,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-569cc877fc-gcjkt.17e2dd4ec9fc3e86\" mod_revision:574 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-gcjkt.17e2dd4ec9fc3e86\" value_size:738 lease:8305641285244616606 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-gcjkt.17e2dd4ec9fc3e86\" > >"}
	{"level":"warn","ts":"2024-07-17T01:56:26.718005Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.010922243s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-738184\" ","response":"range_response_count:1 size:5802"}
	{"level":"info","ts":"2024-07-17T01:56:26.718109Z","caller":"traceutil/trace.go:171","msg":"trace[1211094792] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-738184; range_end:; response_count:1; response_revision:606; }","duration":"1.011060221s","start":"2024-07-17T01:56:25.70704Z","end":"2024-07-17T01:56:26.7181Z","steps":["trace[1211094792] 'agreement among raft nodes before linearized reading'  (duration: 1.009694735s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T01:56:26.718449Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"308.254554ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T01:56:26.71857Z","caller":"traceutil/trace.go:171","msg":"trace[1313638685] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:607; }","duration":"308.399448ms","start":"2024-07-17T01:56:26.410161Z","end":"2024-07-17T01:56:26.71856Z","steps":["trace[1313638685] 'agreement among raft nodes before linearized reading'  (duration: 308.174688ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T01:56:26.71866Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T01:56:26.410134Z","time spent":"308.516568ms","remote":"127.0.0.1:51396","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-07-17T01:56:26.718853Z","caller":"traceutil/trace.go:171","msg":"trace[707378877] transaction","detail":"{read_only:false; response_revision:607; number_of_response:1; }","duration":"1.007635692s","start":"2024-07-17T01:56:25.71121Z","end":"2024-07-17T01:56:26.718846Z","steps":["trace[707378877] 'process raft request'  (duration: 1.005585335s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T01:56:26.71894Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T01:56:25.711198Z","time spent":"1.007708404s","remote":"127.0.0.1:51546","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4278,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-569cc877fc-gcjkt\" mod_revision:594 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-569cc877fc-gcjkt\" value_size:4212 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-569cc877fc-gcjkt\" > >"}
	{"level":"warn","ts":"2024-07-17T01:56:26.72003Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T01:56:25.707028Z","time spent":"1.012988699s","remote":"127.0.0.1:51532","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":5824,"request content":"key:\"/registry/minions/default-k8s-diff-port-738184\" "}
	{"level":"info","ts":"2024-07-17T02:05:29.394435Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":815}
	{"level":"info","ts":"2024-07-17T02:05:29.404844Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":815,"took":"10.047148ms","hash":3591137703,"current-db-size-bytes":2297856,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2297856,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-07-17T02:05:29.404937Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3591137703,"revision":815,"compact-revision":-1}
	
	
	==> kernel <==
	 02:08:57 up 13 min,  0 users,  load average: 0.10, 0.10, 0.08
	Linux default-k8s-diff-port-738184 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82] <==
	I0717 02:03:31.815858       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:05:30.819167       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 02:05:30.819582       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0717 02:05:31.820463       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 02:05:31.820546       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 02:05:31.820553       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:05:31.820478       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 02:05:31.820704       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 02:05:31.821977       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:06:31.821772       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 02:06:31.821882       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 02:06:31.821895       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:06:31.822987       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 02:06:31.823129       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 02:06:31.823179       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:08:31.822093       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 02:08:31.822220       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 02:08:31.822236       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:08:31.823266       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 02:08:31.823350       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 02:08:31.823464       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3] <==
	I0717 02:03:14.466265       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:03:44.035230       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:03:44.474025       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:04:14.040525       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:04:14.480947       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:04:44.046971       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:04:44.489777       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:05:14.052644       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:05:14.498539       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:05:44.057819       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:05:44.508523       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:06:14.063336       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:06:14.515675       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:06:44.068745       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:06:44.524552       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 02:06:44.999056       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="287.384µs"
	I0717 02:06:56.998888       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="137.747µs"
	E0717 02:07:14.075515       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:07:14.532355       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:07:44.081123       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:07:44.541001       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:08:14.086439       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:08:14.549057       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:08:44.091661       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:08:44.557595       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a] <==
	I0717 01:55:31.538476       1 server_linux.go:69] "Using iptables proxy"
	I0717 01:55:31.548069       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.170"]
	I0717 01:55:31.583079       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 01:55:31.583110       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 01:55:31.583124       1 server_linux.go:165] "Using iptables Proxier"
	I0717 01:55:31.586434       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 01:55:31.586707       1 server.go:872] "Version info" version="v1.30.2"
	I0717 01:55:31.586882       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:55:31.588488       1 config.go:192] "Starting service config controller"
	I0717 01:55:31.588550       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 01:55:31.588604       1 config.go:101] "Starting endpoint slice config controller"
	I0717 01:55:31.588621       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 01:55:31.589974       1 config.go:319] "Starting node config controller"
	I0717 01:55:31.590018       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 01:55:31.689480       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 01:55:31.689592       1 shared_informer.go:320] Caches are synced for service config
	I0717 01:55:31.690146       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8] <==
	I0717 01:55:28.549346       1 serving.go:380] Generated self-signed cert in-memory
	W0717 01:55:30.859934       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 01:55:30.860043       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 01:55:30.860057       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 01:55:30.860063       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 01:55:30.881177       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0717 01:55:30.881219       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:55:30.884126       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 01:55:30.884198       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 01:55:30.884807       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0717 01:55:30.885155       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 01:55:30.985541       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 02:06:33 default-k8s-diff-port-738184 kubelet[961]: E0717 02:06:33.998495     961 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 17 02:06:33 default-k8s-diff-port-738184 kubelet[961]: E0717 02:06:33.998810     961 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 17 02:06:33 default-k8s-diff-port-738184 kubelet[961]: E0717 02:06:33.999152     961 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-94z56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathEx
pr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,Stdin
Once:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-gcjkt_kube-system(1859140e-a901-43c2-8c04-b4f8eb63e774): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 17 02:06:33 default-k8s-diff-port-738184 kubelet[961]: E0717 02:06:33.999291     961 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-gcjkt" podUID="1859140e-a901-43c2-8c04-b4f8eb63e774"
	Jul 17 02:06:44 default-k8s-diff-port-738184 kubelet[961]: E0717 02:06:44.983752     961 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gcjkt" podUID="1859140e-a901-43c2-8c04-b4f8eb63e774"
	Jul 17 02:06:56 default-k8s-diff-port-738184 kubelet[961]: E0717 02:06:56.984094     961 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gcjkt" podUID="1859140e-a901-43c2-8c04-b4f8eb63e774"
	Jul 17 02:07:08 default-k8s-diff-port-738184 kubelet[961]: E0717 02:07:08.984801     961 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gcjkt" podUID="1859140e-a901-43c2-8c04-b4f8eb63e774"
	Jul 17 02:07:22 default-k8s-diff-port-738184 kubelet[961]: E0717 02:07:22.984116     961 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gcjkt" podUID="1859140e-a901-43c2-8c04-b4f8eb63e774"
	Jul 17 02:07:27 default-k8s-diff-port-738184 kubelet[961]: E0717 02:07:27.002771     961 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:07:27 default-k8s-diff-port-738184 kubelet[961]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:07:27 default-k8s-diff-port-738184 kubelet[961]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:07:27 default-k8s-diff-port-738184 kubelet[961]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:07:27 default-k8s-diff-port-738184 kubelet[961]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:07:34 default-k8s-diff-port-738184 kubelet[961]: E0717 02:07:34.984518     961 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gcjkt" podUID="1859140e-a901-43c2-8c04-b4f8eb63e774"
	Jul 17 02:07:45 default-k8s-diff-port-738184 kubelet[961]: E0717 02:07:45.982822     961 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gcjkt" podUID="1859140e-a901-43c2-8c04-b4f8eb63e774"
	Jul 17 02:07:59 default-k8s-diff-port-738184 kubelet[961]: E0717 02:07:59.982855     961 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gcjkt" podUID="1859140e-a901-43c2-8c04-b4f8eb63e774"
	Jul 17 02:08:10 default-k8s-diff-port-738184 kubelet[961]: E0717 02:08:10.983104     961 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gcjkt" podUID="1859140e-a901-43c2-8c04-b4f8eb63e774"
	Jul 17 02:08:23 default-k8s-diff-port-738184 kubelet[961]: E0717 02:08:23.983335     961 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gcjkt" podUID="1859140e-a901-43c2-8c04-b4f8eb63e774"
	Jul 17 02:08:27 default-k8s-diff-port-738184 kubelet[961]: E0717 02:08:27.002142     961 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:08:27 default-k8s-diff-port-738184 kubelet[961]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:08:27 default-k8s-diff-port-738184 kubelet[961]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:08:27 default-k8s-diff-port-738184 kubelet[961]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:08:27 default-k8s-diff-port-738184 kubelet[961]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:08:36 default-k8s-diff-port-738184 kubelet[961]: E0717 02:08:36.984035     961 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gcjkt" podUID="1859140e-a901-43c2-8c04-b4f8eb63e774"
	Jul 17 02:08:49 default-k8s-diff-port-738184 kubelet[961]: E0717 02:08:49.983234     961 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gcjkt" podUID="1859140e-a901-43c2-8c04-b4f8eb63e774"
	
	
	==> storage-provisioner [abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299] <==
	I0717 01:55:31.447581       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0717 01:56:01.452583       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77] <==
	I0717 01:56:02.369509       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 01:56:02.384867       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 01:56:02.385109       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 01:56:02.400873       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 01:56:02.401823       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8b79946a-8182-4a23-9abd-d389f8d21444", APIVersion:"v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-738184_59d650a1-827b-4c8b-a09e-040700e3c482 became leader
	I0717 01:56:02.402182       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-738184_59d650a1-827b-4c8b-a09e-040700e3c482!
	I0717 01:56:02.504549       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-738184_59d650a1-827b-4c8b-a09e-040700e3c482!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-738184 -n default-k8s-diff-port-738184
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-738184 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-gcjkt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-738184 describe pod metrics-server-569cc877fc-gcjkt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-738184 describe pod metrics-server-569cc877fc-gcjkt: exit status 1 (60.011382ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-gcjkt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-738184 describe pod metrics-server-569cc877fc-gcjkt: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.15s)
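The post-mortem above reduces to two kubectl calls: list pods whose phase is not Running, then describe each one (the describe step returns NotFound here because the metrics-server pod was already gone by the time it ran). The following is a minimal illustrative Go sketch of that flow, not part of helpers_test.go; it assumes kubectl is on PATH and mirrors the exact flags shown in the log.

// postmortem.go: illustrative sketch of the non-running-pod check above (assumption: kubectl on PATH).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func describeNonRunningPods(kubeContext string) error {
	// List pods in all namespaces whose phase is not Running, names only,
	// mirroring: kubectl get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").CombinedOutput()
	if err != nil {
		return fmt.Errorf("listing non-running pods: %v: %s", err, out)
	}
	for _, pod := range strings.Fields(string(out)) {
		// Describe each pod; as in the report, this can exit non-zero with
		// NotFound if the pod has already been deleted.
		desc, descErr := exec.Command("kubectl", "--context", kubeContext,
			"describe", "pod", pod).CombinedOutput()
		fmt.Printf("--- %s ---\n%s\n", pod, desc)
		if descErr != nil {
			fmt.Printf("describe %s failed: %v\n", pod, descErr)
		}
	}
	return nil
}

func main() {
	if err := describeNonRunningPods("default-k8s-diff-port-738184"); err != nil {
		fmt.Println(err)
	}
}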

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0717 02:01:01.427746   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
E0717 02:01:03.459039   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/flannel-894370/client.crt: no such file or directory
E0717 02:01:58.313327   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/bridge-894370/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-940222 -n embed-certs-940222
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-17 02:09:59.621319322 +0000 UTC m=+6498.669200428
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
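The failure above is a 9m0s wait for a Running pod matching k8s-app=kubernetes-dashboard that ends with "context deadline exceeded". A minimal client-go sketch of that kind of bounded wait is shown below; it is illustrative only (not the harness's actual implementation), the kubeconfig path is taken from the KUBECONFIG value earlier in this report, and the namespace/selector come from the failing check.

// waitforpods.go: illustrative sketch of a label-selector wait with a deadline.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForRunningPod(kubeconfig, ns, selector string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	for {
		pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			// When the deadline passes, this surfaces as "context deadline exceeded",
			// matching the failure recorded above.
			return err
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(5 * time.Second): // poll interval; 5s is an arbitrary illustrative choice
		}
	}
}

func main() {
	err := waitForRunningPod("/home/jenkins/minikube-integration/19264-3908/kubeconfig",
		"kubernetes-dashboard", "k8s-app=kubernetes-dashboard", 9*time.Minute)
	fmt.Println("wait result:", err)
}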
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-940222 -n embed-certs-940222
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-940222 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-940222 logs -n 25: (2.124640971s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-894370 sudo cat                              | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo                                  | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo                                  | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo                                  | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo find                             | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo crio                             | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-894370                                       | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	| delete  | -p                                                     | disable-driver-mounts-255698 | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | disable-driver-mounts-255698                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:48 UTC |
	|         | default-k8s-diff-port-738184                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-940222            | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-940222                                  | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-738184  | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC | 17 Jul 24 01:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC |                     |
	|         | default-k8s-diff-port-738184                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-391501             | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC | 17 Jul 24 01:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-391501                                   | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-940222                 | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-901761        | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-940222                                  | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC | 17 Jul 24 02:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-738184       | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-391501                  | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:59 UTC |
	|         | default-k8s-diff-port-738184                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p no-preload-391501 --memory=2200                     | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 02:02 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-901761                              | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-901761             | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-901761                              | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 01:51:47
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 01:51:47.395737   71929 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:51:47.396000   71929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:51:47.396010   71929 out.go:304] Setting ErrFile to fd 2...
	I0717 01:51:47.396016   71929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:51:47.396184   71929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 01:51:47.396684   71929 out.go:298] Setting JSON to false
	I0717 01:51:47.397549   71929 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5649,"bootTime":1721175458,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:51:47.397606   71929 start.go:139] virtualization: kvm guest
	I0717 01:51:47.399758   71929 out.go:177] * [old-k8s-version-901761] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:51:47.400960   71929 notify.go:220] Checking for updates...
	I0717 01:51:47.400966   71929 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 01:51:47.402266   71929 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:51:47.403356   71929 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:51:47.404532   71929 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 01:51:47.405524   71929 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:51:47.406572   71929 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:51:47.407935   71929 config.go:182] Loaded profile config "old-k8s-version-901761": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 01:51:47.408358   71929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:51:47.408427   71929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:51:47.422931   71929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46821
	I0717 01:51:47.423315   71929 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:51:47.423809   71929 main.go:141] libmachine: Using API Version  1
	I0717 01:51:47.423831   71929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:51:47.424123   71929 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:51:47.424259   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:51:47.426227   71929 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0717 01:51:47.427500   71929 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:51:47.427770   71929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:51:47.427801   71929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:51:47.442080   71929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36301
	I0717 01:51:47.442438   71929 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:51:47.442901   71929 main.go:141] libmachine: Using API Version  1
	I0717 01:51:47.442924   71929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:51:47.443208   71929 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:51:47.443382   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:51:47.476327   71929 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 01:51:47.477607   71929 start.go:297] selected driver: kvm2
	I0717 01:51:47.477620   71929 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:51:47.477762   71929 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:51:47.478432   71929 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:51:47.478541   71929 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19264-3908/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:51:47.493611   71929 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:51:47.493967   71929 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:51:47.494039   71929 cni.go:84] Creating CNI manager for ""
	I0717 01:51:47.494056   71929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:51:47.494147   71929 start.go:340] cluster config:
	{Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:51:47.494271   71929 iso.go:125] acquiring lock: {Name:mk1e382ea32906eee794c9b90164432d2562b456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:51:47.496056   71929 out.go:177] * Starting "old-k8s-version-901761" primary control-plane node in "old-k8s-version-901761" cluster
	I0717 01:51:45.178864   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:51:47.497229   71929 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 01:51:47.497266   71929 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 01:51:47.497279   71929 cache.go:56] Caching tarball of preloaded images
	I0717 01:51:47.497368   71929 preload.go:172] Found /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 01:51:47.497379   71929 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0717 01:51:47.497484   71929 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/config.json ...
	I0717 01:51:47.497671   71929 start.go:360] acquireMachinesLock for old-k8s-version-901761: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:51:51.258826   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:51:54.330879   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:00.410811   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:03.482811   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:09.562828   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:12.634828   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:18.714910   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:21.786892   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:27.866863   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:30.938805   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:37.022827   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:40.090853   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:46.170839   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:49.242854   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:55.322824   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:58.394792   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:04.474811   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:07.546855   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:13.626861   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:16.698832   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:22.778828   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:25.850864   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:31.930814   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:35.002842   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:41.082839   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:44.154796   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:50.234823   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:53.306914   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:59.386835   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:02.458751   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:08.538853   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:11.610833   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:17.690816   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:20.762793   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:26.842837   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:29.914866   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:35.994838   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:39.066806   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:45.146846   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:48.218841   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:54.298823   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:57.370838   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:55:00.375050   71522 start.go:364] duration metric: took 3m54.700923144s to acquireMachinesLock for "default-k8s-diff-port-738184"
	I0717 01:55:00.375103   71522 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:55:00.375110   71522 fix.go:54] fixHost starting: 
	I0717 01:55:00.375500   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:00.375532   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:00.390583   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39651
	I0717 01:55:00.390957   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:00.391392   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:00.391412   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:00.391704   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:00.391927   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:00.392069   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:00.393467   71522 fix.go:112] recreateIfNeeded on default-k8s-diff-port-738184: state=Stopped err=<nil>
	I0717 01:55:00.393508   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	W0717 01:55:00.393658   71522 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:55:00.395826   71522 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-738184" ...
	I0717 01:55:00.397256   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Start
	I0717 01:55:00.397401   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Ensuring networks are active...
	I0717 01:55:00.398079   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Ensuring network default is active
	I0717 01:55:00.398390   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Ensuring network mk-default-k8s-diff-port-738184 is active
	I0717 01:55:00.398710   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Getting domain xml...
	I0717 01:55:00.399275   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Creating domain...
	I0717 01:55:00.372573   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:55:00.372621   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:55:00.372933   71146 buildroot.go:166] provisioning hostname "embed-certs-940222"
	I0717 01:55:00.372957   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:55:00.373131   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:55:00.374934   71146 machine.go:97] duration metric: took 4m37.428393808s to provisionDockerMachine
	I0717 01:55:00.374969   71146 fix.go:56] duration metric: took 4m37.449104762s for fixHost
	I0717 01:55:00.374974   71146 start.go:83] releasing machines lock for "embed-certs-940222", held for 4m37.449121677s
	W0717 01:55:00.374996   71146 start.go:714] error starting host: provision: host is not running
	W0717 01:55:00.375080   71146 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 01:55:00.375088   71146 start.go:729] Will try again in 5 seconds ...
	I0717 01:55:01.590292   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting to get IP...
	I0717 01:55:01.591187   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:01.591589   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:01.591657   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:01.591578   72583 retry.go:31] will retry after 266.165899ms: waiting for machine to come up
	I0717 01:55:01.859307   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:01.859724   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:01.859751   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:01.859695   72583 retry.go:31] will retry after 282.941451ms: waiting for machine to come up
	I0717 01:55:02.144389   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:02.144756   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:02.144787   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:02.144701   72583 retry.go:31] will retry after 327.203414ms: waiting for machine to come up
	I0717 01:55:02.473217   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:02.473681   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:02.473705   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:02.473606   72583 retry.go:31] will retry after 553.917043ms: waiting for machine to come up
	I0717 01:55:03.029379   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:03.029762   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:03.029783   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:03.029738   72583 retry.go:31] will retry after 617.312209ms: waiting for machine to come up
	I0717 01:55:03.648372   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:03.648701   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:03.648733   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:03.648670   72583 retry.go:31] will retry after 641.28503ms: waiting for machine to come up
	I0717 01:55:04.291493   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:04.291986   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:04.292019   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:04.291870   72583 retry.go:31] will retry after 1.133455116s: waiting for machine to come up
	I0717 01:55:05.426672   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:05.426943   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:05.426972   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:05.426892   72583 retry.go:31] will retry after 1.00384113s: waiting for machine to come up
	I0717 01:55:05.376907   71146 start.go:360] acquireMachinesLock for embed-certs-940222: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:55:06.432146   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:06.432502   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:06.432525   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:06.432477   72583 retry.go:31] will retry after 1.472142907s: waiting for machine to come up
	I0717 01:55:07.906974   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:07.907407   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:07.907437   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:07.907336   72583 retry.go:31] will retry after 1.775986179s: waiting for machine to come up
	I0717 01:55:09.685396   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:09.685792   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:09.685822   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:09.685756   72583 retry.go:31] will retry after 2.663700716s: waiting for machine to come up
	I0717 01:55:12.351616   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:12.351985   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:12.352017   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:12.351921   72583 retry.go:31] will retry after 2.409004894s: waiting for machine to come up
	I0717 01:55:14.763493   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:14.763859   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:14.763876   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:14.763828   72583 retry.go:31] will retry after 3.049843419s: waiting for machine to come up
	I0717 01:55:19.031713   71603 start.go:364] duration metric: took 4m8.751453112s to acquireMachinesLock for "no-preload-391501"
	I0717 01:55:19.031779   71603 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:55:19.031787   71603 fix.go:54] fixHost starting: 
	I0717 01:55:19.032306   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:19.032352   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:19.049376   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41829
	I0717 01:55:19.049877   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:19.050387   71603 main.go:141] libmachine: Using API Version  1
	I0717 01:55:19.050409   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:19.050752   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:19.050935   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:19.051104   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 01:55:19.052805   71603 fix.go:112] recreateIfNeeded on no-preload-391501: state=Stopped err=<nil>
	I0717 01:55:19.052832   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	W0717 01:55:19.052989   71603 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:55:19.056667   71603 out.go:177] * Restarting existing kvm2 VM for "no-preload-391501" ...
	I0717 01:55:19.058078   71603 main.go:141] libmachine: (no-preload-391501) Calling .Start
	I0717 01:55:19.058314   71603 main.go:141] libmachine: (no-preload-391501) Ensuring networks are active...
	I0717 01:55:19.059126   71603 main.go:141] libmachine: (no-preload-391501) Ensuring network default is active
	I0717 01:55:19.059466   71603 main.go:141] libmachine: (no-preload-391501) Ensuring network mk-no-preload-391501 is active
	I0717 01:55:19.059958   71603 main.go:141] libmachine: (no-preload-391501) Getting domain xml...
	I0717 01:55:19.060746   71603 main.go:141] libmachine: (no-preload-391501) Creating domain...
	I0717 01:55:17.816307   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.816746   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Found IP for machine: 192.168.39.170
	I0717 01:55:17.816765   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Reserving static IP address...
	I0717 01:55:17.816776   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has current primary IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.817337   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Reserved static IP address: 192.168.39.170
	I0717 01:55:17.817366   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for SSH to be available...
	I0717 01:55:17.817389   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-738184", mac: "52:54:00:e6:fe:fe", ip: "192.168.39.170"} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:17.817420   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | skip adding static IP to network mk-default-k8s-diff-port-738184 - found existing host DHCP lease matching {name: "default-k8s-diff-port-738184", mac: "52:54:00:e6:fe:fe", ip: "192.168.39.170"}
	I0717 01:55:17.817443   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Getting to WaitForSSH function...
	I0717 01:55:17.819693   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.820022   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:17.820056   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.820171   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Using SSH client type: external
	I0717 01:55:17.820203   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa (-rw-------)
	I0717 01:55:17.820245   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.170 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:55:17.820259   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | About to run SSH command:
	I0717 01:55:17.820280   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | exit 0
	I0717 01:55:17.942987   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | SSH cmd err, output: <nil>: 
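The empty `SSH cmd err, output` line above is the WaitForSSH probe succeeding: libmachine simply retries `exit 0` over SSH with the external client options shown until the guest answers. A minimal stand-alone version of that probe (IP and key path from the log; illustrative):

	# Poll until the guest accepts SSH, as WaitForSSH does (sketch)
	KEY=/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa
	until ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=10 \
	      -i "$KEY" docker@192.168.39.170 exit 0; do
	    sleep 2
	done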
	I0717 01:55:17.943370   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetConfigRaw
	I0717 01:55:17.943945   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetIP
	I0717 01:55:17.946638   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.946993   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:17.947021   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.947268   71522 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/config.json ...
	I0717 01:55:17.947479   71522 machine.go:94] provisionDockerMachine start ...
	I0717 01:55:17.947497   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:17.947732   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:17.950032   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.950367   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:17.950397   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.950489   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:17.950664   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:17.950827   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:17.950959   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:17.951108   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:17.951300   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:17.951311   71522 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:55:18.051147   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:55:18.051180   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetMachineName
	I0717 01:55:18.051421   71522 buildroot.go:166] provisioning hostname "default-k8s-diff-port-738184"
	I0717 01:55:18.051456   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetMachineName
	I0717 01:55:18.051655   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.054480   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.055024   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.055053   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.055262   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.055473   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.055643   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.055783   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.055928   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:18.056077   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:18.056089   71522 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-738184 && echo "default-k8s-diff-port-738184" | sudo tee /etc/hostname
	I0717 01:55:18.170268   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-738184
	
	I0717 01:55:18.170299   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.173037   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.173337   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.173369   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.173485   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.173673   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.173851   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.173957   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.174110   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:18.174322   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:18.174349   71522 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-738184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-738184/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-738184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:55:18.279963   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:55:18.279997   71522 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:55:18.280030   71522 buildroot.go:174] setting up certificates
	I0717 01:55:18.280042   71522 provision.go:84] configureAuth start
	I0717 01:55:18.280054   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetMachineName
	I0717 01:55:18.280393   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetIP
	I0717 01:55:18.282887   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.283201   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.283231   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.283370   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.285399   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.285662   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.285691   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.285795   71522 provision.go:143] copyHostCerts
	I0717 01:55:18.285865   71522 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:55:18.285884   71522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:55:18.285971   71522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:55:18.286084   71522 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:55:18.286095   71522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:55:18.286129   71522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:55:18.286205   71522 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:55:18.286214   71522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:55:18.286247   71522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:55:18.286313   71522 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-738184 san=[127.0.0.1 192.168.39.170 default-k8s-diff-port-738184 localhost minikube]
	I0717 01:55:18.386547   71522 provision.go:177] copyRemoteCerts
	I0717 01:55:18.386627   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:55:18.386658   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.388930   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.389292   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.389322   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.389465   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.389662   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.389804   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.389944   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:18.469031   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:55:18.493607   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0717 01:55:18.517024   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 01:55:18.539757   71522 provision.go:87] duration metric: took 259.702663ms to configureAuth
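configureAuth (the ~260ms step above) re-issues a server certificate against the shared minikube CA with the SANs listed in the log (127.0.0.1, 192.168.39.170, the profile name, localhost, minikube) and ships it to /etc/docker on the guest. minikube does this in Go; an equivalent openssl sketch, purely illustrative and not how the binary actually performs it:

	# Illustrative only: issue a server cert with the SANs logged above, signed by the minikube CA
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	    -subj "/O=jenkins.default-k8s-diff-port-738184" -out server.csr
	openssl x509 -req -in server.csr -CA certs/ca.pem -CAkey certs/ca-key.pem -CAcreateserial \
	    -days 365 -out server.pem \
	    -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.170,DNS:default-k8s-diff-port-738184,DNS:localhost,DNS:minikube")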
	I0717 01:55:18.539793   71522 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:55:18.540064   71522 config.go:182] Loaded profile config "default-k8s-diff-port-738184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:55:18.540178   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.542831   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.543174   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.543196   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.543388   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.543599   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.543843   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.544011   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.544172   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:18.544343   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:18.544362   71522 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:55:18.804633   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:55:18.804690   71522 machine.go:97] duration metric: took 857.197634ms to provisionDockerMachine
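The `printf %!s(MISSING)` lines above are a logging artifact; the command that was actually run (recoverable from the echoed SSH output two lines later) writes the CRI-O registry option and restarts the runtime:

	# What the mangled printf above actually does (content recovered from the SSH output)
	sudo mkdir -p /etc/sysconfig
	printf "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio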
	I0717 01:55:18.804706   71522 start.go:293] postStartSetup for "default-k8s-diff-port-738184" (driver="kvm2")
	I0717 01:55:18.804720   71522 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:55:18.804743   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:18.805049   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:55:18.805073   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.807835   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.808127   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.808147   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.808319   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.808497   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.808670   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.808823   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:18.889297   71522 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:55:18.893587   71522 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:55:18.893615   71522 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:55:18.893694   71522 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:55:18.893779   71522 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:55:18.893886   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:55:18.903319   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:18.927700   71522 start.go:296] duration metric: took 122.979492ms for postStartSetup
	I0717 01:55:18.927748   71522 fix.go:56] duration metric: took 18.552636525s for fixHost
	I0717 01:55:18.927775   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.930483   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.930768   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.930791   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.931004   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.931192   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.931361   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.931511   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.931677   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:18.931873   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:18.931887   71522 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:55:19.031515   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181319.004563133
	
	I0717 01:55:19.031541   71522 fix.go:216] guest clock: 1721181319.004563133
	I0717 01:55:19.031552   71522 fix.go:229] Guest: 2024-07-17 01:55:19.004563133 +0000 UTC Remote: 2024-07-17 01:55:18.927754613 +0000 UTC m=+253.390645105 (delta=76.80852ms)
	I0717 01:55:19.031611   71522 fix.go:200] guest clock delta is within tolerance: 76.80852ms
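The guest-clock check compares the guest's `date +%s.%N` (mangled above to `date +%!s(MISSING).%!N(MISSING)`) against the host's wall clock; here the guest is about 77ms ahead, inside tolerance, so no clock adjustment is needed. A by-hand version of the same comparison (sketch only):

	# Compare guest and host clocks (sketch)
	GUEST=$(ssh -o StrictHostKeyChecking=no -i "$SSH_KEY" docker@192.168.39.170 date +%s.%N)   # $SSH_KEY = the machine's id_rsa
	HOST=$(date +%s.%N)
	echo "guest-host delta: $(echo "$GUEST - $HOST" | bc)s"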
	I0717 01:55:19.031623   71522 start.go:83] releasing machines lock for "default-k8s-diff-port-738184", held for 18.656540342s
	I0717 01:55:19.031661   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:19.031940   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetIP
	I0717 01:55:19.034537   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.034881   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:19.034911   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.035036   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:19.035557   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:19.035750   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:19.035822   71522 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:55:19.035875   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:19.036000   71522 ssh_runner.go:195] Run: cat /version.json
	I0717 01:55:19.036027   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:19.038509   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.038860   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:19.038892   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.038935   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.038982   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:19.039156   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:19.039328   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:19.039361   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:19.039389   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.039488   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:19.039537   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:19.039702   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:19.039835   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:19.040047   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:19.140208   71522 ssh_runner.go:195] Run: systemctl --version
	I0717 01:55:19.146454   71522 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:55:19.293584   71522 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:55:19.300750   71522 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:55:19.300817   71522 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:55:19.321596   71522 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:55:19.321621   71522 start.go:495] detecting cgroup driver to use...
	I0717 01:55:19.321684   71522 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:55:19.337664   71522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:55:19.351856   71522 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:55:19.351922   71522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:55:19.366355   71522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:55:19.380735   71522 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:55:19.495916   71522 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:55:19.646426   71522 docker.go:233] disabling docker service ...
	I0717 01:55:19.646501   71522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:55:19.665764   71522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:55:19.683893   71522 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:55:19.814704   71522 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:55:19.958389   71522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:55:19.973223   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:55:19.992869   71522 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 01:55:19.992937   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.003696   71522 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:55:20.003762   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.014415   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.025303   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.036715   71522 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:55:20.047872   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.059666   71522 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.079479   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.092424   71522 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:55:20.103225   71522 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:55:20.103284   71522 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:55:20.120620   71522 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:55:20.136439   71522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:20.284796   71522 ssh_runner.go:195] Run: sudo systemctl restart crio
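With the `%!s(MISSING)` mangling undone, the runtime preparation logged above boils down to: stop and mask cri-dockerd and docker, point crictl at the CRI-O socket, pin the pause image and cgroup driver in 02-crio.conf, open unprivileged ports, load br_netfilter, enable IP forwarding, and restart CRI-O. A consolidated sketch of those guest-side commands (illustrative, not the exact invocations):

	# Consolidated sketch of the CRI-O preparation steps logged above
	sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
	sudo systemctl disable cri-docker.socket docker.socket
	sudo systemctl mask cri-docker.service docker.service
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# (the log also adds "net.ipv4.ip_unprivileged_port_start=0" to default_sysctls in the same file)
	sudo modprobe br_netfilter
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	sudo systemctl daemon-reload && sudo systemctl restart crio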
	I0717 01:55:20.427605   71522 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:55:20.427698   71522 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:55:20.433477   71522 start.go:563] Will wait 60s for crictl version
	I0717 01:55:20.433537   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:55:20.437399   71522 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:55:20.479192   71522 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:55:20.479289   71522 ssh_runner.go:195] Run: crio --version
	I0717 01:55:20.507655   71522 ssh_runner.go:195] Run: crio --version
	I0717 01:55:20.537084   71522 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 01:55:20.538435   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetIP
	I0717 01:55:20.541200   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:20.541493   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:20.541531   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:20.541772   71522 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 01:55:20.546261   71522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:55:20.559802   71522 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-738184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-738184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:55:20.559946   71522 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:55:20.560001   71522 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:55:20.381503   71603 main.go:141] libmachine: (no-preload-391501) Waiting to get IP...
	I0717 01:55:20.382632   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:20.383105   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:20.383210   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:20.383077   72724 retry.go:31] will retry after 193.198351ms: waiting for machine to come up
	I0717 01:55:20.577611   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:20.578117   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:20.578145   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:20.578067   72724 retry.go:31] will retry after 254.406992ms: waiting for machine to come up
	I0717 01:55:20.834633   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:20.835088   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:20.835116   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:20.835057   72724 retry.go:31] will retry after 459.446617ms: waiting for machine to come up
	I0717 01:55:21.295939   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:21.296384   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:21.296409   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:21.296343   72724 retry.go:31] will retry after 515.654185ms: waiting for machine to come up
	I0717 01:55:21.813613   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:21.814140   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:21.814178   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:21.814104   72724 retry.go:31] will retry after 652.322198ms: waiting for machine to come up
	I0717 01:55:22.468223   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:22.468858   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:22.468897   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:22.468774   72724 retry.go:31] will retry after 767.220835ms: waiting for machine to come up
	I0717 01:55:23.237341   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:23.237685   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:23.237716   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:23.237633   72724 retry.go:31] will retry after 1.083873631s: waiting for machine to come up
	I0717 01:55:24.323463   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:24.323983   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:24.324011   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:24.323934   72724 retry.go:31] will retry after 1.255667305s: waiting for machine to come up
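In parallel, the no-preload-391501 process (pid 71603) is polling libvirt for the freshly booted domain's DHCP lease, with an increasing backoff between attempts (193ms, 254ms, 459ms, ... as logged). The same wait can be done by hand by watching the lease table (names from the log; sketch only):

	# Wait for the restarted domain to pick up an IPv4 lease (sketch)
	until virsh --connect qemu:///system domifaddr no-preload-391501 | grep -q ipv4; do
	    sleep 2    # minikube uses the growing backoff shown in the retry.go lines above
	done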
	I0717 01:55:20.597329   71522 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 01:55:20.597409   71522 ssh_runner.go:195] Run: which lz4
	I0717 01:55:20.602100   71522 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 01:55:20.606863   71522 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:55:20.606900   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 01:55:22.053002   71522 crio.go:462] duration metric: took 1.450939378s to copy over tarball
	I0717 01:55:22.053071   71522 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:55:24.356349   71522 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.303245698s)
	I0717 01:55:24.356378   71522 crio.go:469] duration metric: took 2.303353381s to extract the tarball
	I0717 01:55:24.356385   71522 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 01:55:24.402866   71522 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:55:24.446681   71522 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:55:24.446709   71522 cache_images.go:84] Images are preloaded, skipping loading
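Because /preloaded.tar.lz4 was absent on the guest, the cached preload tarball (~395 MB) was copied over and unpacked into /var, after which `crictl images` confirms every required image is present and image loading is skipped. The unpack step on its own:

	# Unpack a minikube preload tarball into the container storage under /var (as run above)
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4
	sudo crictl images --output json   # sanity check: the k8s v1.30.2 images should now be listed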
	I0717 01:55:24.446720   71522 kubeadm.go:934] updating node { 192.168.39.170 8444 v1.30.2 crio true true} ...
	I0717 01:55:24.446844   71522 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-738184 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-738184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:55:24.446931   71522 ssh_runner.go:195] Run: crio config
	I0717 01:55:24.499717   71522 cni.go:84] Creating CNI manager for ""
	I0717 01:55:24.499744   71522 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:55:24.499759   71522 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:55:24.499787   71522 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.170 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-738184 NodeName:default-k8s-diff-port-738184 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:55:24.499965   71522 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.170
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-738184"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
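This generated config is written to /var/tmp/minikube/kubeadm.yaml.new (2,172 bytes, see the scp below) next to the kubelet unit and its drop-in. If one wanted to sanity-check such a config by hand before the cluster is brought up, kubeadm's dry-run mode is one way to do it (a sketch; this invocation is not part of the log):

	# Sketch: dry-run the generated kubeadm config (not an invocation from this log)
	sudo /var/lib/minikube/binaries/v1.30.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run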
	
	I0717 01:55:24.500039   71522 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 01:55:24.510488   71522 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:55:24.510568   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:55:24.520830   71522 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0717 01:55:24.538018   71522 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:55:24.556287   71522 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0717 01:55:24.574973   71522 ssh_runner.go:195] Run: grep 192.168.39.170	control-plane.minikube.internal$ /etc/hosts
	I0717 01:55:24.579058   71522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.170	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:55:24.591752   71522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:24.712285   71522 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:55:24.729387   71522 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184 for IP: 192.168.39.170
	I0717 01:55:24.729411   71522 certs.go:194] generating shared ca certs ...
	I0717 01:55:24.729432   71522 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:55:24.729596   71522 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:55:24.729650   71522 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:55:24.729662   71522 certs.go:256] generating profile certs ...
	I0717 01:55:24.729776   71522 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/client.key
	I0717 01:55:24.729847   71522 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/apiserver.key.44902a6f
	I0717 01:55:24.729907   71522 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/proxy-client.key
	I0717 01:55:24.730044   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:55:24.730086   71522 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:55:24.730099   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:55:24.730135   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:55:24.730183   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:55:24.730222   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:55:24.730277   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:24.731142   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:55:24.762240   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:55:24.788746   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:55:24.825379   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:55:24.853821   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 01:55:24.887105   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:55:24.910834   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:55:24.934566   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 01:55:24.959709   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:55:24.983722   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:55:25.007312   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:55:25.031576   71522 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:55:25.049348   71522 ssh_runner.go:195] Run: openssl version
	I0717 01:55:25.055410   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:55:25.066104   71522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:55:25.070616   71522 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:55:25.070675   71522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:55:25.076604   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:55:25.087284   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:55:25.098383   71522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:55:25.103262   71522 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:55:25.103331   71522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:55:25.109170   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:55:25.119940   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:55:25.130829   71522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:25.135659   71522 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:25.135734   71522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:25.141583   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
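The openssl/ln steps above follow the standard OpenSSL trust-store convention: each CA PEM gets a <subject-hash>.0 symlink under /etc/ssl/certs. A minimal Go sketch of the same idea, assuming the openssl binary is available; the PEM path is a placeholder, not taken from this log:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	pem := "/usr/share/ca-certificates/example-ca.pem" // placeholder path

    	// `openssl x509 -hash -noout` prints the subject-name hash used as the link name.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out))

    	// Create /etc/ssl/certs/<hash>.0 -> PEM only if it does not already exist.
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	if _, err := os.Lstat(link); os.IsNotExist(err) {
    		if err := os.Symlink(pem, link); err != nil {
    			panic(err)
    		}
    	}
    	fmt.Println("linked", link)
    }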
	I0717 01:55:25.152770   71522 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:55:25.157395   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:55:25.163543   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:55:25.169580   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:55:25.175754   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:55:25.181771   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:55:25.187935   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
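The `-checkend 86400` calls above ask whether each certificate will still be valid 24 hours from now. A rough, illustrative Go equivalent of that check using crypto/x509 (the file path is a placeholder):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM-encoded certificate at path expires
    // within the given duration, mirroring `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/example.crt", 24*time.Hour) // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }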
	I0717 01:55:25.193614   71522 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-738184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-738184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:55:25.193727   71522 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:55:25.193770   71522 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:55:25.230871   71522 cri.go:89] found id: ""
	I0717 01:55:25.230954   71522 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:55:25.241336   71522 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:55:25.241357   71522 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:55:25.241410   71522 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:55:25.251637   71522 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:55:25.253030   71522 kubeconfig.go:125] found "default-k8s-diff-port-738184" server: "https://192.168.39.170:8444"
	I0717 01:55:25.255926   71522 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:55:25.265878   71522 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.170
	I0717 01:55:25.265915   71522 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:55:25.265927   71522 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:55:25.265982   71522 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:55:25.305929   71522 cri.go:89] found id: ""
	I0717 01:55:25.306015   71522 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:55:25.322581   71522 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:55:25.332334   71522 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:55:25.332356   71522 kubeadm.go:157] found existing configuration files:
	
	I0717 01:55:25.332407   71522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 01:55:25.342132   71522 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:55:25.342193   71522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:55:25.351628   71522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 01:55:25.360765   71522 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:55:25.360833   71522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:55:25.370167   71522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 01:55:25.379057   71522 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:55:25.379124   71522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:55:25.389470   71522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 01:55:25.399142   71522 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:55:25.399210   71522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
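The grep/rm sequence above discards any kubeconfig that does not point at the expected control-plane endpoint, so the subsequent `kubeadm init phase kubeconfig` can regenerate it. A simplified Go sketch of that check, with the endpoint and file list written out as assumptions based on the commands in this log:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8444" // assumed expected server URL
    	confs := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, path := range confs {
    		data, err := os.ReadFile(path)
    		// Missing file or wrong server URL: remove it so kubeadm recreates it.
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			os.Remove(path)
    			fmt.Println("removed stale config:", path)
    		}
    	}
    }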
	I0717 01:55:25.409452   71522 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:55:25.421509   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:25.545698   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:25.580838   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:25.581295   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:25.581322   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:25.581247   72724 retry.go:31] will retry after 1.354947672s: waiting for machine to come up
	I0717 01:55:26.937260   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:26.937746   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:26.937774   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:26.937696   72724 retry.go:31] will retry after 1.818074273s: waiting for machine to come up
	I0717 01:55:28.758015   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:28.758489   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:28.758517   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:28.758449   72724 retry.go:31] will retry after 2.782465023s: waiting for machine to come up
	I0717 01:55:26.599380   71522 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.053644988s)
	I0717 01:55:26.599416   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:26.807765   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:26.878767   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:26.965940   71522 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:55:26.966023   71522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:55:27.466587   71522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:55:27.966138   71522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:55:27.983649   71522 api_server.go:72] duration metric: took 1.017709312s to wait for apiserver process to appear ...
	I0717 01:55:27.983678   71522 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:55:27.983701   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:27.984214   71522 api_server.go:269] stopped: https://192.168.39.170:8444/healthz: Get "https://192.168.39.170:8444/healthz": dial tcp 192.168.39.170:8444: connect: connection refused
	I0717 01:55:28.483780   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:30.862416   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:55:30.862464   71522 api_server.go:103] status: https://192.168.39.170:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:55:30.862479   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:30.869667   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:55:30.869718   71522 api_server.go:103] status: https://192.168.39.170:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:55:30.983899   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:30.988670   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:55:30.988704   71522 api_server.go:103] status: https://192.168.39.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:55:31.484233   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:31.488939   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:55:31.488978   71522 api_server.go:103] status: https://192.168.39.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:55:31.984611   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:31.988738   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 200:
	ok
	I0717 01:55:31.996182   71522 api_server.go:141] control plane version: v1.30.2
	I0717 01:55:31.996207   71522 api_server.go:131] duration metric: took 4.012523131s to wait for apiserver health ...
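The healthz loop above simply polls the apiserver endpoint until it returns 200; the early 403 and 500 responses are expected while the RBAC and priority-class bootstrap hooks finish. A minimal polling sketch under those assumptions (the URL is a placeholder, and certificate verification is skipped only to keep the example short; a real client would present the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	url := "https://192.0.2.10:8444/healthz" // placeholder endpoint
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Anonymous probe without the cluster CA; illustration only.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}

    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy")
    				return
    			}
    			fmt.Println("healthz returned", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for healthz")
    }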
	I0717 01:55:31.996216   71522 cni.go:84] Creating CNI manager for ""
	I0717 01:55:31.996222   71522 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:55:31.998122   71522 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:55:31.999536   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:55:32.010501   71522 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 01:55:32.030227   71522 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:55:32.039923   71522 system_pods.go:59] 9 kube-system pods found
	I0717 01:55:32.039954   71522 system_pods.go:61] "coredns-7db6d8ff4d-9w26c" [530f4d52-5fdc-47c4-8919-44430bf71e05] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:55:32.039988   71522 system_pods.go:61] "coredns-7db6d8ff4d-js7sn" [fe3951c5-d98d-4221-b71c-fc4f548b31d8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:55:32.039998   71522 system_pods.go:61] "etcd-default-k8s-diff-port-738184" [a08737dd-a140-4c63-bf0f-9d3527d49de0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 01:55:32.040003   71522 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-738184" [b24a4ca2-48f9-4603-b6e4-e3fb1ca58e40] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 01:55:32.040013   71522 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-738184" [dd62e27b-a3f1-4da6-b968-6be40743a8fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 01:55:32.040020   71522 system_pods.go:61] "kube-proxy-c4n94" [97eee4e8-4f36-412f-9064-57515ab6e932] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 01:55:32.040033   71522 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-738184" [eb87eec9-7533-4704-bd5a-7075d9d8c2f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 01:55:32.040041   71522 system_pods.go:61] "metrics-server-569cc877fc-gcjkt" [1859140e-a901-43c2-8c04-b4f8eb63e774] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:55:32.040046   71522 system_pods.go:61] "storage-provisioner" [b36904ec-ef3f-4aee-9276-fe1285e10876] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 01:55:32.040053   71522 system_pods.go:74] duration metric: took 9.802793ms to wait for pod list to return data ...
	I0717 01:55:32.040060   71522 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:55:32.043233   71522 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:55:32.043259   71522 node_conditions.go:123] node cpu capacity is 2
	I0717 01:55:32.043270   71522 node_conditions.go:105] duration metric: took 3.202451ms to run NodePressure ...
	I0717 01:55:32.043285   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:32.350948   71522 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 01:55:32.356119   71522 kubeadm.go:739] kubelet initialised
	I0717 01:55:32.356143   71522 kubeadm.go:740] duration metric: took 5.164025ms waiting for restarted kubelet to initialise ...
	I0717 01:55:32.356153   71522 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:55:32.361501   71522 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.366747   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.366770   71522 pod_ready.go:81] duration metric: took 5.246954ms for pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.366778   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.366785   71522 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.371049   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.371066   71522 pod_ready.go:81] duration metric: took 4.275157ms for pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.371073   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.371078   71522 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.375338   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.375361   71522 pod_ready.go:81] duration metric: took 4.27092ms for pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.375369   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.375379   71522 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.434545   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.434583   71522 pod_ready.go:81] duration metric: took 59.196717ms for pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.434593   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.434601   71522 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.836139   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.836178   71522 pod_ready.go:81] duration metric: took 401.568097ms for pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.836194   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.836212   71522 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c4n94" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:33.234032   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "kube-proxy-c4n94" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:33.234060   71522 pod_ready.go:81] duration metric: took 397.83937ms for pod "kube-proxy-c4n94" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:33.234071   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "kube-proxy-c4n94" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:33.234076   71522 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:33.633953   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:33.633981   71522 pod_ready.go:81] duration metric: took 399.893316ms for pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:33.633992   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:33.633998   71522 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:34.034511   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:34.034560   71522 pod_ready.go:81] duration metric: took 400.544281ms for pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:34.034574   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:34.034583   71522 pod_ready.go:38] duration metric: took 1.678420144s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
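The pod_ready loop above repeatedly checks each system-critical pod and skips pods hosted on a node that is not yet Ready. A condensed client-go sketch of a per-pod Ready check (this is not minikube's actual pod_ready.go logic; the kubeconfig path and pod name are placeholders):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether the pod's Ready condition is True.
    func isReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder kubeconfig
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Poll every 2s, for up to 4m, until the named kube-system pod reports Ready.
    	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "example-pod", metav1.GetOptions{}) // placeholder name
    			if err != nil {
    				return false, nil // transient errors: keep polling
    			}
    			return isReady(pod), nil
    		})
    	fmt.Println("wait finished, err =", err)
    }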
	I0717 01:55:34.034599   71522 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 01:55:34.049235   71522 ops.go:34] apiserver oom_adj: -16
	I0717 01:55:34.049261   71522 kubeadm.go:597] duration metric: took 8.807897214s to restartPrimaryControlPlane
	I0717 01:55:34.049272   71522 kubeadm.go:394] duration metric: took 8.855664434s to StartCluster
	I0717 01:55:34.049292   71522 settings.go:142] acquiring lock: {Name:mk284e98550e98148329d8d8958a44beebf5635c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:55:34.049374   71522 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:55:34.050992   71522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:55:34.051239   71522 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.170 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:55:34.051307   71522 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 01:55:34.051409   71522 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-738184"
	I0717 01:55:34.051454   71522 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-738184"
	W0717 01:55:34.051465   71522 addons.go:243] addon storage-provisioner should already be in state true
	I0717 01:55:34.051497   71522 host.go:66] Checking if "default-k8s-diff-port-738184" exists ...
	I0717 01:55:34.051511   71522 config.go:182] Loaded profile config "default-k8s-diff-port-738184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:55:34.051498   71522 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-738184"
	I0717 01:55:34.051502   71522 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-738184"
	I0717 01:55:34.051564   71522 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-738184"
	I0717 01:55:34.051587   71522 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-738184"
	W0717 01:55:34.051612   71522 addons.go:243] addon metrics-server should already be in state true
	I0717 01:55:34.051686   71522 host.go:66] Checking if "default-k8s-diff-port-738184" exists ...
	I0717 01:55:34.051803   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.051845   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.052097   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.052151   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.052331   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.052383   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.054788   71522 out.go:177] * Verifying Kubernetes components...
	I0717 01:55:34.056293   71522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:34.067345   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39845
	I0717 01:55:34.067345   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44423
	I0717 01:55:34.067821   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.067911   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.068370   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.068390   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.068515   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.068526   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43231
	I0717 01:55:34.068535   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.068709   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.068991   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.068997   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.069278   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.069320   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.069529   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.069560   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.069611   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.069629   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.069977   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.070184   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:34.074013   71522 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-738184"
	W0717 01:55:34.074036   71522 addons.go:243] addon default-storageclass should already be in state true
	I0717 01:55:34.074062   71522 host.go:66] Checking if "default-k8s-diff-port-738184" exists ...
	I0717 01:55:34.074422   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.074463   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.085256   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43497
	I0717 01:55:34.085694   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35251
	I0717 01:55:34.085716   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.086207   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.086378   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.086402   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.086785   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.086945   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:34.086947   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.086999   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.087327   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.087624   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:34.088695   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:34.089320   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:34.090932   71522 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 01:55:34.090932   71522 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:31.543587   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:31.544073   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:31.544102   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:31.544012   72724 retry.go:31] will retry after 2.898539616s: waiting for machine to come up
	I0717 01:55:34.444315   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:34.444828   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:34.444870   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:34.444790   72724 retry.go:31] will retry after 4.252719028s: waiting for machine to come up
	I0717 01:55:34.092892   71522 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:55:34.092910   71522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 01:55:34.092926   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:34.092985   71522 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 01:55:34.092993   71522 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 01:55:34.093003   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:34.095340   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35765
	I0717 01:55:34.095840   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.096397   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.096434   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.096567   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.096819   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.096979   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.097029   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:34.097058   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.097498   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.097536   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.097881   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:34.097897   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:34.097899   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:34.097923   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.098075   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:34.098105   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:34.098286   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:34.098320   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:34.098449   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:34.098461   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:34.113190   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43997
	I0717 01:55:34.113544   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.114033   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.114059   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.114375   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.114575   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:34.116332   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:34.116544   71522 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 01:55:34.116563   71522 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 01:55:34.116583   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:34.119693   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.119992   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:34.120017   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.120457   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:34.120722   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:34.120965   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:34.121652   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:34.247964   71522 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:55:34.266521   71522 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-738184" to be "Ready" ...
	I0717 01:55:34.370296   71522 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 01:55:34.370318   71522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 01:55:34.380102   71522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:55:34.394620   71522 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 01:55:34.394639   71522 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 01:55:34.409328   71522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 01:55:34.416653   71522 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:55:34.416684   71522 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 01:55:34.445296   71522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:55:35.605781   71522 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.196419762s)
	I0717 01:55:35.605843   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.605858   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.605854   71522 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.160520147s)
	I0717 01:55:35.605778   71522 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.225640358s)
	I0717 01:55:35.605929   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.605944   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.605988   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.606007   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.606293   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.606300   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.606309   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.606315   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.606319   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.606329   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.606333   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.606349   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.606357   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.606367   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.606371   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.606398   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.606410   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.606424   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.606640   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.607811   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.607852   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.607866   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.607874   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.607892   71522 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-738184"
	I0717 01:55:35.607815   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.607878   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.607829   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.607959   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.607842   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.613691   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.613717   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.614019   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.614025   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.614081   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.615871   71522 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0717 01:55:38.700025   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.700533   71603 main.go:141] libmachine: (no-preload-391501) Found IP for machine: 192.168.61.174
	I0717 01:55:38.700555   71603 main.go:141] libmachine: (no-preload-391501) Reserving static IP address...
	I0717 01:55:38.700572   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has current primary IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.701013   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "no-preload-391501", mac: "52:54:00:e6:6b:1b", ip: "192.168.61.174"} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.701033   71603 main.go:141] libmachine: (no-preload-391501) Reserved static IP address: 192.168.61.174
	I0717 01:55:38.701049   71603 main.go:141] libmachine: (no-preload-391501) DBG | skip adding static IP to network mk-no-preload-391501 - found existing host DHCP lease matching {name: "no-preload-391501", mac: "52:54:00:e6:6b:1b", ip: "192.168.61.174"}
	I0717 01:55:38.701064   71603 main.go:141] libmachine: (no-preload-391501) DBG | Getting to WaitForSSH function...
	I0717 01:55:38.701080   71603 main.go:141] libmachine: (no-preload-391501) Waiting for SSH to be available...
	I0717 01:55:38.703218   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.703577   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.703605   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.703755   71603 main.go:141] libmachine: (no-preload-391501) DBG | Using SSH client type: external
	I0717 01:55:38.703773   71603 main.go:141] libmachine: (no-preload-391501) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa (-rw-------)
	I0717 01:55:38.703791   71603 main.go:141] libmachine: (no-preload-391501) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:55:38.703809   71603 main.go:141] libmachine: (no-preload-391501) DBG | About to run SSH command:
	I0717 01:55:38.703817   71603 main.go:141] libmachine: (no-preload-391501) DBG | exit 0
	I0717 01:55:38.827046   71603 main.go:141] libmachine: (no-preload-391501) DBG | SSH cmd err, output: <nil>: 
	I0717 01:55:38.827413   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetConfigRaw
	I0717 01:55:38.828102   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetIP
	I0717 01:55:38.831229   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.831782   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.831814   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.832140   71603 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/config.json ...
	I0717 01:55:38.832347   71603 machine.go:94] provisionDockerMachine start ...
	I0717 01:55:38.832367   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:38.832574   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:38.835302   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.835710   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.835735   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.835954   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:38.836173   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:38.836345   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:38.836521   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:38.836691   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:38.836928   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:38.836947   71603 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:55:38.943173   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:55:38.943213   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetMachineName
	I0717 01:55:38.943491   71603 buildroot.go:166] provisioning hostname "no-preload-391501"
	I0717 01:55:38.943513   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetMachineName
	I0717 01:55:38.943725   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:38.946396   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.946872   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.946900   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.946980   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:38.947164   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:38.947339   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:38.947518   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:38.947695   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:38.947849   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:38.947869   71603 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-391501 && echo "no-preload-391501" | sudo tee /etc/hostname
	I0717 01:55:39.070382   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-391501
	
	I0717 01:55:39.070429   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.073539   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.073904   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.073941   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.074203   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:39.074426   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.074624   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.074880   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:39.075132   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:39.075348   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:39.075373   71603 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-391501' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-391501/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-391501' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:55:39.195604   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:55:39.195634   71603 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:55:39.195649   71603 buildroot.go:174] setting up certificates
	I0717 01:55:39.195656   71603 provision.go:84] configureAuth start
	I0717 01:55:39.195665   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetMachineName
	I0717 01:55:39.195952   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetIP
	I0717 01:55:39.198409   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.198792   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.198822   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.198996   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.201509   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.201870   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.201901   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.202078   71603 provision.go:143] copyHostCerts
	I0717 01:55:39.202153   71603 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:55:39.202166   71603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:55:39.202221   71603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:55:39.202313   71603 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:55:39.202320   71603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:55:39.202339   71603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:55:39.202387   71603 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:55:39.202394   71603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:55:39.202410   71603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:55:39.202456   71603 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.no-preload-391501 san=[127.0.0.1 192.168.61.174 localhost minikube no-preload-391501]
	I0717 01:55:39.550166   71603 provision.go:177] copyRemoteCerts
	I0717 01:55:39.550224   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:55:39.550249   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.552616   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.552990   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.553020   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.553135   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:39.553298   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.553460   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:39.553559   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 01:55:39.638467   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:55:39.664166   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 01:55:39.689416   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 01:55:39.714130   71603 provision.go:87] duration metric: took 518.463378ms to configureAuth
	I0717 01:55:39.714159   71603 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:55:39.714362   71603 config.go:182] Loaded profile config "no-preload-391501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:55:39.714440   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.717269   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.717694   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.717722   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.717880   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:39.718080   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.718240   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.718393   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:39.718621   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:39.718793   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:39.718809   71603 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:55:39.982066   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:55:39.982095   71603 machine.go:97] duration metric: took 1.149734372s to provisionDockerMachine
	I0717 01:55:39.982110   71603 start.go:293] postStartSetup for "no-preload-391501" (driver="kvm2")
	I0717 01:55:39.982127   71603 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:55:39.982147   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:39.982429   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:55:39.982445   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.984935   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.985232   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.985269   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.985372   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:39.985553   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.985793   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:39.986010   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 01:55:40.074439   71603 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:55:40.079515   71603 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:55:40.079541   71603 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:55:40.079617   71603 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:55:40.079708   71603 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:55:40.079831   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:55:40.090783   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:40.121212   71603 start.go:296] duration metric: took 139.087761ms for postStartSetup
	I0717 01:55:40.121257   71603 fix.go:56] duration metric: took 21.089468917s for fixHost
	I0717 01:55:40.121281   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:40.124208   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.124517   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:40.124545   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.124753   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:40.124940   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:40.125119   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:40.125269   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:40.125430   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:40.125626   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:40.125638   71603 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:55:40.239538   71929 start.go:364] duration metric: took 3m52.741834986s to acquireMachinesLock for "old-k8s-version-901761"
	I0717 01:55:40.239610   71929 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:55:40.239618   71929 fix.go:54] fixHost starting: 
	I0717 01:55:40.240021   71929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:40.240054   71929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:40.257464   71929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39993
	I0717 01:55:40.257866   71929 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:40.258287   71929 main.go:141] libmachine: Using API Version  1
	I0717 01:55:40.258311   71929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:40.258672   71929 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:40.258871   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:40.259041   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetState
	I0717 01:55:40.260529   71929 fix.go:112] recreateIfNeeded on old-k8s-version-901761: state=Stopped err=<nil>
	I0717 01:55:40.260568   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	W0717 01:55:40.260721   71929 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:55:40.262590   71929 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-901761" ...
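Because old-k8s-version-901761 was found in state=Stopped, the kvm2 driver only starts the existing libvirt domain and then polls DHCP for its address (the retry lines further below). A rough manual equivalent with virsh, using the domain, network, and MAC names taken from this log:

    virsh list --all                                  # old-k8s-version-901761 should show as "shut off"
    virsh start old-k8s-version-901761                # roughly what the driver's .Start call amounts to
    virsh net-dhcp-leases mk-old-k8s-version-901761   # wait for a lease for MAC 52:54:00:8f:84:01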
	I0717 01:55:35.617123   71522 addons.go:510] duration metric: took 1.565817066s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0717 01:55:36.270109   71522 node_ready.go:53] node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:38.270489   71522 node_ready.go:53] node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:40.270966   71522 node_ready.go:53] node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:40.239384   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181340.205508074
	
	I0717 01:55:40.239409   71603 fix.go:216] guest clock: 1721181340.205508074
	I0717 01:55:40.239419   71603 fix.go:229] Guest: 2024-07-17 01:55:40.205508074 +0000 UTC Remote: 2024-07-17 01:55:40.121261572 +0000 UTC m=+269.976034747 (delta=84.246502ms)
	I0717 01:55:40.239445   71603 fix.go:200] guest clock delta is within tolerance: 84.246502ms
	I0717 01:55:40.239453   71603 start.go:83] releasing machines lock for "no-preload-391501", held for 21.207695176s
	I0717 01:55:40.239486   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:40.239768   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetIP
	I0717 01:55:40.242534   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.242923   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:40.242956   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.243159   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:40.243649   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:40.243826   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:40.243924   71603 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:55:40.243975   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:40.244045   71603 ssh_runner.go:195] Run: cat /version.json
	I0717 01:55:40.244071   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:40.246599   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.246958   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:40.246984   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.247089   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:40.247153   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.247254   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:40.247401   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:40.247486   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:40.247510   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.247579   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 01:55:40.247669   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:40.247861   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:40.248031   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:40.248169   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 01:55:40.328497   71603 ssh_runner.go:195] Run: systemctl --version
	I0717 01:55:40.350092   71603 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:55:40.497644   71603 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:55:40.504094   71603 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:55:40.504164   71603 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:55:40.526752   71603 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:55:40.526777   71603 start.go:495] detecting cgroup driver to use...
	I0717 01:55:40.526842   71603 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:55:40.543537   71603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:55:40.557551   71603 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:55:40.557606   71603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:55:40.571755   71603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:55:40.585548   71603 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:55:40.702991   71603 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:55:40.849192   71603 docker.go:233] disabling docker service ...
	I0717 01:55:40.849276   71603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:55:40.864697   71603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:55:40.877940   71603 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:55:41.043588   71603 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:55:41.175359   71603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:55:41.191170   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:55:41.212440   71603 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 01:55:41.212508   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.224335   71603 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:55:41.224411   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.235721   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.247575   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.260018   71603 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:55:41.271526   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.285999   71603 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.307653   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.319272   71603 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:55:41.330544   71603 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:55:41.330637   71603 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:55:41.346698   71603 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:55:41.361983   71603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:41.490052   71603 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:55:41.639509   71603 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:55:41.639626   71603 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:55:41.646714   71603 start.go:563] Will wait 60s for crictl version
	I0717 01:55:41.646793   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:41.650900   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:55:41.688112   71603 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:55:41.688188   71603 ssh_runner.go:195] Run: crio --version
	I0717 01:55:41.717335   71603 ssh_runner.go:195] Run: crio --version
	I0717 01:55:41.750767   71603 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
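The sed and tee commands in the run above leave two small files on the guest before crio is restarted; a sketch of inspecting them, with the expected values reconstructed from those commands (not from the files themselves):

    # crictl is pointed at the CRI-O socket
    cat /etc/crictl.yaml
    # runtime-endpoint: unix:///var/run/crio/crio.sock

    # values patched into the CRI-O drop-in by the sed commands
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",

    sudo systemctl restart crio && sudo crictl version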
	I0717 01:55:40.263857   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .Start
	I0717 01:55:40.264019   71929 main.go:141] libmachine: (old-k8s-version-901761) Ensuring networks are active...
	I0717 01:55:40.264709   71929 main.go:141] libmachine: (old-k8s-version-901761) Ensuring network default is active
	I0717 01:55:40.265165   71929 main.go:141] libmachine: (old-k8s-version-901761) Ensuring network mk-old-k8s-version-901761 is active
	I0717 01:55:40.265581   71929 main.go:141] libmachine: (old-k8s-version-901761) Getting domain xml...
	I0717 01:55:40.266340   71929 main.go:141] libmachine: (old-k8s-version-901761) Creating domain...
	I0717 01:55:41.562582   71929 main.go:141] libmachine: (old-k8s-version-901761) Waiting to get IP...
	I0717 01:55:41.563329   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:41.563802   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:41.563890   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:41.563781   72905 retry.go:31] will retry after 216.264296ms: waiting for machine to come up
	I0717 01:55:41.781168   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:41.781662   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:41.781690   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:41.781629   72905 retry.go:31] will retry after 275.269814ms: waiting for machine to come up
	I0717 01:55:42.058127   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:42.058525   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:42.058564   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:42.058498   72905 retry.go:31] will retry after 348.024497ms: waiting for machine to come up
	I0717 01:55:41.752123   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetIP
	I0717 01:55:41.755114   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:41.755571   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:41.755602   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:41.755863   71603 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 01:55:41.760869   71603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:55:41.775414   71603 kubeadm.go:883] updating cluster {Name:no-preload-391501 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-391501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:55:41.775563   71603 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 01:55:41.775609   71603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:55:41.815115   71603 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0717 01:55:41.815141   71603 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 01:55:41.815207   71603 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:41.815241   71603 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:41.815279   71603 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:41.815290   71603 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:41.815207   71603 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:41.815304   71603 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0717 01:55:41.815239   71603 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:41.815258   71603 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:41.817894   71603 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:41.817939   71603 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:41.817892   71603 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:41.817888   71603 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0717 01:55:41.818033   71603 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:41.817891   71603 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:41.817900   71603 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:41.817978   71603 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:42.014545   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0717 01:55:42.030064   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:42.034517   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:42.123584   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:42.130122   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:42.134935   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:42.136170   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:42.173650   71603 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0717 01:55:42.173707   71603 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:42.173718   71603 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0717 01:55:42.173755   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.173767   71603 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:42.173820   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.219689   71603 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0717 01:55:42.219745   71603 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:42.219792   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.240802   71603 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0717 01:55:42.240847   71603 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:42.240907   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.251152   71603 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0717 01:55:42.251189   71603 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:42.251225   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.254790   71603 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0717 01:55:42.254849   71603 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:42.254886   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:42.254895   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.254916   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:42.254951   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:42.255006   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:42.257984   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:42.267440   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:42.395407   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0717 01:55:42.395471   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0717 01:55:42.395513   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0717 01:55:42.395522   71603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:55:42.395558   71603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:55:42.395582   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0717 01:55:42.395592   71603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:55:42.395663   71603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:55:42.397740   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0717 01:55:42.397813   71603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:55:42.420577   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0717 01:55:42.420602   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0717 01:55:42.420619   71603 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:55:42.420640   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0717 01:55:42.420662   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:55:42.420676   71603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:55:42.420705   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0717 01:55:42.420711   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0717 01:55:42.420738   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0717 01:55:43.737662   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:44.581683   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.160996964s)
	I0717 01:55:44.581730   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0717 01:55:44.581753   71603 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:55:44.581754   71603 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.161058602s)
	I0717 01:55:44.581788   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0717 01:55:44.581810   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:55:44.581858   71603 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 01:55:44.581900   71603 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:44.581928   71603 ssh_runner.go:195] Run: which crictl
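Since no preload tarball exists for v1.31.0-beta.0, the run above transfers each cached image tar into /var/lib/minikube/images and loads it with podman so CRI-O can see it. A hedged sketch of the same check by hand (the host cache path is the generic ~/.minikube location; this run uses /home/jenkins/minikube-integration/19264-3908/.minikube instead):

    # image tars minikube cached on the host for this Kubernetes version
    ls ~/.minikube/cache/images/amd64/registry.k8s.io/
    # on the guest: load one transferred tar and confirm the runtime sees it
    sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
    sudo crictl images | grep kube-proxy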
	I0717 01:55:41.270830   71522 node_ready.go:49] node "default-k8s-diff-port-738184" has status "Ready":"True"
	I0717 01:55:41.270853   71522 node_ready.go:38] duration metric: took 7.004304151s for node "default-k8s-diff-port-738184" to be "Ready" ...
	I0717 01:55:41.270868   71522 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:55:41.278587   71522 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.285210   71522 pod_ready.go:92] pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:41.285236   71522 pod_ready.go:81] duration metric: took 6.623347ms for pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.285250   71522 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.291110   71522 pod_ready.go:92] pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:41.291133   71522 pod_ready.go:81] duration metric: took 5.874809ms for pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.291145   71522 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.297614   71522 pod_ready.go:92] pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:41.297636   71522 pod_ready.go:81] duration metric: took 6.483783ms for pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.297645   71522 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.305307   71522 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:42.305335   71522 pod_ready.go:81] duration metric: took 1.007681338s for pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.305349   71522 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.472190   71522 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:42.472222   71522 pod_ready.go:81] duration metric: took 166.864153ms for pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.472236   71522 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c4n94" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.871756   71522 pod_ready.go:92] pod "kube-proxy-c4n94" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:42.871780   71522 pod_ready.go:81] duration metric: took 399.536375ms for pod "kube-proxy-c4n94" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.871789   71522 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:43.272858   71522 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:43.272895   71522 pod_ready.go:81] duration metric: took 401.098971ms for pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:43.272913   71522 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:45.281019   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:42.407813   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:42.408311   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:42.408346   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:42.408218   72905 retry.go:31] will retry after 388.717436ms: waiting for machine to come up
	I0717 01:55:42.798810   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:42.799378   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:42.799411   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:42.799323   72905 retry.go:31] will retry after 661.391346ms: waiting for machine to come up
	I0717 01:55:43.462189   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:43.462654   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:43.462686   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:43.462603   72905 retry.go:31] will retry after 636.142497ms: waiting for machine to come up
	I0717 01:55:44.100416   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:44.100852   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:44.100874   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:44.100808   72905 retry.go:31] will retry after 781.652918ms: waiting for machine to come up
	I0717 01:55:44.883650   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:44.884137   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:44.884170   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:44.884088   72905 retry.go:31] will retry after 1.238608293s: waiting for machine to come up
	I0717 01:55:46.124419   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:46.124911   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:46.124942   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:46.124854   72905 retry.go:31] will retry after 1.169011508s: waiting for machine to come up
	I0717 01:55:47.295202   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:47.295679   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:47.295715   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:47.295632   72905 retry.go:31] will retry after 1.723987128s: waiting for machine to come up
	I0717 01:55:47.004929   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.423090292s)
	I0717 01:55:47.004968   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0717 01:55:47.004990   71603 ssh_runner.go:235] Completed: which crictl: (2.423045276s)
	I0717 01:55:47.005021   71603 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:55:47.005053   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:47.005067   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:55:49.097703   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.092610651s)
	I0717 01:55:49.097747   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0717 01:55:49.097776   71603 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:55:49.097836   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:55:49.097776   71603 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.092700925s)
	I0717 01:55:49.097953   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 01:55:49.098050   71603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:55:47.781233   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:49.786039   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:49.020883   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:49.021363   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:49.021396   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:49.021279   72905 retry.go:31] will retry after 2.098481296s: waiting for machine to come up
	I0717 01:55:51.121693   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:51.122253   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:51.122282   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:51.122192   72905 retry.go:31] will retry after 2.624839429s: waiting for machine to come up
	I0717 01:55:50.560197   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.462322087s)
	I0717 01:55:50.560292   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0717 01:55:50.560323   71603 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:55:50.560252   71603 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.462175943s)
	I0717 01:55:50.560373   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:55:50.560388   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 01:55:53.630471   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.070071936s)
	I0717 01:55:53.630509   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0717 01:55:53.630529   71603 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:55:53.630604   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:55:52.280585   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:54.779606   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:53.748796   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:53.749348   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:53.749390   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:53.749298   72905 retry.go:31] will retry after 3.47930356s: waiting for machine to come up
	I0717 01:55:57.231901   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.232407   71929 main.go:141] libmachine: (old-k8s-version-901761) Found IP for machine: 192.168.50.44
	I0717 01:55:57.232437   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has current primary IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.232449   71929 main.go:141] libmachine: (old-k8s-version-901761) Reserving static IP address...
	I0717 01:55:57.232880   71929 main.go:141] libmachine: (old-k8s-version-901761) Reserved static IP address: 192.168.50.44
	I0717 01:55:57.232928   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "old-k8s-version-901761", mac: "52:54:00:8f:84:01", ip: "192.168.50.44"} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.232937   71929 main.go:141] libmachine: (old-k8s-version-901761) Waiting for SSH to be available...
	I0717 01:55:57.232952   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | skip adding static IP to network mk-old-k8s-version-901761 - found existing host DHCP lease matching {name: "old-k8s-version-901761", mac: "52:54:00:8f:84:01", ip: "192.168.50.44"}
	I0717 01:55:57.232971   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | Getting to WaitForSSH function...
	I0717 01:55:57.235007   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.235208   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.235242   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.235421   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | Using SSH client type: external
	I0717 01:55:57.235461   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa (-rw-------)
	I0717 01:55:57.235502   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.44 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:55:57.235516   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | About to run SSH command:
	I0717 01:55:57.235530   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | exit 0
	I0717 01:55:57.362619   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | SSH cmd err, output: <nil>: 
	I0717 01:55:57.363106   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetConfigRaw
	I0717 01:55:57.363760   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:55:57.366213   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.366636   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.366666   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.366958   71929 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/config.json ...
	I0717 01:55:57.367165   71929 machine.go:94] provisionDockerMachine start ...
	I0717 01:55:57.367188   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:57.367392   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.370017   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.370354   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.370371   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.370577   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:57.370765   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.370935   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.371084   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:57.371325   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:57.371506   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:57.371518   71929 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:55:58.531714   71146 start.go:364] duration metric: took 53.154741813s to acquireMachinesLock for "embed-certs-940222"
	I0717 01:55:58.531773   71146 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:55:58.531784   71146 fix.go:54] fixHost starting: 
	I0717 01:55:58.532189   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:58.532237   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:58.549026   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43799
	I0717 01:55:58.549491   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:58.550001   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:55:58.550025   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:58.550363   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:58.550536   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:55:58.550707   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:55:58.552236   71146 fix.go:112] recreateIfNeeded on embed-certs-940222: state=Stopped err=<nil>
	I0717 01:55:58.552259   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	W0717 01:55:58.552397   71146 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:55:58.554487   71146 out.go:177] * Restarting existing kvm2 VM for "embed-certs-940222" ...
	I0717 01:55:57.478893   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:55:57.478921   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetMachineName
	I0717 01:55:57.479123   71929 buildroot.go:166] provisioning hostname "old-k8s-version-901761"
	I0717 01:55:57.479142   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetMachineName
	I0717 01:55:57.479330   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.482163   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.482531   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.482579   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.482739   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:57.482937   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.483111   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.483264   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:57.483454   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:57.483632   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:57.483648   71929 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-901761 && echo "old-k8s-version-901761" | sudo tee /etc/hostname
	I0717 01:55:57.613409   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-901761
	
	I0717 01:55:57.613440   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.616228   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.616614   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.616655   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.616860   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:57.617040   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.617222   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.617383   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:57.617574   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:57.617778   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:57.617794   71929 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-901761' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-901761/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-901761' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:55:57.737648   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:55:57.737683   71929 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:55:57.737703   71929 buildroot.go:174] setting up certificates
	I0717 01:55:57.737711   71929 provision.go:84] configureAuth start
	I0717 01:55:57.737721   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetMachineName
	I0717 01:55:57.738028   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:55:57.741089   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.741532   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.741556   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.741741   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.744444   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.744917   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.744947   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.745111   71929 provision.go:143] copyHostCerts
	I0717 01:55:57.745185   71929 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:55:57.745202   71929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:55:57.745273   71929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:55:57.745393   71929 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:55:57.745405   71929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:55:57.745437   71929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:55:57.745517   71929 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:55:57.745527   71929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:55:57.745545   71929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:55:57.745602   71929 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-901761 san=[127.0.0.1 192.168.50.44 localhost minikube old-k8s-version-901761]
	I0717 01:55:57.830872   71929 provision.go:177] copyRemoteCerts
	I0717 01:55:57.830939   71929 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:55:57.830972   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.833463   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.833741   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.833777   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.833887   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:57.834083   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.834250   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:57.834403   71929 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:55:57.918346   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 01:55:57.954250   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:55:57.979770   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0717 01:55:58.005161   71929 provision.go:87] duration metric: took 267.436975ms to configureAuth
	I0717 01:55:58.005193   71929 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:55:58.005412   71929 config.go:182] Loaded profile config "old-k8s-version-901761": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 01:55:58.005493   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.008255   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.008626   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.008663   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.008833   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.009006   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.009170   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.009298   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.009464   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:58.009616   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:58.009639   71929 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:55:58.281081   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:55:58.281112   71929 machine.go:97] duration metric: took 913.933405ms to provisionDockerMachine
	I0717 01:55:58.281121   71929 start.go:293] postStartSetup for "old-k8s-version-901761" (driver="kvm2")
	I0717 01:55:58.281130   71929 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:55:58.281144   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.281497   71929 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:55:58.281533   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.284465   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.284812   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.284840   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.285023   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.285207   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.285441   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.285650   71929 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:55:58.377149   71929 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:55:58.381709   71929 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:55:58.381731   71929 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:55:58.381798   71929 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:55:58.381887   71929 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:55:58.381972   71929 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:55:58.392916   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:58.420677   71929 start.go:296] duration metric: took 139.542186ms for postStartSetup
	I0717 01:55:58.420721   71929 fix.go:56] duration metric: took 18.181102939s for fixHost
	I0717 01:55:58.420745   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.423582   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.423961   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.423989   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.424169   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.424372   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.424557   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.424693   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.424859   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:58.425040   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:58.425053   71929 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:55:58.531563   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181358.508735025
	
	I0717 01:55:58.531585   71929 fix.go:216] guest clock: 1721181358.508735025
	I0717 01:55:58.531594   71929 fix.go:229] Guest: 2024-07-17 01:55:58.508735025 +0000 UTC Remote: 2024-07-17 01:55:58.420726806 +0000 UTC m=+251.057483904 (delta=88.008219ms)
	I0717 01:55:58.531617   71929 fix.go:200] guest clock delta is within tolerance: 88.008219ms
	I0717 01:55:58.531624   71929 start.go:83] releasing machines lock for "old-k8s-version-901761", held for 18.292040224s
	I0717 01:55:58.531655   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.531981   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:55:58.534476   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.534967   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.534996   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.535258   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.535802   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.535990   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.536105   71929 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:55:58.536183   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.536244   71929 ssh_runner.go:195] Run: cat /version.json
	I0717 01:55:58.536275   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.539139   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.539401   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.539534   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.539560   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.539768   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.539815   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.539845   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.539968   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.540000   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.540116   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.540142   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.540243   71929 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:55:58.540332   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.540468   71929 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:55:58.628291   71929 ssh_runner.go:195] Run: systemctl --version
	I0717 01:55:58.656964   71929 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:55:58.806516   71929 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:55:58.815051   71929 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:55:58.815113   71929 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:55:58.838575   71929 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:55:58.838596   71929 start.go:495] detecting cgroup driver to use...
	I0717 01:55:58.838662   71929 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:55:58.855728   71929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:55:58.875221   71929 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:55:58.875285   71929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:55:58.889781   71929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:55:58.903832   71929 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:55:59.026815   71929 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:55:59.173879   71929 docker.go:233] disabling docker service ...
	I0717 01:55:59.173964   71929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:55:59.192906   71929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:55:59.208262   71929 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:55:59.368178   71929 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:55:59.500335   71929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:55:59.514795   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:55:59.535553   71929 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 01:55:59.535631   71929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:59.548304   71929 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:55:59.548376   71929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:59.563066   71929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:59.578452   71929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:59.593447   71929 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:55:59.606239   71929 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:55:59.617051   71929 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:55:59.617118   71929 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:55:59.632601   71929 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:55:59.645034   71929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:59.812343   71929 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:55:59.969366   71929 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:55:59.969444   71929 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:55:59.974286   71929 start.go:563] Will wait 60s for crictl version
	I0717 01:55:59.974335   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:55:59.978280   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:56:00.020399   71929 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:56:00.020489   71929 ssh_runner.go:195] Run: crio --version
	I0717 01:56:00.049811   71929 ssh_runner.go:195] Run: crio --version
	I0717 01:56:00.081952   71929 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0717 01:55:55.703286   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.07265838s)
	I0717 01:55:55.703312   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0717 01:55:55.703342   71603 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:55:55.703396   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:55:56.651520   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 01:55:56.651563   71603 cache_images.go:123] Successfully loaded all cached images
	I0717 01:55:56.651569   71603 cache_images.go:92] duration metric: took 14.83641531s to LoadCachedImages
	I0717 01:55:56.651581   71603 kubeadm.go:934] updating node { 192.168.61.174 8443 v1.31.0-beta.0 crio true true} ...
	I0717 01:55:56.651702   71603 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-391501 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-391501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:55:56.651770   71603 ssh_runner.go:195] Run: crio config
	I0717 01:55:56.700129   71603 cni.go:84] Creating CNI manager for ""
	I0717 01:55:56.700152   71603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:55:56.700162   71603 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:55:56.700189   71603 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.174 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-391501 NodeName:no-preload-391501 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:55:56.700315   71603 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-391501"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:55:56.700372   71603 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 01:55:56.711859   71603 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:55:56.711936   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:55:56.721994   71603 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0717 01:55:56.738335   71603 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 01:55:56.755198   71603 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0717 01:55:56.772467   71603 ssh_runner.go:195] Run: grep 192.168.61.174	control-plane.minikube.internal$ /etc/hosts
	I0717 01:55:56.777580   71603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:55:56.792767   71603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:56.913075   71603 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:55:56.930746   71603 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501 for IP: 192.168.61.174
	I0717 01:55:56.930768   71603 certs.go:194] generating shared ca certs ...
	I0717 01:55:56.930783   71603 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:55:56.930929   71603 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:55:56.930968   71603 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:55:56.930978   71603 certs.go:256] generating profile certs ...
	I0717 01:55:56.931050   71603 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/client.key
	I0717 01:55:56.931112   71603 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/apiserver.key.a30174c9
	I0717 01:55:56.931153   71603 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/proxy-client.key
	I0717 01:55:56.931292   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:55:56.931331   71603 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:55:56.931344   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:55:56.931373   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:55:56.931404   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:55:56.931434   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:55:56.931478   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:56.932180   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:55:56.971111   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:55:57.016791   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:55:57.049766   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:55:57.078139   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 01:55:57.109781   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 01:55:57.137912   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:55:57.165141   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 01:55:57.190210   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:55:57.214366   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:55:57.239518   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:55:57.265505   71603 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:55:57.283773   71603 ssh_runner.go:195] Run: openssl version
	I0717 01:55:57.289846   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:55:57.300434   71603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:57.305370   71603 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:57.305456   71603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:57.311765   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:55:57.322769   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:55:57.334122   71603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:55:57.338774   71603 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:55:57.338823   71603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:55:57.344721   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:55:57.356476   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:55:57.368672   71603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:55:57.374055   71603 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:55:57.374107   71603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:55:57.380256   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:55:57.392428   71603 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:55:57.397593   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:55:57.404378   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:55:57.411094   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:55:57.418536   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:55:57.425312   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:55:57.431841   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
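Each `openssl x509 -checkend 86400` run above asks whether a control-plane certificate stays valid for at least another 24 hours; only when all of them pass is the existing cluster reused without regenerating certs. A rough in-process equivalent in Go, assuming the cert path from the log; validFor is a hypothetical helper, not part of minikube:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path remains valid for at
// least d more time - the in-process equivalent of `openssl x509 -checkend`.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}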
	I0717 01:55:57.438615   71603 kubeadm.go:392] StartCluster: {Name:no-preload-391501 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-391501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0
m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:55:57.438696   71603 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:55:57.438782   71603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:55:57.482932   71603 cri.go:89] found id: ""
	I0717 01:55:57.482993   71603 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:55:57.493813   71603 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:55:57.493832   71603 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:55:57.493872   71603 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:55:57.504757   71603 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:55:57.505655   71603 kubeconfig.go:125] found "no-preload-391501" server: "https://192.168.61.174:8443"
	I0717 01:55:57.507634   71603 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:55:57.517990   71603 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.174
	I0717 01:55:57.518025   71603 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:55:57.518038   71603 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:55:57.518090   71603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:55:57.557504   71603 cri.go:89] found id: ""
	I0717 01:55:57.557588   71603 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:55:57.574074   71603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:55:57.583703   71603 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:55:57.583724   71603 kubeadm.go:157] found existing configuration files:
	
	I0717 01:55:57.583768   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:55:57.593924   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:55:57.593992   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:55:57.606945   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:55:57.616803   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:55:57.616847   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:55:57.627215   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:55:57.637121   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:55:57.637179   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:55:57.646291   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:55:57.655314   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:55:57.655372   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:55:57.666994   71603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
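The grep / rm -f sequence above implements a simple rule: any leftover kubeconfig under /etc/kubernetes that does not already reference https://control-plane.minikube.internal:8443 is treated as stale and removed before kubeadm regenerates it. A condensed Go version of the same rule, reading the files directly instead of shelling out; the helper name is hypothetical:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanupStaleKubeconfigs removes any existing kubeconfig that does not
// reference the expected control-plane endpoint, mirroring the grep / rm -f
// sequence in the log above.
func cleanupStaleKubeconfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			continue // file is absent, nothing to clean up
		}
		if strings.Contains(string(data), endpoint) {
			continue // already points at the right endpoint
		}
		fmt.Println("removing stale config:", f)
		_ = os.Remove(f)
	}
}

func main() {
	cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}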
	I0717 01:55:57.677582   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:57.798148   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:59.316598   71603 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.518419797s)
	I0717 01:55:59.316629   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:59.581666   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:59.675003   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:59.748682   71603 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:55:59.748771   71603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:55:56.781465   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:59.280394   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:00.083384   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:56:00.086085   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:56:00.086454   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:56:00.086494   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:56:00.086710   71929 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 01:56:00.091322   71929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:56:00.104102   71929 kubeadm.go:883] updating cluster {Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:56:00.104237   71929 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 01:56:00.104309   71929 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:56:00.152445   71929 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 01:56:00.152537   71929 ssh_runner.go:195] Run: which lz4
	I0717 01:56:00.156760   71929 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 01:56:00.161123   71929 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:56:00.161149   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 01:56:02.031804   71929 crio.go:462] duration metric: took 1.875087246s to copy over tarball
	I0717 01:56:02.031904   71929 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:55:58.556014   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Start
	I0717 01:55:58.556171   71146 main.go:141] libmachine: (embed-certs-940222) Ensuring networks are active...
	I0717 01:55:58.556866   71146 main.go:141] libmachine: (embed-certs-940222) Ensuring network default is active
	I0717 01:55:58.557237   71146 main.go:141] libmachine: (embed-certs-940222) Ensuring network mk-embed-certs-940222 is active
	I0717 01:55:58.557686   71146 main.go:141] libmachine: (embed-certs-940222) Getting domain xml...
	I0717 01:55:58.558375   71146 main.go:141] libmachine: (embed-certs-940222) Creating domain...
	I0717 01:55:59.917419   71146 main.go:141] libmachine: (embed-certs-940222) Waiting to get IP...
	I0717 01:55:59.918379   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:55:59.918849   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:55:59.918908   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:55:59.918833   73097 retry.go:31] will retry after 248.560075ms: waiting for machine to come up
	I0717 01:56:00.169337   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:00.169877   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:00.169898   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:00.169837   73097 retry.go:31] will retry after 380.159418ms: waiting for machine to come up
	I0717 01:56:00.551472   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:00.552033   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:00.552076   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:00.551987   73097 retry.go:31] will retry after 439.990107ms: waiting for machine to come up
	I0717 01:56:00.993776   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:00.994337   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:00.994351   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:00.994319   73097 retry.go:31] will retry after 415.462036ms: waiting for machine to come up
	I0717 01:56:01.412114   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:01.412508   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:01.412535   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:01.412484   73097 retry.go:31] will retry after 660.852153ms: waiting for machine to come up
	I0717 01:56:02.075095   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:02.075519   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:02.075541   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:02.075498   73097 retry.go:31] will retry after 788.200532ms: waiting for machine to come up
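The retry.go lines above show libmachine polling the libvirt DHCP leases with a randomized, growing delay until the embed-certs-940222 domain reports an IP. A simplified sketch of that retry-with-backoff pattern; lookupIP, the attempt count, and the interval values are illustrative, not the real implementation:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts are exhausted,
// sleeping a randomized, growing interval between tries - roughly the
// behaviour logged by retry.go above.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	// lookupIP stands in for the real DHCP-lease lookup and is purely illustrative.
	lookupIP := func() error { return errors.New("waiting for machine to come up") }
	_ = retryWithBackoff(5, 250*time.Millisecond, lookupIP)
}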
	I0717 01:56:00.249300   71603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:00.749610   71603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:00.823943   71603 api_server.go:72] duration metric: took 1.075254107s to wait for apiserver process to appear ...
	I0717 01:56:00.823980   71603 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:56:00.824006   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:00.825286   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": dial tcp 192.168.61.174:8443: connect: connection refused
	I0717 01:56:01.325032   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
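The connection-refused result above is expected right after the static-pod manifests are rewritten; the restart path simply keeps probing /healthz until the API server answers again. A rough Go sketch of that wait loop (the timeout, polling interval, and InsecureSkipVerify shortcut are illustrative choices, not minikube's exact behaviour):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 or the deadline expires. TLS verification is skipped because the
// probe only cares about liveness, not identity.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.61.174:8443/healthz", 4*time.Minute))
}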
	I0717 01:56:01.281044   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:03.281329   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:05.092637   71929 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.060698331s)
	I0717 01:56:05.092674   71929 crio.go:469] duration metric: took 3.060839356s to extract the tarball
	I0717 01:56:05.092682   71929 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 01:56:05.135461   71929 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:56:05.170789   71929 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 01:56:05.170814   71929 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 01:56:05.170853   71929 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:56:05.170884   71929 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.170908   71929 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.170961   71929 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 01:56:05.171077   71929 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.171126   71929 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.171138   71929 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.171462   71929 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.172182   71929 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 01:56:05.172224   71929 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.172251   71929 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:56:05.172296   71929 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.172362   71929 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.172415   71929 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.172449   71929 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.172251   71929 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.372794   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.415131   71929 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 01:56:05.415181   71929 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.415231   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.419179   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.446530   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 01:56:05.452583   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 01:56:05.485692   71929 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 01:56:05.485734   71929 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 01:56:05.485780   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.486154   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.487346   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.489408   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.490486   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 01:56:05.494929   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.499420   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.593505   71929 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 01:56:05.593587   71929 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.593638   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.632564   71929 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 01:56:05.632615   71929 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.632667   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.657745   71929 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 01:56:05.657792   71929 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.657852   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.657863   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 01:56:05.657908   71929 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 01:56:05.657943   71929 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.657958   71929 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 01:56:05.657976   71929 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.657980   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.658004   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.658037   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.658077   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.671679   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.671708   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.736572   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 01:56:05.736599   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 01:56:05.736671   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.758178   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 01:56:05.758210   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 01:56:05.787948   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 01:56:06.882199   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:56:07.025117   71929 cache_images.go:92] duration metric: took 1.854284265s to LoadCachedImages
	W0717 01:56:07.025227   71929 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
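The cache_images sequence above boils down to: inspect each required image in the container runtime, and only when it is missing (or carries the wrong ID) remove the stale tag and load the image from the on-disk cache; here the kube-proxy cache file itself is absent, hence the warning. A condensed Go sketch of the presence check and cache-path mapping, with an illustrative cache directory and image list:

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// imageExists asks the runtime (local podman here, for brevity) whether an
// image with this tag is already present.
func imageExists(tag string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", tag).Output()
	return err == nil && strings.TrimSpace(string(out)) != ""
}

func main() {
	cacheDir := "/home/jenkins/.minikube/cache/images/amd64" // illustrative path
	images := []string{
		"registry.k8s.io/kube-apiserver:v1.20.0",
		"registry.k8s.io/pause:3.2",
	}
	for _, img := range images {
		if imageExists(img) {
			fmt.Println("already present:", img)
			continue
		}
		// e.g. registry.k8s.io/pause:3.2 -> .../registry.k8s.io/pause_3.2
		local := filepath.Join(cacheDir, strings.ReplaceAll(img, ":", "_"))
		fmt.Println("needs transfer, loading from:", local)
	}
}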
	I0717 01:56:07.025245   71929 kubeadm.go:934] updating node { 192.168.50.44 8443 v1.20.0 crio true true} ...
	I0717 01:56:07.025378   71929 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-901761 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.44
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:56:07.025465   71929 ssh_runner.go:195] Run: crio config
	I0717 01:56:07.081517   71929 cni.go:84] Creating CNI manager for ""
	I0717 01:56:07.081543   71929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:56:07.081560   71929 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:56:07.081584   71929 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.44 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-901761 NodeName:old-k8s-version-901761 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.44"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.44 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 01:56:07.081749   71929 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.44
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-901761"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.44
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.44"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
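	minikube renders the kubeadm YAML above from Go templates parameterized by the cluster config (the kubeadm.go:187 "kubeadm config" step). A toy sketch of that templating idea, using a heavily trimmed-down struct and template that are illustrative only, not minikube's real ones:

	package main

	import (
		"os"
		"text/template"
	)

	// initConfig holds toy parameters for a kubeadm InitConfiguration; the real
	// generator in minikube uses a much larger struct and template.
	type initConfig struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		CRISocket        string
	}

	const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.AdvertiseAddress}}
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(initTmpl))
		cfg := initConfig{
			AdvertiseAddress: "192.168.50.44",
			BindPort:         8443,
			NodeName:         "old-k8s-version-901761",
			CRISocket:        "/var/run/crio/crio.sock",
		}
		_ = t.Execute(os.Stdout, cfg)
	}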
	
	I0717 01:56:07.081833   71929 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 01:56:07.092233   71929 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:56:07.092335   71929 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:56:07.102086   71929 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0717 01:56:07.121538   71929 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:56:07.139112   71929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0717 01:56:07.157397   71929 ssh_runner.go:195] Run: grep 192.168.50.44	control-plane.minikube.internal$ /etc/hosts
	I0717 01:56:07.161818   71929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.44	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:56:07.174723   71929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:56:07.307484   71929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:56:07.325948   71929 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761 for IP: 192.168.50.44
	I0717 01:56:07.325974   71929 certs.go:194] generating shared ca certs ...
	I0717 01:56:07.326002   71929 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:07.326164   71929 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:56:07.326216   71929 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:56:07.326229   71929 certs.go:256] generating profile certs ...
	I0717 01:56:07.326351   71929 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/client.key
	I0717 01:56:07.326416   71929 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.key.f41162e5
	I0717 01:56:07.326461   71929 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/proxy-client.key
	I0717 01:56:07.326630   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:56:07.326668   71929 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:56:07.326681   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:56:07.326700   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:56:07.326724   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:56:07.326767   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:56:07.326828   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:56:07.327702   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:56:07.377671   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:56:02.864980   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:02.865620   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:02.865656   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:02.865503   73097 retry.go:31] will retry after 1.00461953s: waiting for machine to come up
	I0717 01:56:03.871702   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:03.872187   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:03.872215   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:03.872133   73097 retry.go:31] will retry after 1.15731846s: waiting for machine to come up
	I0717 01:56:05.030767   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:05.031263   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:05.031285   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:05.031209   73097 retry.go:31] will retry after 1.704165162s: waiting for machine to come up
	I0717 01:56:06.737975   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:06.738337   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:06.738386   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:06.738307   73097 retry.go:31] will retry after 2.014062128s: waiting for machine to come up
	I0717 01:56:06.326066   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:06.326112   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:05.780615   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:08.281127   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:07.413171   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:56:07.443671   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:56:07.482883   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 01:56:07.527280   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:56:07.571200   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:56:07.612296   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 01:56:07.638012   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:56:07.662018   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:56:07.688033   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:56:07.721827   71929 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:56:07.741517   71929 ssh_runner.go:195] Run: openssl version
	I0717 01:56:07.747466   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:56:07.758615   71929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:56:07.763382   71929 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:56:07.763439   71929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:56:07.769358   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:56:07.781802   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:56:07.792763   71929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:56:07.797629   71929 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:56:07.797681   71929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:56:07.803879   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:56:07.815479   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:56:07.828292   71929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:07.832769   71929 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:07.832829   71929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:07.838958   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:56:07.850108   71929 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:56:07.854758   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:56:07.860661   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:56:07.866484   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:56:07.872302   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:56:07.878252   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:56:07.884275   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 01:56:07.890148   71929 kubeadm.go:392] StartCluster: {Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:56:07.890264   71929 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:56:07.890343   71929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:56:07.930081   71929 cri.go:89] found id: ""
	I0717 01:56:07.930153   71929 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:56:07.941371   71929 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:56:07.941396   71929 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:56:07.941445   71929 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:56:07.955229   71929 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:56:07.957263   71929 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-901761" does not appear in /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:56:07.959002   71929 kubeconfig.go:62] /home/jenkins/minikube-integration/19264-3908/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-901761" cluster setting kubeconfig missing "old-k8s-version-901761" context setting]
	I0717 01:56:07.960384   71929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:07.962748   71929 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:56:07.973815   71929 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.44
	I0717 01:56:07.973851   71929 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:56:07.973864   71929 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:56:07.973933   71929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:56:08.020169   71929 cri.go:89] found id: ""
	I0717 01:56:08.020247   71929 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:56:08.038015   71929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:56:08.049272   71929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:56:08.049294   71929 kubeadm.go:157] found existing configuration files:
	
	I0717 01:56:08.049336   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:56:08.058953   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:56:08.059025   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:56:08.069034   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:56:08.078748   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:56:08.078817   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:56:08.089660   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:56:08.099521   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:56:08.099583   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:56:08.109831   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:56:08.120340   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:56:08.120400   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
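
The four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig-style file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443 and is removed otherwise, so the kubeadm phases that follow can regenerate it. A minimal sketch of that check (an illustration, not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConfigs removes any config file that is missing or does not
// reference the expected control-plane endpoint, so it can be regenerated.
func cleanStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(f) // ignore errors: the file may simply not exist
			fmt.Printf("removed stale config %s\n", f)
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
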
	I0717 01:56:08.130884   71929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:56:08.141008   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:08.275189   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:09.006841   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:09.255401   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:09.376659   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:09.475840   71929 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:56:09.475937   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:09.976926   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:10.476192   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:10.976705   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:11.476386   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:11.976459   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
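
The repeated pgrep runs above (and continuing below) are the apiserver wait loop: after the kubeadm init phases, minikube polls roughly every 500ms for a kube-apiserver process before moving on. A minimal sketch of such a poll, assuming local execution rather than minikube's SSH runner:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until a process matching pattern exists or the
// timeout expires, mirroring the ~500ms cadence of the log lines above.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when at least one matching process is found.
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("process %q did not appear within %s", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
		fmt.Println(err)
	}
}
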
	I0717 01:56:08.753835   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:08.754316   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:08.754347   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:08.754264   73097 retry.go:31] will retry after 2.005810517s: waiting for machine to come up
	I0717 01:56:10.761600   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:10.762022   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:10.762053   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:10.761980   73097 retry.go:31] will retry after 2.631438855s: waiting for machine to come up
	I0717 01:56:11.327297   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:11.327348   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:10.779534   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:13.278417   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:15.279200   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:12.476819   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:12.976633   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:13.476076   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:13.976279   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:14.476885   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:14.976972   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:15.476823   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:15.976917   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:16.476765   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:16.976609   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:13.395592   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:13.395949   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:13.395991   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:13.395905   73097 retry.go:31] will retry after 3.565162998s: waiting for machine to come up
	I0717 01:56:16.964948   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:16.965424   71146 main.go:141] libmachine: (embed-certs-940222) Found IP for machine: 192.168.72.225
	I0717 01:56:16.965455   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has current primary IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:16.965465   71146 main.go:141] libmachine: (embed-certs-940222) Reserving static IP address...
	I0717 01:56:16.966065   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "embed-certs-940222", mac: "52:54:00:78:d5:92", ip: "192.168.72.225"} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:16.966092   71146 main.go:141] libmachine: (embed-certs-940222) DBG | skip adding static IP to network mk-embed-certs-940222 - found existing host DHCP lease matching {name: "embed-certs-940222", mac: "52:54:00:78:d5:92", ip: "192.168.72.225"}
	I0717 01:56:16.966107   71146 main.go:141] libmachine: (embed-certs-940222) Reserved static IP address: 192.168.72.225
	I0717 01:56:16.966122   71146 main.go:141] libmachine: (embed-certs-940222) Waiting for SSH to be available...
	I0717 01:56:16.966150   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Getting to WaitForSSH function...
	I0717 01:56:16.968287   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:16.968642   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:16.968688   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:16.968758   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Using SSH client type: external
	I0717 01:56:16.968782   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa (-rw-------)
	I0717 01:56:16.968842   71146 main.go:141] libmachine: (embed-certs-940222) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:56:16.968872   71146 main.go:141] libmachine: (embed-certs-940222) DBG | About to run SSH command:
	I0717 01:56:16.968888   71146 main.go:141] libmachine: (embed-certs-940222) DBG | exit 0
	I0717 01:56:17.090641   71146 main.go:141] libmachine: (embed-certs-940222) DBG | SSH cmd err, output: <nil>: 
	I0717 01:56:17.091120   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetConfigRaw
	I0717 01:56:17.091720   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetIP
	I0717 01:56:17.094205   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.094541   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.094592   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.094810   71146 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/config.json ...
	I0717 01:56:17.095001   71146 machine.go:94] provisionDockerMachine start ...
	I0717 01:56:17.095022   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:17.095223   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.097395   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.097680   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.097707   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.097848   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.098021   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.098170   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.098311   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.098491   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:17.098683   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:17.098695   71146 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:56:17.203054   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:56:17.203080   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:56:17.203364   71146 buildroot.go:166] provisioning hostname "embed-certs-940222"
	I0717 01:56:17.203402   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:56:17.203575   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.206404   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.206826   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.206868   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.207076   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.207282   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.207471   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.207611   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.207793   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:17.207985   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:17.207997   71146 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-940222 && echo "embed-certs-940222" | sudo tee /etc/hostname
	I0717 01:56:17.326485   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-940222
	
	I0717 01:56:17.326512   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.329226   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.329629   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.329659   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.329834   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.329996   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.330148   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.330265   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.330417   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:17.330619   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:17.330642   71146 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-940222' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-940222/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-940222' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:56:17.439258   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:56:17.439285   71146 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:56:17.439315   71146 buildroot.go:174] setting up certificates
	I0717 01:56:17.439324   71146 provision.go:84] configureAuth start
	I0717 01:56:17.439332   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:56:17.439656   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetIP
	I0717 01:56:17.442348   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.442765   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.442796   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.442976   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.445418   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.445767   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.445803   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.446000   71146 provision.go:143] copyHostCerts
	I0717 01:56:17.446081   71146 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:56:17.446098   71146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:56:17.446171   71146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:56:17.446265   71146 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:56:17.446272   71146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:56:17.446292   71146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:56:17.446346   71146 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:56:17.446353   71146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:56:17.446370   71146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:56:17.446418   71146 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.embed-certs-940222 san=[127.0.0.1 192.168.72.225 embed-certs-940222 localhost minikube]
	I0717 01:56:17.578140   71146 provision.go:177] copyRemoteCerts
	I0717 01:56:17.578195   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:56:17.578221   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.581141   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.581432   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.581457   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.581697   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.581892   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.582038   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.582219   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:17.664867   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:56:17.691053   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 01:56:17.715816   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 01:56:17.742153   71146 provision.go:87] duration metric: took 302.817653ms to configureAuth
	I0717 01:56:17.742180   71146 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:56:17.742405   71146 config.go:182] Loaded profile config "embed-certs-940222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:56:17.742486   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.745102   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.745369   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.745398   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.745608   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.745820   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.746019   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.746209   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.746510   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:17.746738   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:17.746761   71146 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:56:18.017395   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:56:18.017420   71146 machine.go:97] duration metric: took 922.405002ms to provisionDockerMachine
	I0717 01:56:18.017433   71146 start.go:293] postStartSetup for "embed-certs-940222" (driver="kvm2")
	I0717 01:56:18.017449   71146 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:56:18.017469   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.017817   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:56:18.017846   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:18.020599   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.021051   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.021081   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.021228   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:18.021410   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.021556   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:18.021660   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:18.101432   71146 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:56:18.105722   71146 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:56:18.105742   71146 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:56:18.105797   71146 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:56:18.105866   71146 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:56:18.105944   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:56:18.115228   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:56:18.139857   71146 start.go:296] duration metric: took 122.411322ms for postStartSetup
	I0717 01:56:18.139924   71146 fix.go:56] duration metric: took 19.608111597s for fixHost
	I0717 01:56:18.139951   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:18.142466   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.142865   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.142886   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.143098   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:18.143262   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.143444   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.143662   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:18.143852   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:18.144022   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:18.144033   71146 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:56:18.243604   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181378.218663213
	
	I0717 01:56:18.243635   71146 fix.go:216] guest clock: 1721181378.218663213
	I0717 01:56:18.243644   71146 fix.go:229] Guest: 2024-07-17 01:56:18.218663213 +0000 UTC Remote: 2024-07-17 01:56:18.139933424 +0000 UTC m=+355.354069584 (delta=78.729789ms)
	I0717 01:56:18.243662   71146 fix.go:200] guest clock delta is within tolerance: 78.729789ms
	I0717 01:56:18.243667   71146 start.go:83] releasing machines lock for "embed-certs-940222", held for 19.711916707s
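
The guest clock check just above compares the VM's clock (read over SSH with date) against the host's and accepts the 78.729789ms difference as within tolerance. A small sketch of that comparison (the 2s tolerance here is an assumed value for illustration):

package main

import (
	"fmt"
	"time"
)

// clockDelta returns the signed guest-minus-host difference and whether its
// magnitude is within the given tolerance.
func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		return delta, -delta <= tolerance
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(78729789 * time.Nanosecond) // the ~78.73ms delta from the log
	d, ok := clockDelta(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
}
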
	I0717 01:56:18.243684   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.243952   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetIP
	I0717 01:56:18.246454   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.246881   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.246907   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.247135   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.247618   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.247828   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.247919   71146 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:56:18.247958   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:18.248050   71146 ssh_runner.go:195] Run: cat /version.json
	I0717 01:56:18.248074   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:18.250520   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.250914   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.250952   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.250973   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.251222   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:18.251403   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.251463   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.251495   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.251575   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:18.251668   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:18.251747   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:18.251817   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.251975   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:18.252103   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:18.351600   71146 ssh_runner.go:195] Run: systemctl --version
	I0717 01:56:18.357586   71146 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:56:18.503767   71146 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:56:18.511637   71146 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:56:18.511724   71146 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:56:18.530209   71146 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:56:18.530235   71146 start.go:495] detecting cgroup driver to use...
	I0717 01:56:18.530303   71146 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:56:18.551740   71146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:56:18.566975   71146 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:56:18.567044   71146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:56:18.585100   71146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:56:18.601151   71146 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:56:18.735644   71146 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:56:18.895436   71146 docker.go:233] disabling docker service ...
	I0717 01:56:18.895505   71146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:56:18.910354   71146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:56:18.922999   71146 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:56:19.065365   71146 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:56:19.179337   71146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:56:19.194454   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:56:19.213281   71146 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 01:56:19.213339   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.223531   71146 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:56:19.223594   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.233691   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.243695   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.255192   71146 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:56:19.266082   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.276861   71146 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.295903   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.306114   71146 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:56:19.316226   71146 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:56:19.316275   71146 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:56:19.329402   71146 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:56:19.340622   71146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:56:19.456624   71146 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:56:19.605945   71146 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:56:19.606051   71146 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:56:19.611067   71146 start.go:563] Will wait 60s for crictl version
	I0717 01:56:19.611116   71146 ssh_runner.go:195] Run: which crictl
	I0717 01:56:19.615065   71146 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:56:19.662925   71146 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:56:19.662989   71146 ssh_runner.go:195] Run: crio --version
	I0717 01:56:19.693240   71146 ssh_runner.go:195] Run: crio --version
	I0717 01:56:19.722332   71146 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
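
The sed edits at 01:56:19 above rewrite cri-o's drop-in config before the daemon restart: the pause image is pinned to registry.k8s.io/pause:3.9, cgroupfs is selected as the cgroup manager, conmon is placed in the pod cgroup, and unprivileged low ports are opened. A sketch of the resulting /etc/crio/crio.conf.d/02-crio.conf (assumed layout, not captured from this run):

# /etc/crio/crio.conf.d/02-crio.conf -- net effect of the edits above (sketch)
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
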
	I0717 01:56:16.328318   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:16.328371   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:17.780821   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:19.780921   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:17.476562   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:17.976663   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:18.476958   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:18.976722   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:19.476641   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:19.976079   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:20.476899   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:20.976553   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:21.476087   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:21.976659   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:19.723930   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetIP
	I0717 01:56:19.726730   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:19.727084   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:19.727107   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:19.727314   71146 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 01:56:19.731814   71146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:56:19.745514   71146 kubeadm.go:883] updating cluster {Name:embed-certs-940222 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.2 ClusterName:embed-certs-940222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.225 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:56:19.745622   71146 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:56:19.745677   71146 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:56:19.782922   71146 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 01:56:19.782988   71146 ssh_runner.go:195] Run: which lz4
	I0717 01:56:19.786946   71146 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 01:56:19.791298   71146 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:56:19.791323   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 01:56:21.230910   71146 crio.go:462] duration metric: took 1.443984707s to copy over tarball
	I0717 01:56:21.231003   71146 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:56:21.328607   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:21.328654   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:21.345118   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": read tcp 192.168.61.1:36190->192.168.61.174:8443: read: connection reset by peer
	I0717 01:56:21.824753   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:21.825500   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": dial tcp 192.168.61.174:8443: connect: connection refused
	I0717 01:56:22.325079   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
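
The healthz checks in this stretch of the log are plain HTTPS GETs against the apiserver with a short client timeout, which is why they surface as context-deadline, connection-reset, and connection-refused errors while the control plane restarts. A minimal sketch of such a probe (an illustration, not minikube's implementation; the skipped TLS verification and 5s timeout are assumptions):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues one GET against the apiserver healthz endpoint and
// returns an error if the request fails or the response is not 200 OK.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // short timeout so a hung apiserver fails fast
		Transport: &http.Transport{
			// The apiserver cert is signed by the cluster CA; this bare probe
			// skips verification purely for illustration.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. context deadline exceeded, connection refused
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.61.174:8443/healthz"); err != nil {
		fmt.Println("apiserver not healthy yet:", err)
	}
}
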
	I0717 01:56:22.280465   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:24.779729   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:22.475994   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:22.976928   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:23.476906   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:23.975980   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:24.476208   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:24.976090   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:25.476425   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:25.976072   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:26.476991   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:26.976180   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:23.517174   71146 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.286133857s)
	I0717 01:56:23.517200   71146 crio.go:469] duration metric: took 2.286263798s to extract the tarball
	I0717 01:56:23.517210   71146 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 01:56:23.554084   71146 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:56:23.603831   71146 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:56:23.603861   71146 cache_images.go:84] Images are preloaded, skipping loading
	I0717 01:56:23.603871   71146 kubeadm.go:934] updating node { 192.168.72.225 8443 v1.30.2 crio true true} ...
	I0717 01:56:23.604004   71146 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-940222 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-940222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:56:23.604087   71146 ssh_runner.go:195] Run: crio config
	I0717 01:56:23.658775   71146 cni.go:84] Creating CNI manager for ""
	I0717 01:56:23.658794   71146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:56:23.658803   71146 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:56:23.658826   71146 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.225 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-940222 NodeName:embed-certs-940222 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:56:23.659007   71146 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-940222"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.225
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.225"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:56:23.659092   71146 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 01:56:23.669971   71146 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:56:23.670042   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:56:23.680949   71146 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0717 01:56:23.698917   71146 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:56:23.716218   71146 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0717 01:56:23.733971   71146 ssh_runner.go:195] Run: grep 192.168.72.225	control-plane.minikube.internal$ /etc/hosts
	I0717 01:56:23.738112   71146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.225	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:56:23.750915   71146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:56:23.894690   71146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:56:23.913418   71146 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222 for IP: 192.168.72.225
	I0717 01:56:23.913440   71146 certs.go:194] generating shared ca certs ...
	I0717 01:56:23.913456   71146 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:23.913630   71146 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:56:23.913703   71146 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:56:23.913729   71146 certs.go:256] generating profile certs ...
	I0717 01:56:23.913856   71146 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/client.key
	I0717 01:56:23.913926   71146 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/apiserver.key.d13a776d
	I0717 01:56:23.913968   71146 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/proxy-client.key
	I0717 01:56:23.914081   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:56:23.914123   71146 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:56:23.914134   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:56:23.914161   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:56:23.914188   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:56:23.914214   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:56:23.914256   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:56:23.914925   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:56:23.961346   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:56:24.006765   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:56:24.036852   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:56:24.064984   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0717 01:56:24.090778   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:56:24.116146   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:56:24.142429   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 01:56:24.168427   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:56:24.193691   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:56:24.218852   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:56:24.242932   71146 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:56:24.261434   71146 ssh_runner.go:195] Run: openssl version
	I0717 01:56:24.267358   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:56:24.280319   71146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:56:24.285286   71146 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:56:24.285358   71146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:56:24.291896   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:56:24.304027   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:56:24.315542   71146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:56:24.320212   71146 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:56:24.320283   71146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:56:24.326123   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:56:24.339982   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:56:24.352301   71146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:24.357023   71146 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:24.357078   71146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:24.363112   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:56:24.375910   71146 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:56:24.380986   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:56:24.387276   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:56:24.393718   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:56:24.400367   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:56:24.406600   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:56:24.413161   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 01:56:24.420455   71146 kubeadm.go:392] StartCluster: {Name:embed-certs-940222 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:embed-certs-940222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.225 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:56:24.420578   71146 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:56:24.420643   71146 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:56:24.460702   71146 cri.go:89] found id: ""
	I0717 01:56:24.460792   71146 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:56:24.472047   71146 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:56:24.472064   71146 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:56:24.472105   71146 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:56:24.483092   71146 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:56:24.484146   71146 kubeconfig.go:125] found "embed-certs-940222" server: "https://192.168.72.225:8443"
	I0717 01:56:24.486112   71146 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:56:24.497462   71146 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.225
	I0717 01:56:24.497496   71146 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:56:24.497511   71146 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:56:24.497571   71146 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:56:24.541423   71146 cri.go:89] found id: ""
	I0717 01:56:24.541486   71146 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:56:24.563272   71146 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:56:24.574859   71146 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:56:24.574883   71146 kubeadm.go:157] found existing configuration files:
	
	I0717 01:56:24.574930   71146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:56:24.584960   71146 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:56:24.585022   71146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:56:24.595950   71146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:56:24.605686   71146 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:56:24.605775   71146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:56:24.616191   71146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:56:24.625954   71146 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:56:24.626009   71146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:56:24.636254   71146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:56:24.648853   71146 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:56:24.648961   71146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:56:24.660491   71146 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:56:24.675329   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:24.795437   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:25.895383   71146 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.099913319s)
	I0717 01:56:25.895411   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:26.116274   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:26.286149   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:26.355208   71146 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:56:26.355296   71146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:26.855578   71146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:27.355880   71146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:27.371616   71146 api_server.go:72] duration metric: took 1.016410291s to wait for apiserver process to appear ...
	I0717 01:56:27.371642   71146 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:56:27.371671   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:27.325875   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:27.325920   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:26.780264   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:29.279376   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:29.836783   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:29.836811   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:29.836823   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:29.883657   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:29.883684   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:29.883695   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:29.895244   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:29.895270   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:30.371799   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:30.375903   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:56:30.375926   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:56:30.872627   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:30.876799   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:56:30.876830   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:56:31.372402   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:31.376723   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 200:
	ok
	I0717 01:56:31.382638   71146 api_server.go:141] control plane version: v1.30.2
	I0717 01:56:31.382663   71146 api_server.go:131] duration metric: took 4.011014381s to wait for apiserver health ...
	I0717 01:56:31.382672   71146 cni.go:84] Creating CNI manager for ""
	I0717 01:56:31.382679   71146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:56:31.384436   71146 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:56:27.476313   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:27.976700   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:28.476585   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:28.976008   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:29.477040   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:29.976892   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:30.476912   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:30.976626   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:31.476786   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:31.976148   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:31.385974   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:56:31.396977   71146 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 01:56:31.415740   71146 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:56:31.425268   71146 system_pods.go:59] 8 kube-system pods found
	I0717 01:56:31.425306   71146 system_pods.go:61] "coredns-7db6d8ff4d-wcw97" [0dd50538-f54d-43f1-bd8a-b9d3131c13f7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:56:31.425313   71146 system_pods.go:61] "etcd-embed-certs-940222" [80a0a87b-4b27-4940-b86b-6fda4a9c5168] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 01:56:31.425320   71146 system_pods.go:61] "kube-apiserver-embed-certs-940222" [566417b4-efef-46b2-8826-e15b8559e35f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 01:56:31.425328   71146 system_pods.go:61] "kube-controller-manager-embed-certs-940222" [0ec7f574-5ec3-401c-85a9-b9a9f6b7b979] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 01:56:31.425332   71146 system_pods.go:61] "kube-proxy-l58xk" [feae4e89-4900-4399-bd06-7d179280667d] Running
	I0717 01:56:31.425337   71146 system_pods.go:61] "kube-scheduler-embed-certs-940222" [d0e73061-6d3b-41fc-b2ed-e8c45e204d6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 01:56:31.425344   71146 system_pods.go:61] "metrics-server-569cc877fc-rhp7b" [07ffb1fa-240e-4c40-9ce4-93a1b51e179b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:56:31.425350   71146 system_pods.go:61] "storage-provisioner" [35aab5a5-6e1b-4572-aabe-a73fb1632252] Running
	I0717 01:56:31.425360   71146 system_pods.go:74] duration metric: took 9.598959ms to wait for pod list to return data ...
	I0717 01:56:31.425368   71146 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:56:31.429053   71146 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:56:31.429075   71146 node_conditions.go:123] node cpu capacity is 2
	I0717 01:56:31.429084   71146 node_conditions.go:105] duration metric: took 3.710466ms to run NodePressure ...
	I0717 01:56:31.429098   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:31.699456   71146 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 01:56:31.703803   71146 kubeadm.go:739] kubelet initialised
	I0717 01:56:31.703825   71146 kubeadm.go:740] duration metric: took 4.345324ms waiting for restarted kubelet to initialise ...
	I0717 01:56:31.703835   71146 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:56:31.708962   71146 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:31.712850   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.712871   71146 pod_ready.go:81] duration metric: took 3.888169ms for pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:31.712879   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.712891   71146 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:31.717134   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "etcd-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.717156   71146 pod_ready.go:81] duration metric: took 4.256764ms for pod "etcd-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:31.717163   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "etcd-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.717169   71146 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:31.721479   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.721498   71146 pod_ready.go:81] duration metric: took 4.321032ms for pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:31.721508   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.721515   71146 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:31.819188   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.819217   71146 pod_ready.go:81] duration metric: took 97.692306ms for pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:31.819226   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.819231   71146 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l58xk" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:32.219730   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "kube-proxy-l58xk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:32.219766   71146 pod_ready.go:81] duration metric: took 400.526796ms for pod "kube-proxy-l58xk" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:32.219775   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "kube-proxy-l58xk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:32.219782   71146 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:32.619930   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:32.619961   71146 pod_ready.go:81] duration metric: took 400.172543ms for pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:32.619971   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:32.619978   71146 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:33.019223   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:33.019252   71146 pod_ready.go:81] duration metric: took 399.266573ms for pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:33.019263   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:33.019271   71146 pod_ready.go:38] duration metric: took 1.315427432s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:56:33.019291   71146 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 01:56:33.032094   71146 ops.go:34] apiserver oom_adj: -16
	I0717 01:56:33.032116   71146 kubeadm.go:597] duration metric: took 8.56004698s to restartPrimaryControlPlane
	I0717 01:56:33.032125   71146 kubeadm.go:394] duration metric: took 8.611681052s to StartCluster
	I0717 01:56:33.032140   71146 settings.go:142] acquiring lock: {Name:mk284e98550e98148329d8d8958a44beebf5635c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:33.032204   71146 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:56:33.033963   71146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:33.034198   71146 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.225 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:56:33.034337   71146 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 01:56:33.034405   71146 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-940222"
	I0717 01:56:33.034425   71146 addons.go:69] Setting metrics-server=true in profile "embed-certs-940222"
	I0717 01:56:33.034467   71146 addons.go:234] Setting addon metrics-server=true in "embed-certs-940222"
	W0717 01:56:33.034481   71146 addons.go:243] addon metrics-server should already be in state true
	I0717 01:56:33.034516   71146 host.go:66] Checking if "embed-certs-940222" exists ...
	I0717 01:56:33.034465   71146 addons.go:69] Setting default-storageclass=true in profile "embed-certs-940222"
	I0717 01:56:33.034469   71146 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-940222"
	I0717 01:56:33.034589   71146 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-940222"
	W0717 01:56:33.034632   71146 addons.go:243] addon storage-provisioner should already be in state true
	I0717 01:56:33.034411   71146 config.go:182] Loaded profile config "embed-certs-940222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:56:33.034725   71146 host.go:66] Checking if "embed-certs-940222" exists ...
	I0717 01:56:33.034963   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.034992   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.035052   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.035093   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.035199   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.035237   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.036051   71146 out.go:177] * Verifying Kubernetes components...
	I0717 01:56:33.037606   71146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:56:33.051343   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46137
	I0717 01:56:33.051970   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.052483   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.052516   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.052671   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41003
	I0717 01:56:33.052887   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.053016   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.053397   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.053443   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.053760   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.053775   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35669
	I0717 01:56:33.053779   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.054125   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.054139   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.054336   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:56:33.054625   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.054656   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.054984   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.055524   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.055563   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.057648   71146 addons.go:234] Setting addon default-storageclass=true in "embed-certs-940222"
	W0717 01:56:33.057668   71146 addons.go:243] addon default-storageclass should already be in state true
	I0717 01:56:33.057699   71146 host.go:66] Checking if "embed-certs-940222" exists ...
	I0717 01:56:33.058003   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.058036   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.070476   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38209
	I0717 01:56:33.070717   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I0717 01:56:33.071094   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.071289   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.071648   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.071665   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.071841   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.071863   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.072171   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.072293   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.072357   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:56:33.072581   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:56:33.073298   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46391
	I0717 01:56:33.073745   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.074224   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.074237   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.074585   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.074690   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:33.075032   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.075054   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.075361   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:33.077495   71146 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 01:56:33.077496   71146 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:56:33.079446   71146 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 01:56:33.079460   71146 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 01:56:33.079480   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:33.080373   71146 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:56:33.080386   71146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 01:56:33.080401   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:33.083272   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.083527   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.083623   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:33.083641   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.083899   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:33.084099   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:33.084168   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:33.084184   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.084273   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:33.084331   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:33.084463   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:33.084748   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:33.084890   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:33.085028   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:33.092382   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41853
	I0717 01:56:33.092826   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.093401   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.093418   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.094409   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.094576   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:56:33.096442   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:33.096730   71146 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 01:56:33.096750   71146 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 01:56:33.096768   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:33.099802   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.100290   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:33.100368   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.100472   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:33.100625   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:33.100760   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:33.100849   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:33.229494   71146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:56:33.246459   71146 node_ready.go:35] waiting up to 6m0s for node "embed-certs-940222" to be "Ready" ...
	I0717 01:56:33.400804   71146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 01:56:33.400824   71146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 01:56:33.411866   71146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:56:33.413220   71146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 01:56:33.426485   71146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 01:56:33.426506   71146 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 01:56:33.476707   71146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:56:33.476729   71146 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 01:56:33.539095   71146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:56:34.542027   71146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.130125192s)
	I0717 01:56:34.542089   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.542102   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.542103   71146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.128853338s)
	I0717 01:56:34.542139   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.542151   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.542420   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Closing plugin on server side
	I0717 01:56:34.542442   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.542442   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Closing plugin on server side
	I0717 01:56:34.542447   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.542450   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.542468   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.542474   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.542483   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.542505   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.542517   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.542711   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.542727   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.542715   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Closing plugin on server side
	I0717 01:56:34.542835   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.542847   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.549135   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.549160   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.549405   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.549428   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.616065   71146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.076933862s)
	I0717 01:56:34.616127   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.616142   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.616429   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.616479   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.616489   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Closing plugin on server side
	I0717 01:56:34.616499   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.616541   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.616784   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.616800   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.616810   71146 addons.go:475] Verifying addon metrics-server=true in "embed-certs-940222"
	I0717 01:56:34.619698   71146 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 01:56:32.326261   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:32.326310   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:31.779064   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:33.780671   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:32.475986   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:32.976812   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:33.476601   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:33.976667   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:34.476897   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:34.976610   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:35.476444   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:35.976859   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:36.476092   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:36.976979   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:34.620987   71146 addons.go:510] duration metric: took 1.586659462s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 01:56:35.250360   71146 node_ready.go:53] node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:37.251933   71146 node_ready.go:53] node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:37.326685   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:37.326726   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:39.977828   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:39.977860   71603 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:39.977877   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:40.002499   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:40.002532   71603 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:36.280516   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:38.779351   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:40.324290   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:40.329888   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:56:40.329914   71603 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:56:40.824413   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:40.831375   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:56:40.831407   71603 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:56:41.324677   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:41.333259   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 200:
	ok
	I0717 01:56:41.341378   71603 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 01:56:41.341426   71603 api_server.go:131] duration metric: took 40.517438405s to wait for apiserver health ...
	I0717 01:56:41.341438   71603 cni.go:84] Creating CNI manager for ""
	I0717 01:56:41.341447   71603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:56:41.343489   71603 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:56:37.476813   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:37.976779   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:38.476554   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:38.976791   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:39.476946   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:39.976044   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:40.476526   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:40.976315   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:41.476688   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:41.976203   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:39.750483   71146 node_ready.go:53] node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:40.249907   71146 node_ready.go:49] node "embed-certs-940222" has status "Ready":"True"
	I0717 01:56:40.249934   71146 node_ready.go:38] duration metric: took 7.003442258s for node "embed-certs-940222" to be "Ready" ...
	I0717 01:56:40.249945   71146 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:56:40.255811   71146 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:40.762773   71146 pod_ready.go:92] pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:40.762795   71146 pod_ready.go:81] duration metric: took 506.956885ms for pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:40.762806   71146 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:42.768945   71146 pod_ready.go:102] pod "etcd-embed-certs-940222" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:41.344846   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:56:41.360339   71603 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 01:56:41.385845   71603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:56:41.409812   71603 system_pods.go:59] 8 kube-system pods found
	I0717 01:56:41.409843   71603 system_pods.go:61] "coredns-5cfdc65f69-ztqz8" [7c9caec8-56b6-4faa-9410-0528f108696c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:56:41.409849   71603 system_pods.go:61] "etcd-no-preload-391501" [603f01a1-2b07-4d1d-be14-4da4a9f1e1b2] Running
	I0717 01:56:41.409854   71603 system_pods.go:61] "kube-apiserver-no-preload-391501" [7733c5b6-5e30-472b-920d-3849f2849f7b] Running
	I0717 01:56:41.409860   71603 system_pods.go:61] "kube-controller-manager-no-preload-391501" [c1afab7e-9b46-4940-94ec-e62ebc10f406] Running
	I0717 01:56:41.409865   71603 system_pods.go:61] "kube-proxy-zbqhw" [26056c12-35cd-4a3e-b40a-1eca055bd1e2] Running
	I0717 01:56:41.409869   71603 system_pods.go:61] "kube-scheduler-no-preload-391501" [98f81994-9d2a-45b8-9719-90e181ee5d6f] Running
	I0717 01:56:41.409877   71603 system_pods.go:61] "metrics-server-78fcd8795b-g9x96" [86a6a2c3-ae04-486d-9751-0cc801f9fbfb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:56:41.409887   71603 system_pods.go:61] "storage-provisioner" [8b938905-d8e1-4129-8426-5e31a05d38db] Running
	I0717 01:56:41.409895   71603 system_pods.go:74] duration metric: took 24.018074ms to wait for pod list to return data ...
	I0717 01:56:41.409906   71603 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:56:41.418825   71603 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:56:41.418856   71603 node_conditions.go:123] node cpu capacity is 2
	I0717 01:56:41.418868   71603 node_conditions.go:105] duration metric: took 8.953821ms to run NodePressure ...
	I0717 01:56:41.418892   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:41.713730   71603 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 01:56:41.719162   71603 retry.go:31] will retry after 180.435127ms: kubelet not initialised
	I0717 01:56:41.906299   71603 retry.go:31] will retry after 320.946038ms: kubelet not initialised
	I0717 01:56:42.232875   71603 retry.go:31] will retry after 423.072333ms: kubelet not initialised
	I0717 01:56:42.661412   71603 retry.go:31] will retry after 1.138026932s: kubelet not initialised
	I0717 01:56:43.809525   71603 retry.go:31] will retry after 1.187704503s: kubelet not initialised
	I0717 01:56:45.009815   71603 kubeadm.go:739] kubelet initialised
	I0717 01:56:45.009839   71603 kubeadm.go:740] duration metric: took 3.296082732s waiting for restarted kubelet to initialise ...
	I0717 01:56:45.009850   71603 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:56:45.021149   71603 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:40.780159   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:43.279699   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:45.280407   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:42.476301   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:42.976939   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:43.477021   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:43.976910   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:44.476766   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:44.976415   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:45.476987   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:45.976666   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:46.476735   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:46.976643   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:44.770078   71146 pod_ready.go:102] pod "etcd-embed-certs-940222" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:47.269496   71146 pod_ready.go:92] pod "etcd-embed-certs-940222" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.269524   71146 pod_ready.go:81] duration metric: took 6.506711113s for pod "etcd-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.269538   71146 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.277267   71146 pod_ready.go:92] pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.277294   71146 pod_ready.go:81] duration metric: took 7.747271ms for pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.277309   71146 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.286697   71146 pod_ready.go:92] pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.286715   71146 pod_ready.go:81] duration metric: took 9.397698ms for pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.286723   71146 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l58xk" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.291876   71146 pod_ready.go:92] pod "kube-proxy-l58xk" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.291897   71146 pod_ready.go:81] duration metric: took 5.168432ms for pod "kube-proxy-l58xk" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.291905   71146 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.296201   71146 pod_ready.go:92] pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.296215   71146 pod_ready.go:81] duration metric: took 4.304055ms for pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.296222   71146 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.027495   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:49.028127   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:47.779497   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:50.279065   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:47.476576   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:47.976502   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:48.476634   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:48.976299   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:49.476069   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:49.976086   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:50.476859   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:50.976441   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:51.476217   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:51.976585   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:49.303729   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:51.802778   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:51.029194   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:53.528363   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:52.778915   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:54.780173   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:52.476652   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:52.976136   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:53.476991   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:53.976168   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:54.477049   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:54.976279   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:55.476176   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:55.976049   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:56.476464   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:56.976802   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:54.308491   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:56.802797   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:55.528547   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:57.533612   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:00.030406   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:57.278908   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:59.279393   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:57.476661   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:57.976021   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:58.477049   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:58.976940   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:59.476773   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:59.976397   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:00.476591   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:00.976189   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:01.476917   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:01.976263   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:58.806045   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:00.807112   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:02.529203   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:05.028677   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:01.779903   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:03.780163   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:02.476048   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:02.976019   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:03.476604   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:03.976602   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:04.477004   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:04.976726   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:05.476934   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:05.975985   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:06.476331   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:06.976185   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:03.302031   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:05.303601   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:07.803763   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:07.528021   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:09.528499   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:05.780204   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:08.279630   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:07.476887   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:07.975972   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:08.476034   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:08.976678   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:09.476927   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:09.477010   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:09.513328   71929 cri.go:89] found id: ""
	I0717 01:57:09.513352   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.513361   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:09.513368   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:09.513418   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:09.551203   71929 cri.go:89] found id: ""
	I0717 01:57:09.551228   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.551237   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:09.551244   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:09.551308   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:09.585321   71929 cri.go:89] found id: ""
	I0717 01:57:09.585352   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.585363   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:09.585370   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:09.585427   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:09.623977   71929 cri.go:89] found id: ""
	I0717 01:57:09.624004   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.624012   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:09.624019   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:09.624078   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:09.663338   71929 cri.go:89] found id: ""
	I0717 01:57:09.663367   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.663374   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:09.663380   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:09.663425   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:09.696381   71929 cri.go:89] found id: ""
	I0717 01:57:09.696412   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.696423   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:09.696436   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:09.696482   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:09.735892   71929 cri.go:89] found id: ""
	I0717 01:57:09.735922   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.735932   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:09.735944   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:09.736006   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:09.775878   71929 cri.go:89] found id: ""
	I0717 01:57:09.775909   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.775919   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:09.775929   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:09.775942   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:09.830021   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:09.830057   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:09.844753   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:09.844783   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:09.985140   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:09.985165   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:09.985179   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:10.049946   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:10.049984   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:10.310038   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:12.805565   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:11.529122   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:14.028939   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:10.779935   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:13.278388   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:15.280027   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:12.592959   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:12.608385   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:12.608467   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:12.649900   71929 cri.go:89] found id: ""
	I0717 01:57:12.649931   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.649942   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:12.649950   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:12.650021   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:12.684915   71929 cri.go:89] found id: ""
	I0717 01:57:12.684941   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.684948   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:12.684956   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:12.685010   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:12.727718   71929 cri.go:89] found id: ""
	I0717 01:57:12.727758   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.727766   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:12.727788   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:12.727864   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:12.767212   71929 cri.go:89] found id: ""
	I0717 01:57:12.767236   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.767244   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:12.767249   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:12.767295   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:12.806301   71929 cri.go:89] found id: ""
	I0717 01:57:12.806320   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.806327   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:12.806332   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:12.806405   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:12.843118   71929 cri.go:89] found id: ""
	I0717 01:57:12.843151   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.843162   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:12.843170   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:12.843245   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:12.876671   71929 cri.go:89] found id: ""
	I0717 01:57:12.876697   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.876707   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:12.876714   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:12.876790   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:12.916201   71929 cri.go:89] found id: ""
	I0717 01:57:12.916226   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.916232   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:12.916240   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:12.916250   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:12.970346   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:12.970385   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:12.985029   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:12.985053   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:13.068314   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:13.068340   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:13.068352   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:13.147862   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:13.147897   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:15.703130   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:15.717081   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:15.717160   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:15.757513   71929 cri.go:89] found id: ""
	I0717 01:57:15.757538   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.757545   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:15.757552   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:15.757599   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:15.794185   71929 cri.go:89] found id: ""
	I0717 01:57:15.794218   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.794231   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:15.794238   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:15.794300   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:15.830589   71929 cri.go:89] found id: ""
	I0717 01:57:15.830619   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.830628   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:15.830634   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:15.830694   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:15.869673   71929 cri.go:89] found id: ""
	I0717 01:57:15.869702   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.869713   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:15.869720   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:15.869782   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:15.909225   71929 cri.go:89] found id: ""
	I0717 01:57:15.909257   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.909267   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:15.909278   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:15.909343   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:15.944389   71929 cri.go:89] found id: ""
	I0717 01:57:15.944417   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.944424   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:15.944430   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:15.944490   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:15.982871   71929 cri.go:89] found id: ""
	I0717 01:57:15.982898   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.982907   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:15.982915   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:15.982983   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:16.025674   71929 cri.go:89] found id: ""
	I0717 01:57:16.025701   71929 logs.go:276] 0 containers: []
	W0717 01:57:16.025711   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:16.025721   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:16.025736   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:16.111608   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:16.111627   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:16.111638   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:16.184650   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:16.184689   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:16.230647   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:16.230693   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:16.286675   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:16.286710   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:15.303141   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:17.304891   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:16.029794   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:18.529463   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:17.780034   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:20.279882   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:18.802487   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:18.817483   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:18.817562   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:18.861623   71929 cri.go:89] found id: ""
	I0717 01:57:18.861653   71929 logs.go:276] 0 containers: []
	W0717 01:57:18.861664   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:18.861671   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:18.861733   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:18.901335   71929 cri.go:89] found id: ""
	I0717 01:57:18.901359   71929 logs.go:276] 0 containers: []
	W0717 01:57:18.901367   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:18.901372   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:18.901427   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:18.936477   71929 cri.go:89] found id: ""
	I0717 01:57:18.936508   71929 logs.go:276] 0 containers: []
	W0717 01:57:18.936518   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:18.936524   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:18.936581   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:18.971056   71929 cri.go:89] found id: ""
	I0717 01:57:18.971087   71929 logs.go:276] 0 containers: []
	W0717 01:57:18.971098   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:18.971106   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:18.971157   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:19.005399   71929 cri.go:89] found id: ""
	I0717 01:57:19.005431   71929 logs.go:276] 0 containers: []
	W0717 01:57:19.005453   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:19.005460   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:19.005525   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:19.040218   71929 cri.go:89] found id: ""
	I0717 01:57:19.040242   71929 logs.go:276] 0 containers: []
	W0717 01:57:19.040250   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:19.040257   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:19.040317   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:19.073365   71929 cri.go:89] found id: ""
	I0717 01:57:19.073392   71929 logs.go:276] 0 containers: []
	W0717 01:57:19.073402   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:19.073409   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:19.073471   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:19.108670   71929 cri.go:89] found id: ""
	I0717 01:57:19.108701   71929 logs.go:276] 0 containers: []
	W0717 01:57:19.108713   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:19.108725   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:19.108743   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:19.186077   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:19.186111   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:19.232181   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:19.232214   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:19.288713   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:19.288755   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:19.303089   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:19.303115   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:19.386372   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:21.886666   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:21.900905   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:21.900966   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:21.934955   71929 cri.go:89] found id: ""
	I0717 01:57:21.934979   71929 logs.go:276] 0 containers: []
	W0717 01:57:21.934987   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:21.934993   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:21.935036   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:21.972180   71929 cri.go:89] found id: ""
	I0717 01:57:21.972203   71929 logs.go:276] 0 containers: []
	W0717 01:57:21.972211   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:21.972217   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:21.972271   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:22.010452   71929 cri.go:89] found id: ""
	I0717 01:57:22.010479   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.010487   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:22.010493   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:22.010547   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:22.045824   71929 cri.go:89] found id: ""
	I0717 01:57:22.045888   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.045902   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:22.045911   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:22.045984   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:22.084734   71929 cri.go:89] found id: ""
	I0717 01:57:22.084760   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.084769   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:22.084774   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:22.084842   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:22.119808   71929 cri.go:89] found id: ""
	I0717 01:57:22.119838   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.119846   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:22.119852   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:22.119910   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:22.157537   71929 cri.go:89] found id: ""
	I0717 01:57:22.157583   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.157610   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:22.157620   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:22.157687   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:22.196021   71929 cri.go:89] found id: ""
	I0717 01:57:22.196052   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.196062   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:22.196079   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:22.196094   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:22.274350   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:22.274373   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:22.274386   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:22.364363   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:22.364401   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:19.803506   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:22.306698   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:21.028767   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:23.527943   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:24.529027   71603 pod_ready.go:92] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.529064   71603 pod_ready.go:81] duration metric: took 39.50788355s for pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.529078   71603 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.534655   71603 pod_ready.go:92] pod "etcd-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.534680   71603 pod_ready.go:81] duration metric: took 5.594492ms for pod "etcd-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.534691   71603 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.539602   71603 pod_ready.go:92] pod "kube-apiserver-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.539622   71603 pod_ready.go:81] duration metric: took 4.923891ms for pod "kube-apiserver-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.539631   71603 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.544475   71603 pod_ready.go:92] pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.544516   71603 pod_ready.go:81] duration metric: took 4.862078ms for pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.544532   71603 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zbqhw" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.549173   71603 pod_ready.go:92] pod "kube-proxy-zbqhw" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.549193   71603 pod_ready.go:81] duration metric: took 4.653986ms for pod "kube-proxy-zbqhw" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.549203   71603 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.925916   71603 pod_ready.go:92] pod "kube-scheduler-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.925944   71603 pod_ready.go:81] duration metric: took 376.73343ms for pod "kube-scheduler-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.925959   71603 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:22.779802   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:25.280281   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:22.410052   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:22.410092   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:22.462289   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:22.462326   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:24.978560   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:24.992533   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:24.992601   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:25.027708   71929 cri.go:89] found id: ""
	I0717 01:57:25.027746   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.027754   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:25.027760   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:25.027809   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:25.066946   71929 cri.go:89] found id: ""
	I0717 01:57:25.066974   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.066985   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:25.066992   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:25.067051   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:25.107209   71929 cri.go:89] found id: ""
	I0717 01:57:25.107238   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.107248   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:25.107254   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:25.107300   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:25.141548   71929 cri.go:89] found id: ""
	I0717 01:57:25.141577   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.141587   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:25.141594   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:25.141652   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:25.175822   71929 cri.go:89] found id: ""
	I0717 01:57:25.175853   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.175861   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:25.175866   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:25.175917   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:25.215672   71929 cri.go:89] found id: ""
	I0717 01:57:25.215705   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.215718   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:25.215726   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:25.215786   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:25.260392   71929 cri.go:89] found id: ""
	I0717 01:57:25.260422   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.260434   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:25.260442   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:25.260510   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:25.309953   71929 cri.go:89] found id: ""
	I0717 01:57:25.309981   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.309990   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:25.309999   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:25.310013   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:25.414204   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:25.414229   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:25.414244   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:25.501849   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:25.501883   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:25.545129   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:25.545163   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:25.599948   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:25.599984   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:24.803870   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:27.302993   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:26.932319   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:28.932999   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:27.280455   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:29.778817   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:28.115776   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:28.129710   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:28.129776   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:28.165380   71929 cri.go:89] found id: ""
	I0717 01:57:28.165409   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.165419   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:28.165425   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:28.165473   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:28.199225   71929 cri.go:89] found id: ""
	I0717 01:57:28.199251   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.199259   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:28.199264   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:28.199314   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:28.235564   71929 cri.go:89] found id: ""
	I0717 01:57:28.235585   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.235593   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:28.235598   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:28.235649   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:28.270377   71929 cri.go:89] found id: ""
	I0717 01:57:28.270409   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.270427   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:28.270435   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:28.270488   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:28.310132   71929 cri.go:89] found id: ""
	I0717 01:57:28.310156   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.310163   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:28.310168   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:28.310222   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:28.347590   71929 cri.go:89] found id: ""
	I0717 01:57:28.347619   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.347630   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:28.347638   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:28.347696   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:28.387953   71929 cri.go:89] found id: ""
	I0717 01:57:28.387988   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.388001   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:28.388010   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:28.388072   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:28.428788   71929 cri.go:89] found id: ""
	I0717 01:57:28.428811   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.428818   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:28.428826   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:28.428838   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:28.487411   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:28.487465   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:28.501121   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:28.501152   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:28.576296   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:28.576320   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:28.576335   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:28.660246   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:28.660288   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:31.201238   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:31.221132   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:31.221192   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:31.279839   71929 cri.go:89] found id: ""
	I0717 01:57:31.279867   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.279876   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:31.279884   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:31.279943   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:31.359764   71929 cri.go:89] found id: ""
	I0717 01:57:31.359796   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.359807   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:31.359814   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:31.359873   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:31.397045   71929 cri.go:89] found id: ""
	I0717 01:57:31.397077   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.397087   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:31.397094   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:31.397157   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:31.441356   71929 cri.go:89] found id: ""
	I0717 01:57:31.441388   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.441397   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:31.441404   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:31.441459   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:31.484014   71929 cri.go:89] found id: ""
	I0717 01:57:31.484040   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.484053   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:31.484060   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:31.484124   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:31.520686   71929 cri.go:89] found id: ""
	I0717 01:57:31.520714   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.520725   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:31.520733   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:31.520792   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:31.557300   71929 cri.go:89] found id: ""
	I0717 01:57:31.557326   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.557334   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:31.557339   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:31.557387   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:31.597753   71929 cri.go:89] found id: ""
	I0717 01:57:31.597782   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.597792   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:31.597804   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:31.597818   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:31.656796   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:31.656837   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:31.671287   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:31.671311   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:31.742752   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:31.742772   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:31.742784   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:31.828154   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:31.828186   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:29.303279   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:31.303332   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:31.434410   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:33.932319   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:31.778853   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:33.780535   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:34.368947   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:34.384323   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:34.384402   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:34.421138   71929 cri.go:89] found id: ""
	I0717 01:57:34.421171   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.421182   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:34.421190   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:34.421263   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:34.459077   71929 cri.go:89] found id: ""
	I0717 01:57:34.459105   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.459116   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:34.459123   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:34.459180   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:34.492987   71929 cri.go:89] found id: ""
	I0717 01:57:34.493016   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.493027   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:34.493038   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:34.493098   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:34.527801   71929 cri.go:89] found id: ""
	I0717 01:57:34.527827   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.527836   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:34.527841   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:34.527890   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:34.562877   71929 cri.go:89] found id: ""
	I0717 01:57:34.562904   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.562914   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:34.562921   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:34.562981   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:34.599387   71929 cri.go:89] found id: ""
	I0717 01:57:34.599409   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.599417   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:34.599423   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:34.599479   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:34.636087   71929 cri.go:89] found id: ""
	I0717 01:57:34.636118   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.636126   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:34.636132   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:34.636194   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:34.673168   71929 cri.go:89] found id: ""
	I0717 01:57:34.673196   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.673206   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:34.673214   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:34.673226   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:34.712833   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:34.712864   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:34.765926   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:34.765959   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:34.780024   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:34.780049   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:34.863080   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:34.863106   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:34.863122   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:33.803621   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:36.306114   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:35.933050   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:38.432520   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:36.280143   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:38.779168   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:37.446644   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:37.463015   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:37.463090   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:37.499563   71929 cri.go:89] found id: ""
	I0717 01:57:37.499592   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.499601   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:37.499607   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:37.499663   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:37.538516   71929 cri.go:89] found id: ""
	I0717 01:57:37.538543   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.538572   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:37.538579   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:37.538638   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:37.577032   71929 cri.go:89] found id: ""
	I0717 01:57:37.577061   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.577068   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:37.577074   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:37.577129   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:37.613534   71929 cri.go:89] found id: ""
	I0717 01:57:37.613563   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.613574   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:37.613582   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:37.613646   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:37.651346   71929 cri.go:89] found id: ""
	I0717 01:57:37.651370   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.651381   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:37.651389   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:37.651451   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:37.685949   71929 cri.go:89] found id: ""
	I0717 01:57:37.685989   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.686001   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:37.686008   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:37.686068   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:37.721706   71929 cri.go:89] found id: ""
	I0717 01:57:37.721744   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.721752   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:37.721759   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:37.721812   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:37.758948   71929 cri.go:89] found id: ""
	I0717 01:57:37.758976   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.758985   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:37.758994   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:37.759005   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:37.835305   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:37.835334   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:37.835349   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:37.916627   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:37.916660   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:37.956819   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:37.956851   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:38.007596   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:38.007641   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:40.522573   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:40.536850   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:40.536924   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:40.576172   71929 cri.go:89] found id: ""
	I0717 01:57:40.576200   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.576211   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:40.576218   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:40.576277   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:40.611926   71929 cri.go:89] found id: ""
	I0717 01:57:40.611958   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.611969   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:40.611976   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:40.612039   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:40.647225   71929 cri.go:89] found id: ""
	I0717 01:57:40.647251   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.647259   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:40.647265   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:40.647315   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:40.683871   71929 cri.go:89] found id: ""
	I0717 01:57:40.683902   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.683917   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:40.683925   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:40.683999   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:40.720941   71929 cri.go:89] found id: ""
	I0717 01:57:40.720971   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.720982   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:40.720989   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:40.721053   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:40.756695   71929 cri.go:89] found id: ""
	I0717 01:57:40.756728   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.756739   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:40.756746   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:40.756801   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:40.794181   71929 cri.go:89] found id: ""
	I0717 01:57:40.794214   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.794221   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:40.794226   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:40.794281   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:40.830361   71929 cri.go:89] found id: ""
	I0717 01:57:40.830396   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.830407   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:40.830417   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:40.830436   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:40.844827   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:40.844849   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:40.913003   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:40.913021   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:40.913035   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:40.996314   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:40.996348   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:41.041120   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:41.041151   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:38.801850   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:40.802727   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:42.802814   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:40.934130   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:43.432799   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:40.780350   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:43.279200   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:45.279971   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:43.593226   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:43.606395   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:43.606461   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:43.646260   71929 cri.go:89] found id: ""
	I0717 01:57:43.646290   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.646302   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:43.646310   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:43.646368   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:43.681148   71929 cri.go:89] found id: ""
	I0717 01:57:43.681174   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.681182   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:43.681189   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:43.681250   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:43.716568   71929 cri.go:89] found id: ""
	I0717 01:57:43.716595   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.716606   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:43.716613   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:43.716675   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:43.750507   71929 cri.go:89] found id: ""
	I0717 01:57:43.750536   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.750558   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:43.750566   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:43.750627   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:43.787207   71929 cri.go:89] found id: ""
	I0717 01:57:43.787234   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.787244   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:43.787251   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:43.787311   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:43.822997   71929 cri.go:89] found id: ""
	I0717 01:57:43.823034   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.823045   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:43.823052   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:43.823118   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:43.860605   71929 cri.go:89] found id: ""
	I0717 01:57:43.860632   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.860640   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:43.860646   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:43.860702   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:43.897419   71929 cri.go:89] found id: ""
	I0717 01:57:43.897451   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.897463   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:43.897473   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:43.897492   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:43.956361   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:43.956393   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:43.971077   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:43.971104   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:44.045234   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:44.045258   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:44.045275   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:44.122508   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:44.122544   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:46.660516   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:46.675555   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:46.675651   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:46.709264   71929 cri.go:89] found id: ""
	I0717 01:57:46.709291   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.709300   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:46.709306   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:46.709362   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:46.744865   71929 cri.go:89] found id: ""
	I0717 01:57:46.744898   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.744908   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:46.744915   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:46.744971   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:46.785837   71929 cri.go:89] found id: ""
	I0717 01:57:46.785860   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.785870   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:46.785878   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:46.785932   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:46.828801   71929 cri.go:89] found id: ""
	I0717 01:57:46.828832   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.828842   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:46.828849   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:46.828907   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:46.863122   71929 cri.go:89] found id: ""
	I0717 01:57:46.863151   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.863162   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:46.863175   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:46.863232   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:46.900705   71929 cri.go:89] found id: ""
	I0717 01:57:46.900731   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.900739   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:46.900744   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:46.900790   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:46.935774   71929 cri.go:89] found id: ""
	I0717 01:57:46.935816   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.935829   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:46.935840   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:46.935895   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:46.969274   71929 cri.go:89] found id: ""
	I0717 01:57:46.969304   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.969315   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:46.969325   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:46.969339   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:47.040318   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:47.040343   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:47.040358   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:47.119920   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:47.119954   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:47.168818   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:47.168847   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:47.221983   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:47.222034   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:45.303812   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:47.304051   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:45.433020   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:47.932755   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:49.936075   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:47.780328   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:49.781850   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:49.736564   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:49.749966   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:49.750025   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:49.788294   71929 cri.go:89] found id: ""
	I0717 01:57:49.788321   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.788332   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:49.788339   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:49.788396   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:49.826406   71929 cri.go:89] found id: ""
	I0717 01:57:49.826431   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.826440   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:49.826445   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:49.826491   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:49.864978   71929 cri.go:89] found id: ""
	I0717 01:57:49.865005   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.865015   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:49.865020   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:49.865074   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:49.901238   71929 cri.go:89] found id: ""
	I0717 01:57:49.901270   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.901281   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:49.901300   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:49.901366   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:49.937035   71929 cri.go:89] found id: ""
	I0717 01:57:49.937058   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.937065   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:49.937070   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:49.937207   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:49.977793   71929 cri.go:89] found id: ""
	I0717 01:57:49.977816   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.977823   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:49.977828   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:49.977873   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:50.012915   71929 cri.go:89] found id: ""
	I0717 01:57:50.012942   71929 logs.go:276] 0 containers: []
	W0717 01:57:50.012952   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:50.012959   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:50.013025   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:50.049085   71929 cri.go:89] found id: ""
	I0717 01:57:50.049115   71929 logs.go:276] 0 containers: []
	W0717 01:57:50.049127   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:50.049138   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:50.049156   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:50.087521   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:50.087549   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:50.140934   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:50.140978   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:50.156001   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:50.156033   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:50.231780   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:50.231811   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:50.231835   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:49.802916   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:51.803036   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:52.432307   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:54.432384   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:52.278585   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:54.279641   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
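The interleaved pod_ready lines come from three other test processes (PIDs 71146, 71603, 71522), each polling its metrics-server pod, none of which ever reports Ready. A sketch of the equivalent manual check, assuming kubectl is pointed at the matching cluster (pod name and namespace taken from the log above):

	# read the Ready condition that pod_ready.go keeps polling
	kubectl -n kube-system get pod metrics-server-569cc877fc-rhp7b \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'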
	I0717 01:57:52.810064   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:52.823442   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:52.823508   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:52.860753   71929 cri.go:89] found id: ""
	I0717 01:57:52.860778   71929 logs.go:276] 0 containers: []
	W0717 01:57:52.860789   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:52.860797   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:52.860852   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:52.896264   71929 cri.go:89] found id: ""
	I0717 01:57:52.896289   71929 logs.go:276] 0 containers: []
	W0717 01:57:52.896297   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:52.896303   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:52.896349   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:52.932613   71929 cri.go:89] found id: ""
	I0717 01:57:52.932640   71929 logs.go:276] 0 containers: []
	W0717 01:57:52.932649   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:52.932657   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:52.932722   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:52.969691   71929 cri.go:89] found id: ""
	I0717 01:57:52.969720   71929 logs.go:276] 0 containers: []
	W0717 01:57:52.969728   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:52.969734   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:52.969788   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:53.007039   71929 cri.go:89] found id: ""
	I0717 01:57:53.007067   71929 logs.go:276] 0 containers: []
	W0717 01:57:53.007075   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:53.007081   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:53.007135   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:53.047736   71929 cri.go:89] found id: ""
	I0717 01:57:53.047762   71929 logs.go:276] 0 containers: []
	W0717 01:57:53.047772   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:53.047778   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:53.047838   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:53.083192   71929 cri.go:89] found id: ""
	I0717 01:57:53.083216   71929 logs.go:276] 0 containers: []
	W0717 01:57:53.083225   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:53.083230   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:53.083276   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:53.118509   71929 cri.go:89] found id: ""
	I0717 01:57:53.118536   71929 logs.go:276] 0 containers: []
	W0717 01:57:53.118545   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:53.118564   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:53.118589   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:53.203003   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:53.203039   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:53.244602   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:53.244627   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:53.295180   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:53.295216   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:53.310777   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:53.310805   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:53.389412   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
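Every "describe nodes" attempt fails the same way because nothing is serving on localhost:8443 on this node. Two quick checks that would confirm this directly, assuming ss is available in the guest (the pgrep pattern mirrors the one minikube itself runs in the log):

	# is anything listening on the API server port?
	sudo ss -ltn 'sport = :8443'
	# is a kube-apiserver process running at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"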
	I0717 01:57:55.890450   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:55.903768   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:55.903843   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:55.944148   71929 cri.go:89] found id: ""
	I0717 01:57:55.944171   71929 logs.go:276] 0 containers: []
	W0717 01:57:55.944179   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:55.944185   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:55.944231   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:55.979945   71929 cri.go:89] found id: ""
	I0717 01:57:55.979970   71929 logs.go:276] 0 containers: []
	W0717 01:57:55.979980   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:55.979987   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:55.980045   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:56.019057   71929 cri.go:89] found id: ""
	I0717 01:57:56.019089   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.019100   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:56.019107   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:56.019162   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:56.054343   71929 cri.go:89] found id: ""
	I0717 01:57:56.054369   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.054378   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:56.054383   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:56.054434   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:56.091150   71929 cri.go:89] found id: ""
	I0717 01:57:56.091179   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.091189   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:56.091197   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:56.091256   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:56.127502   71929 cri.go:89] found id: ""
	I0717 01:57:56.127528   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.127538   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:56.127547   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:56.127602   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:56.167935   71929 cri.go:89] found id: ""
	I0717 01:57:56.167961   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.167972   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:56.167979   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:56.168048   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:56.209501   71929 cri.go:89] found id: ""
	I0717 01:57:56.209527   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.209537   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:56.209547   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:56.209561   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:56.257989   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:56.258023   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:56.272491   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:56.272519   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:56.361622   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:56.361653   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:56.361668   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:56.442953   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:56.442992   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:54.302376   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:56.303297   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:56.933123   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:58.933242   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:56.280399   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:58.779285   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:58.983914   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:58.997215   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:58.997292   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:59.032937   71929 cri.go:89] found id: ""
	I0717 01:57:59.032964   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.032980   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:59.032996   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:59.033057   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:59.067790   71929 cri.go:89] found id: ""
	I0717 01:57:59.067811   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.067819   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:59.067825   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:59.067881   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:59.107659   71929 cri.go:89] found id: ""
	I0717 01:57:59.107689   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.107699   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:59.107705   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:59.107754   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:59.150134   71929 cri.go:89] found id: ""
	I0717 01:57:59.150158   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.150168   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:59.150175   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:59.150235   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:59.192351   71929 cri.go:89] found id: ""
	I0717 01:57:59.192381   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.192391   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:59.192398   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:59.192460   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:59.228177   71929 cri.go:89] found id: ""
	I0717 01:57:59.228202   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.228209   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:59.228215   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:59.228261   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:59.267016   71929 cri.go:89] found id: ""
	I0717 01:57:59.267043   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.267052   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:59.267058   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:59.267109   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:59.302235   71929 cri.go:89] found id: ""
	I0717 01:57:59.302257   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.302263   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:59.302273   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:59.302285   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:59.368453   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:59.368492   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:59.383375   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:59.383399   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:59.454946   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:59.454975   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:59.454992   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:59.539576   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:59.539609   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:02.085516   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:02.099848   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:02.099909   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:02.136835   71929 cri.go:89] found id: ""
	I0717 01:58:02.136859   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.136867   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:02.136872   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:02.136928   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:02.175304   71929 cri.go:89] found id: ""
	I0717 01:58:02.175331   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.175338   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:02.175344   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:02.175389   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:02.210922   71929 cri.go:89] found id: ""
	I0717 01:58:02.210947   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.210955   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:02.210961   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:02.211018   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:02.246952   71929 cri.go:89] found id: ""
	I0717 01:58:02.246983   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.246992   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:02.246999   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:02.247053   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:02.284857   71929 cri.go:89] found id: ""
	I0717 01:58:02.284883   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.284892   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:02.284897   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:02.284944   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:02.322941   71929 cri.go:89] found id: ""
	I0717 01:58:02.322978   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.322999   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:02.323007   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:02.323065   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:02.357904   71929 cri.go:89] found id: ""
	I0717 01:58:02.357932   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.357943   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:02.357950   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:02.358012   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:02.392291   71929 cri.go:89] found id: ""
	I0717 01:58:02.392315   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.392322   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:02.392331   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:02.392346   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:58.802622   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:01.303663   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:01.433212   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:03.433962   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:00.779479   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:02.779619   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:05.279590   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:02.447670   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:02.447704   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:02.462259   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:02.462284   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:02.534304   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:02.534332   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:02.534347   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:02.612757   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:02.612799   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:05.153573   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:05.166702   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:05.166775   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:05.205213   71929 cri.go:89] found id: ""
	I0717 01:58:05.205238   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.205247   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:05.205252   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:05.205305   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:05.242021   71929 cri.go:89] found id: ""
	I0717 01:58:05.242048   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.242057   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:05.242063   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:05.242118   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:05.281862   71929 cri.go:89] found id: ""
	I0717 01:58:05.281889   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.281900   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:05.281908   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:05.281967   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:05.318125   71929 cri.go:89] found id: ""
	I0717 01:58:05.318157   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.318169   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:05.318177   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:05.318244   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:05.352470   71929 cri.go:89] found id: ""
	I0717 01:58:05.352504   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.352516   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:05.352524   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:05.352595   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:05.386692   71929 cri.go:89] found id: ""
	I0717 01:58:05.386722   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.386733   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:05.386741   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:05.386803   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:05.426676   71929 cri.go:89] found id: ""
	I0717 01:58:05.426731   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.426744   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:05.426751   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:05.426811   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:05.467974   71929 cri.go:89] found id: ""
	I0717 01:58:05.468000   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.468010   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:05.468020   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:05.468036   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:05.506769   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:05.506797   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:05.561745   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:05.561782   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:05.576743   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:05.576775   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:05.652856   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:05.652887   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:05.652903   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:03.304109   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:05.803632   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:05.434411   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:07.931796   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:09.932902   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:07.779196   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:09.779591   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:08.244185   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:08.257343   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:08.257420   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:08.297136   71929 cri.go:89] found id: ""
	I0717 01:58:08.297163   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.297174   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:08.297181   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:08.297237   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:08.336099   71929 cri.go:89] found id: ""
	I0717 01:58:08.336121   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.336129   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:08.336135   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:08.336185   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:08.369668   71929 cri.go:89] found id: ""
	I0717 01:58:08.369690   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.369698   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:08.369706   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:08.369756   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:08.405140   71929 cri.go:89] found id: ""
	I0717 01:58:08.405171   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.405179   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:08.405186   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:08.405249   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:08.446296   71929 cri.go:89] found id: ""
	I0717 01:58:08.446319   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.446326   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:08.446331   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:08.446377   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:08.483004   71929 cri.go:89] found id: ""
	I0717 01:58:08.483042   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.483062   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:08.483070   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:08.483139   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:08.520668   71929 cri.go:89] found id: ""
	I0717 01:58:08.520699   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.520710   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:08.520717   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:08.520776   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:08.554711   71929 cri.go:89] found id: ""
	I0717 01:58:08.554734   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.554744   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:08.554752   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:08.554763   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:08.606972   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:08.607004   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:08.621102   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:08.621134   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:08.690424   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:08.690443   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:08.690454   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:08.775151   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:08.775193   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:11.318471   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:11.331875   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:11.331954   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:11.375766   71929 cri.go:89] found id: ""
	I0717 01:58:11.375787   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.375795   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:11.375801   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:11.375863   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:11.417043   71929 cri.go:89] found id: ""
	I0717 01:58:11.417080   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.417103   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:11.417111   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:11.417169   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:11.459462   71929 cri.go:89] found id: ""
	I0717 01:58:11.459487   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.459495   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:11.459500   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:11.459551   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:11.516500   71929 cri.go:89] found id: ""
	I0717 01:58:11.516525   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.516533   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:11.516539   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:11.516590   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:11.573916   71929 cri.go:89] found id: ""
	I0717 01:58:11.573961   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.575159   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:11.575201   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:11.575275   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:11.619446   71929 cri.go:89] found id: ""
	I0717 01:58:11.619477   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.619489   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:11.619497   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:11.619558   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:11.654766   71929 cri.go:89] found id: ""
	I0717 01:58:11.654793   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.654802   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:11.654807   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:11.654859   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:11.690306   71929 cri.go:89] found id: ""
	I0717 01:58:11.690335   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.690346   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:11.690354   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:11.690366   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:11.744470   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:11.744516   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:11.758824   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:11.758856   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:11.841028   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:11.841058   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:11.841076   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:11.923299   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:11.923351   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:08.303010   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:10.303678   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:12.803090   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:11.933148   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:14.433109   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:12.280292   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:14.281580   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:14.466666   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:14.479676   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:14.479740   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:14.517890   71929 cri.go:89] found id: ""
	I0717 01:58:14.517919   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.517931   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:14.517938   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:14.517998   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:14.552891   71929 cri.go:89] found id: ""
	I0717 01:58:14.552918   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.552926   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:14.552931   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:14.552992   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:14.593571   71929 cri.go:89] found id: ""
	I0717 01:58:14.593596   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.593604   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:14.593609   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:14.593662   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:14.628869   71929 cri.go:89] found id: ""
	I0717 01:58:14.628897   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.628907   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:14.628913   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:14.628972   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:14.663558   71929 cri.go:89] found id: ""
	I0717 01:58:14.663586   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.663593   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:14.663599   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:14.663644   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:14.700788   71929 cri.go:89] found id: ""
	I0717 01:58:14.700824   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.700834   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:14.700843   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:14.700903   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:14.737975   71929 cri.go:89] found id: ""
	I0717 01:58:14.738014   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.738025   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:14.738032   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:14.738091   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:14.775419   71929 cri.go:89] found id: ""
	I0717 01:58:14.775443   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.775453   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:14.775465   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:14.775479   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:14.817635   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:14.817661   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:14.870667   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:14.870705   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:14.885208   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:14.885235   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:14.962286   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:14.962318   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:14.962334   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:14.803624   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:17.303944   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:16.434108   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:18.934577   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:16.779538   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:18.780694   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:17.537546   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:17.550258   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:17.550322   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:17.586251   71929 cri.go:89] found id: ""
	I0717 01:58:17.586278   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.586286   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:17.586292   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:17.586348   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:17.620903   71929 cri.go:89] found id: ""
	I0717 01:58:17.620927   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.620935   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:17.620941   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:17.620992   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:17.659292   71929 cri.go:89] found id: ""
	I0717 01:58:17.659319   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.659328   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:17.659334   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:17.659384   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:17.695603   71929 cri.go:89] found id: ""
	I0717 01:58:17.695632   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.695642   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:17.695650   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:17.695711   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:17.731943   71929 cri.go:89] found id: ""
	I0717 01:58:17.731970   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.731978   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:17.731984   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:17.732041   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:17.767257   71929 cri.go:89] found id: ""
	I0717 01:58:17.767284   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.767293   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:17.767299   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:17.767357   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:17.802455   71929 cri.go:89] found id: ""
	I0717 01:58:17.802495   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.802508   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:17.802516   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:17.802602   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:17.839321   71929 cri.go:89] found id: ""
	I0717 01:58:17.839351   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.839362   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:17.839374   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:17.839391   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:17.912269   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:17.912295   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:17.912311   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:17.990005   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:17.990038   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:18.029933   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:18.029960   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:18.081941   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:18.081977   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:20.597325   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:20.611835   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:20.611901   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:20.647899   71929 cri.go:89] found id: ""
	I0717 01:58:20.647922   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.647931   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:20.647936   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:20.647984   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:20.683783   71929 cri.go:89] found id: ""
	I0717 01:58:20.683816   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.683827   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:20.683834   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:20.683892   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:20.721803   71929 cri.go:89] found id: ""
	I0717 01:58:20.721833   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.721844   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:20.721851   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:20.721910   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:20.756148   71929 cri.go:89] found id: ""
	I0717 01:58:20.756177   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.756189   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:20.756196   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:20.756259   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:20.795976   71929 cri.go:89] found id: ""
	I0717 01:58:20.796014   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.796028   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:20.796036   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:20.796095   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:20.833775   71929 cri.go:89] found id: ""
	I0717 01:58:20.833805   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.833816   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:20.833824   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:20.833891   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:20.869138   71929 cri.go:89] found id: ""
	I0717 01:58:20.869163   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.869173   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:20.869180   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:20.869237   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:20.904865   71929 cri.go:89] found id: ""
	I0717 01:58:20.904893   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.904901   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:20.904910   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:20.904920   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:20.947268   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:20.947294   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:20.998541   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:20.998582   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:21.013797   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:21.013828   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:21.085101   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:21.085127   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:21.085141   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:19.804949   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:22.304273   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:21.436176   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:23.933548   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:21.279177   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:23.279599   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:25.279899   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:23.667361   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:23.681768   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:23.681828   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:23.717721   71929 cri.go:89] found id: ""
	I0717 01:58:23.717748   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.717757   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:23.717763   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:23.717827   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:23.752699   71929 cri.go:89] found id: ""
	I0717 01:58:23.752728   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.752738   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:23.752745   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:23.752809   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:23.790914   71929 cri.go:89] found id: ""
	I0717 01:58:23.790944   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.790955   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:23.790962   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:23.791021   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:23.827253   71929 cri.go:89] found id: ""
	I0717 01:58:23.827276   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.827285   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:23.827338   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:23.827392   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:23.864466   71929 cri.go:89] found id: ""
	I0717 01:58:23.864510   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.864520   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:23.864527   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:23.864577   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:23.900734   71929 cri.go:89] found id: ""
	I0717 01:58:23.900775   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.900786   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:23.900794   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:23.900855   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:23.937212   71929 cri.go:89] found id: ""
	I0717 01:58:23.937236   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.937243   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:23.937249   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:23.937304   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:23.973730   71929 cri.go:89] found id: ""
	I0717 01:58:23.973755   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.973764   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:23.973774   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:23.973786   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:24.026122   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:24.026163   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:24.040755   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:24.040784   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:24.112224   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:24.112254   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:24.112277   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:24.195247   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:24.195281   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:26.738042   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:26.751545   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:26.751602   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:26.786778   71929 cri.go:89] found id: ""
	I0717 01:58:26.786813   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.786824   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:26.786831   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:26.786889   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:26.828776   71929 cri.go:89] found id: ""
	I0717 01:58:26.828806   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.828818   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:26.828825   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:26.828887   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:26.868439   71929 cri.go:89] found id: ""
	I0717 01:58:26.868468   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.868479   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:26.868486   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:26.868546   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:26.900249   71929 cri.go:89] found id: ""
	I0717 01:58:26.900282   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.900292   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:26.900297   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:26.900344   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:26.933763   71929 cri.go:89] found id: ""
	I0717 01:58:26.933798   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.933808   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:26.933816   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:26.933882   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:26.968681   71929 cri.go:89] found id: ""
	I0717 01:58:26.968712   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.968722   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:26.968729   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:26.968788   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:27.002081   71929 cri.go:89] found id: ""
	I0717 01:58:27.002113   71929 logs.go:276] 0 containers: []
	W0717 01:58:27.002128   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:27.002135   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:27.002196   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:27.035138   71929 cri.go:89] found id: ""
	I0717 01:58:27.035161   71929 logs.go:276] 0 containers: []
	W0717 01:58:27.035170   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:27.035177   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:27.035189   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:27.091207   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:27.091244   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:27.105765   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:27.105793   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:27.175533   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:27.175563   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:27.175580   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:27.260903   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:27.260951   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:24.802002   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:26.803330   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:26.432259   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:28.433226   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:27.280206   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:29.781139   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:29.802451   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:29.816503   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:29.816573   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:29.854887   71929 cri.go:89] found id: ""
	I0717 01:58:29.854921   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.854931   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:29.854936   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:29.854983   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:29.887529   71929 cri.go:89] found id: ""
	I0717 01:58:29.887559   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.887570   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:29.887577   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:29.887638   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:29.924995   71929 cri.go:89] found id: ""
	I0717 01:58:29.925020   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.925028   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:29.925034   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:29.925091   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:29.960064   71929 cri.go:89] found id: ""
	I0717 01:58:29.960092   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.960104   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:29.960111   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:29.960178   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:29.995408   71929 cri.go:89] found id: ""
	I0717 01:58:29.995431   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.995438   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:29.995443   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:29.995494   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:30.028219   71929 cri.go:89] found id: ""
	I0717 01:58:30.028247   71929 logs.go:276] 0 containers: []
	W0717 01:58:30.028254   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:30.028260   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:30.028309   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:30.062529   71929 cri.go:89] found id: ""
	I0717 01:58:30.062576   71929 logs.go:276] 0 containers: []
	W0717 01:58:30.062589   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:30.062597   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:30.062664   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:30.095854   71929 cri.go:89] found id: ""
	I0717 01:58:30.095882   71929 logs.go:276] 0 containers: []
	W0717 01:58:30.095893   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:30.095904   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:30.095919   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:30.148083   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:30.148114   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:30.161861   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:30.161892   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:30.236474   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:30.236503   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:30.236519   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:30.319691   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:30.319720   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:28.804656   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:31.302637   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:30.932659   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:32.934225   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:32.279141   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:34.279312   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:32.867821   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:32.881480   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:32.881541   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:32.918289   71929 cri.go:89] found id: ""
	I0717 01:58:32.918316   71929 logs.go:276] 0 containers: []
	W0717 01:58:32.918327   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:32.918335   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:32.918396   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:32.955383   71929 cri.go:89] found id: ""
	I0717 01:58:32.955417   71929 logs.go:276] 0 containers: []
	W0717 01:58:32.955426   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:32.955433   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:32.955498   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:32.990432   71929 cri.go:89] found id: ""
	I0717 01:58:32.990460   71929 logs.go:276] 0 containers: []
	W0717 01:58:32.990467   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:32.990472   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:32.990531   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:33.034653   71929 cri.go:89] found id: ""
	I0717 01:58:33.034685   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.034697   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:33.034703   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:33.034763   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:33.077875   71929 cri.go:89] found id: ""
	I0717 01:58:33.077911   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.077919   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:33.077926   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:33.077988   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:33.114800   71929 cri.go:89] found id: ""
	I0717 01:58:33.114840   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.114852   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:33.114864   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:33.114946   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:33.151095   71929 cri.go:89] found id: ""
	I0717 01:58:33.151229   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.151242   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:33.151249   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:33.151324   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:33.190100   71929 cri.go:89] found id: ""
	I0717 01:58:33.190128   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.190138   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:33.190149   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:33.190163   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:33.271195   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:33.271231   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:33.317539   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:33.317569   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:33.370188   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:33.370224   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:33.385016   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:33.385045   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:33.460017   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:35.960499   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:35.974504   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:35.974583   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:36.008652   71929 cri.go:89] found id: ""
	I0717 01:58:36.008696   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.008704   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:36.008710   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:36.008770   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:36.044068   71929 cri.go:89] found id: ""
	I0717 01:58:36.044097   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.044106   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:36.044113   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:36.044174   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:36.083572   71929 cri.go:89] found id: ""
	I0717 01:58:36.083602   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.083610   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:36.083616   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:36.083682   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:36.116716   71929 cri.go:89] found id: ""
	I0717 01:58:36.116744   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.116753   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:36.116761   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:36.116820   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:36.156042   71929 cri.go:89] found id: ""
	I0717 01:58:36.156069   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.156080   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:36.156087   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:36.156148   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:36.192005   71929 cri.go:89] found id: ""
	I0717 01:58:36.192033   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.192045   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:36.192055   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:36.192116   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:36.228720   71929 cri.go:89] found id: ""
	I0717 01:58:36.228751   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.228763   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:36.228769   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:36.228817   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:36.263835   71929 cri.go:89] found id: ""
	I0717 01:58:36.263862   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.263872   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:36.263882   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:36.263897   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:36.278545   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:36.278609   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:36.361182   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:36.361208   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:36.361225   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:36.447797   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:36.447832   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:36.492167   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:36.492196   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:33.304750   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:35.803867   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:35.432659   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:37.433360   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:39.433481   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:36.282525   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:38.779592   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:39.045613   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:39.058615   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:39.058688   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:39.094625   71929 cri.go:89] found id: ""
	I0717 01:58:39.094672   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.094684   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:39.094692   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:39.094755   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:39.132856   71929 cri.go:89] found id: ""
	I0717 01:58:39.132887   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.132898   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:39.132905   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:39.132966   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:39.171017   71929 cri.go:89] found id: ""
	I0717 01:58:39.171037   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.171044   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:39.171051   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:39.171112   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:39.210146   71929 cri.go:89] found id: ""
	I0717 01:58:39.210176   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.210186   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:39.210193   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:39.210269   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:39.244307   71929 cri.go:89] found id: ""
	I0717 01:58:39.244332   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.244342   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:39.244349   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:39.244411   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:39.279649   71929 cri.go:89] found id: ""
	I0717 01:58:39.279675   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.279682   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:39.279688   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:39.279755   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:39.317699   71929 cri.go:89] found id: ""
	I0717 01:58:39.317726   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.317735   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:39.317742   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:39.317789   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:39.352319   71929 cri.go:89] found id: ""
	I0717 01:58:39.352351   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.352365   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:39.352377   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:39.352392   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:39.404153   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:39.404188   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:39.419796   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:39.419828   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:39.495463   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:39.495485   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:39.495499   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:39.576742   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:39.576795   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:42.132481   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:42.145588   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:42.145658   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:42.181231   71929 cri.go:89] found id: ""
	I0717 01:58:42.181257   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.181265   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:42.181270   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:42.181321   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:42.216876   71929 cri.go:89] found id: ""
	I0717 01:58:42.216905   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.216917   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:42.216923   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:42.216984   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:42.256918   71929 cri.go:89] found id: ""
	I0717 01:58:42.256948   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.256959   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:42.256967   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:42.257022   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:42.291930   71929 cri.go:89] found id: ""
	I0717 01:58:42.291957   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.291964   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:42.291975   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:42.292035   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:42.329927   71929 cri.go:89] found id: ""
	I0717 01:58:42.329954   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.329964   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:42.329970   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:42.330014   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:42.364041   71929 cri.go:89] found id: ""
	I0717 01:58:42.364072   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.364085   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:42.364093   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:42.364150   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:38.302060   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:40.302711   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:42.303560   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:41.437100   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:43.932845   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:40.780109   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:43.280118   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:42.400751   71929 cri.go:89] found id: ""
	I0717 01:58:42.400775   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.400784   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:42.400790   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:42.400840   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:42.438200   71929 cri.go:89] found id: ""
	I0717 01:58:42.438228   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.438240   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:42.438251   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:42.438265   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:42.455268   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:42.455303   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:42.537344   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:42.537368   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:42.537381   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:42.618487   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:42.618522   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:42.661273   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:42.661299   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:45.212631   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:45.226247   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:45.226330   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:45.263067   71929 cri.go:89] found id: ""
	I0717 01:58:45.263098   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.263110   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:45.263117   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:45.263177   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:45.299025   71929 cri.go:89] found id: ""
	I0717 01:58:45.299056   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.299067   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:45.299074   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:45.299137   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:45.346828   71929 cri.go:89] found id: ""
	I0717 01:58:45.346858   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.346868   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:45.346876   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:45.346938   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:45.390879   71929 cri.go:89] found id: ""
	I0717 01:58:45.390905   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.390913   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:45.390918   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:45.390966   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:45.426794   71929 cri.go:89] found id: ""
	I0717 01:58:45.426823   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.426834   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:45.426841   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:45.426902   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:45.463834   71929 cri.go:89] found id: ""
	I0717 01:58:45.463863   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.463873   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:45.463880   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:45.463942   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:45.500660   71929 cri.go:89] found id: ""
	I0717 01:58:45.500689   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.500701   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:45.500708   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:45.500766   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:45.537332   71929 cri.go:89] found id: ""
	I0717 01:58:45.537356   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.537364   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:45.537373   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:45.537388   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:45.551194   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:45.551222   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:45.623863   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:45.623892   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:45.623906   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:45.699740   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:45.699782   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:45.739580   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:45.739613   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:44.803138   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:47.302471   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:46.434311   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:48.933004   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:45.779778   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:48.279595   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:48.300789   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:48.315608   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:48.315667   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:48.353050   71929 cri.go:89] found id: ""
	I0717 01:58:48.353076   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.353084   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:48.353089   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:48.353133   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:48.394789   71929 cri.go:89] found id: ""
	I0717 01:58:48.394817   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.394829   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:48.394837   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:48.394900   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:48.433430   71929 cri.go:89] found id: ""
	I0717 01:58:48.433457   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.433468   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:48.433475   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:48.433530   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:48.467215   71929 cri.go:89] found id: ""
	I0717 01:58:48.467243   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.467254   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:48.467262   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:48.467318   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:48.501087   71929 cri.go:89] found id: ""
	I0717 01:58:48.501120   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.501131   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:48.501138   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:48.501204   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:48.538648   71929 cri.go:89] found id: ""
	I0717 01:58:48.538683   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.538696   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:48.538706   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:48.538762   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:48.573006   71929 cri.go:89] found id: ""
	I0717 01:58:48.573030   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.573040   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:48.573047   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:48.573106   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:48.608779   71929 cri.go:89] found id: ""
	I0717 01:58:48.608803   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.608813   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:48.608824   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:48.608837   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:48.659250   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:48.659290   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:48.673418   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:48.673449   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:48.748175   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:48.748196   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:48.748207   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:48.824238   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:48.824274   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:51.367155   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:51.382458   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:51.382527   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:51.424005   71929 cri.go:89] found id: ""
	I0717 01:58:51.424040   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.424051   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:51.424059   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:51.424117   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:51.463318   71929 cri.go:89] found id: ""
	I0717 01:58:51.463348   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.463357   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:51.463363   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:51.463414   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:51.502261   71929 cri.go:89] found id: ""
	I0717 01:58:51.502290   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.502301   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:51.502309   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:51.502362   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:51.536277   71929 cri.go:89] found id: ""
	I0717 01:58:51.536308   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.536319   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:51.536327   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:51.536392   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:51.580598   71929 cri.go:89] found id: ""
	I0717 01:58:51.580629   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.580640   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:51.580648   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:51.580726   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:51.618666   71929 cri.go:89] found id: ""
	I0717 01:58:51.618690   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.618697   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:51.618702   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:51.618747   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:51.654742   71929 cri.go:89] found id: ""
	I0717 01:58:51.654777   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.654790   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:51.654799   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:51.654863   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:51.698006   71929 cri.go:89] found id: ""
	I0717 01:58:51.698034   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.698043   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:51.698051   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:51.698062   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:51.754812   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:51.754852   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:51.771887   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:51.771919   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:51.859627   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:51.859657   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:51.859675   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:51.946633   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:51.946673   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:49.302540   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:51.803884   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:51.433981   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:53.933306   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:50.781428   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:53.279780   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:54.494188   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:54.509111   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:54.509190   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:54.546424   71929 cri.go:89] found id: ""
	I0717 01:58:54.546454   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.546464   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:54.546471   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:54.546532   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:54.586811   71929 cri.go:89] found id: ""
	I0717 01:58:54.586841   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.586853   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:54.586860   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:54.586918   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:54.627350   71929 cri.go:89] found id: ""
	I0717 01:58:54.627375   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.627383   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:54.627388   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:54.627438   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:54.665901   71929 cri.go:89] found id: ""
	I0717 01:58:54.665941   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.665954   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:54.665974   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:54.666041   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:54.702921   71929 cri.go:89] found id: ""
	I0717 01:58:54.702948   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.702958   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:54.702965   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:54.703027   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:54.737378   71929 cri.go:89] found id: ""
	I0717 01:58:54.737406   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.737414   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:54.737421   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:54.737469   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:54.771924   71929 cri.go:89] found id: ""
	I0717 01:58:54.771954   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.771964   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:54.771971   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:54.772055   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:54.812939   71929 cri.go:89] found id: ""
	I0717 01:58:54.812972   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.812983   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:54.812995   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:54.813010   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:54.862979   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:54.863013   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:54.877467   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:54.877504   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:54.953924   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:54.953950   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:54.953963   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:55.032019   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:55.032052   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:54.302727   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:56.311656   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:55.933968   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:58.432611   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:55.778263   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:57.781311   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:00.278937   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:57.573130   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:57.591689   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:57.591762   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:57.626444   71929 cri.go:89] found id: ""
	I0717 01:58:57.626469   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.626479   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:57.626486   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:57.626570   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:57.661280   71929 cri.go:89] found id: ""
	I0717 01:58:57.661305   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.661314   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:57.661321   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:57.661376   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:57.695678   71929 cri.go:89] found id: ""
	I0717 01:58:57.695703   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.695711   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:57.695717   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:57.695762   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:57.729705   71929 cri.go:89] found id: ""
	I0717 01:58:57.729734   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.729742   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:57.729748   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:57.729804   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:57.763338   71929 cri.go:89] found id: ""
	I0717 01:58:57.763365   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.763373   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:57.763387   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:57.763433   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:57.800576   71929 cri.go:89] found id: ""
	I0717 01:58:57.800600   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.800608   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:57.800615   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:57.800701   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:57.842401   71929 cri.go:89] found id: ""
	I0717 01:58:57.842428   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.842439   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:57.842446   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:57.842503   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:57.880355   71929 cri.go:89] found id: ""
	I0717 01:58:57.880379   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.880387   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:57.880395   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:57.880412   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:57.938215   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:57.938252   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:57.952835   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:57.952876   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:58.027203   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:58.027231   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:58.027246   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:58.108442   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:58.108483   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:00.648580   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:00.662596   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:00.662667   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:00.696315   71929 cri.go:89] found id: ""
	I0717 01:59:00.696342   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.696351   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:00.696356   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:00.696411   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:00.732117   71929 cri.go:89] found id: ""
	I0717 01:59:00.732147   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.732158   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:00.732164   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:00.732212   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:00.768747   71929 cri.go:89] found id: ""
	I0717 01:59:00.768779   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.768790   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:00.768797   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:00.768856   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:00.807557   71929 cri.go:89] found id: ""
	I0717 01:59:00.807585   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.807592   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:00.807598   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:00.807651   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:00.844127   71929 cri.go:89] found id: ""
	I0717 01:59:00.844152   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.844161   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:00.844166   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:00.844222   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:00.879565   71929 cri.go:89] found id: ""
	I0717 01:59:00.879590   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.879597   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:00.879613   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:00.879684   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:00.917352   71929 cri.go:89] found id: ""
	I0717 01:59:00.917379   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.917387   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:00.917393   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:00.917440   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:00.952603   71929 cri.go:89] found id: ""
	I0717 01:59:00.952630   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.952637   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:00.952647   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:00.952688   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:01.007203   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:01.007242   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:01.021476   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:01.021512   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:01.102283   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:01.102306   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:01.102320   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:01.175736   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:01.175771   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:58.803034   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:00.803718   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:00.932781   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:03.433188   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:02.281269   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:04.779257   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:03.717612   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:03.732446   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:03.732511   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:03.767485   71929 cri.go:89] found id: ""
	I0717 01:59:03.767519   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.767533   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:03.767542   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:03.767607   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:03.803961   71929 cri.go:89] found id: ""
	I0717 01:59:03.803989   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.804000   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:03.804007   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:03.804074   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:03.842734   71929 cri.go:89] found id: ""
	I0717 01:59:03.842768   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.842780   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:03.842788   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:03.842915   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:03.883571   71929 cri.go:89] found id: ""
	I0717 01:59:03.883598   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.883608   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:03.883616   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:03.883682   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:03.922037   71929 cri.go:89] found id: ""
	I0717 01:59:03.922065   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.922076   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:03.922084   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:03.922143   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:03.961135   71929 cri.go:89] found id: ""
	I0717 01:59:03.961165   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.961176   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:03.961183   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:03.961244   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:03.995542   71929 cri.go:89] found id: ""
	I0717 01:59:03.995570   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.995580   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:03.995589   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:03.995647   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:04.030142   71929 cri.go:89] found id: ""
	I0717 01:59:04.030170   71929 logs.go:276] 0 containers: []
	W0717 01:59:04.030178   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:04.030187   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:04.030198   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:04.110329   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:04.110366   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:04.152194   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:04.152224   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:04.204012   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:04.204048   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:04.218261   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:04.218291   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:04.290786   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:06.791166   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:06.806662   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:06.806722   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:06.841447   71929 cri.go:89] found id: ""
	I0717 01:59:06.841476   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.841486   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:06.841494   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:06.841554   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:06.879920   71929 cri.go:89] found id: ""
	I0717 01:59:06.879956   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.879971   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:06.879976   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:06.880033   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:06.914436   71929 cri.go:89] found id: ""
	I0717 01:59:06.914465   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.914476   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:06.914484   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:06.914566   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:06.952098   71929 cri.go:89] found id: ""
	I0717 01:59:06.952127   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.952135   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:06.952141   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:06.952187   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:06.988054   71929 cri.go:89] found id: ""
	I0717 01:59:06.988085   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.988096   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:06.988103   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:06.988168   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:07.026633   71929 cri.go:89] found id: ""
	I0717 01:59:07.026658   71929 logs.go:276] 0 containers: []
	W0717 01:59:07.026670   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:07.026676   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:07.026732   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:07.064433   71929 cri.go:89] found id: ""
	I0717 01:59:07.064454   71929 logs.go:276] 0 containers: []
	W0717 01:59:07.064463   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:07.064468   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:07.064545   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:07.108352   71929 cri.go:89] found id: ""
	I0717 01:59:07.108385   71929 logs.go:276] 0 containers: []
	W0717 01:59:07.108396   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:07.108410   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:07.108428   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:07.163554   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:07.163593   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:07.177221   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:07.177249   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:07.249712   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:07.249735   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:07.249748   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:07.333011   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:07.333044   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:03.303048   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:05.304001   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:07.314317   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:05.932370   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:07.933031   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:09.933728   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:06.780342   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:09.279683   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:09.873187   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:09.887579   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:09.887658   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:09.923675   71929 cri.go:89] found id: ""
	I0717 01:59:09.923706   71929 logs.go:276] 0 containers: []
	W0717 01:59:09.923716   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:09.923724   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:09.923789   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:09.961248   71929 cri.go:89] found id: ""
	I0717 01:59:09.961278   71929 logs.go:276] 0 containers: []
	W0717 01:59:09.961288   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:09.961296   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:09.961354   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:10.000069   71929 cri.go:89] found id: ""
	I0717 01:59:10.000094   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.000101   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:10.000107   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:10.000157   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:10.036784   71929 cri.go:89] found id: ""
	I0717 01:59:10.036808   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.036815   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:10.036820   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:10.036869   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:10.072746   71929 cri.go:89] found id: ""
	I0717 01:59:10.072778   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.072789   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:10.072796   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:10.072856   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:10.109520   71929 cri.go:89] found id: ""
	I0717 01:59:10.109544   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.109552   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:10.109557   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:10.109608   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:10.142521   71929 cri.go:89] found id: ""
	I0717 01:59:10.142565   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.142576   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:10.142584   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:10.142647   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:10.175772   71929 cri.go:89] found id: ""
	I0717 01:59:10.175800   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.175812   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:10.175823   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:10.175837   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:10.213534   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:10.213561   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:10.266449   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:10.266485   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:10.282204   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:10.282234   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:10.353974   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:10.354004   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:10.354017   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:09.802047   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:11.802200   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:12.433722   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:14.932285   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:11.780394   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:13.781691   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:12.936509   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:12.951547   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:12.951616   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:12.987833   71929 cri.go:89] found id: ""
	I0717 01:59:12.987860   71929 logs.go:276] 0 containers: []
	W0717 01:59:12.987868   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:12.987873   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:12.987922   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:13.026500   71929 cri.go:89] found id: ""
	I0717 01:59:13.026529   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.026539   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:13.026546   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:13.026625   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:13.061631   71929 cri.go:89] found id: ""
	I0717 01:59:13.061664   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.061674   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:13.061682   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:13.061745   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:13.099449   71929 cri.go:89] found id: ""
	I0717 01:59:13.099476   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.099487   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:13.099494   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:13.099565   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:13.137271   71929 cri.go:89] found id: ""
	I0717 01:59:13.137299   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.137309   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:13.137317   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:13.137384   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:13.174432   71929 cri.go:89] found id: ""
	I0717 01:59:13.174462   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.174472   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:13.174478   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:13.174539   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:13.212820   71929 cri.go:89] found id: ""
	I0717 01:59:13.212845   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.212855   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:13.212865   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:13.212930   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:13.245961   71929 cri.go:89] found id: ""
	I0717 01:59:13.245993   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.246004   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:13.246014   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:13.246028   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:13.284801   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:13.284828   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:13.338476   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:13.338511   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:13.352751   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:13.352777   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:13.434001   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:13.434035   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:13.434050   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:16.022525   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:16.036863   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:16.036941   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:16.074370   71929 cri.go:89] found id: ""
	I0717 01:59:16.074398   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.074409   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:16.074416   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:16.074476   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:16.112239   71929 cri.go:89] found id: ""
	I0717 01:59:16.112267   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.112276   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:16.112281   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:16.112329   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:16.147398   71929 cri.go:89] found id: ""
	I0717 01:59:16.147422   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.147429   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:16.147435   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:16.147490   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:16.182112   71929 cri.go:89] found id: ""
	I0717 01:59:16.182141   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.182149   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:16.182155   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:16.182203   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:16.219134   71929 cri.go:89] found id: ""
	I0717 01:59:16.219163   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.219174   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:16.219182   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:16.219238   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:16.255892   71929 cri.go:89] found id: ""
	I0717 01:59:16.255924   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.255934   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:16.255943   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:16.256003   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:16.291202   71929 cri.go:89] found id: ""
	I0717 01:59:16.291228   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.291238   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:16.291245   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:16.291304   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:16.330748   71929 cri.go:89] found id: ""
	I0717 01:59:16.330779   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.330790   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:16.330801   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:16.330815   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:16.344628   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:16.344668   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:16.415735   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:16.415761   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:16.415775   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:16.499411   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:16.499449   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:16.541244   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:16.541270   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:13.802477   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:16.311229   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:16.933493   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:18.934299   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:16.279421   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:18.778998   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:19.095060   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:19.107920   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:19.107976   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:19.143446   71929 cri.go:89] found id: ""
	I0717 01:59:19.143476   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.143485   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:19.143490   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:19.143550   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:19.179216   71929 cri.go:89] found id: ""
	I0717 01:59:19.179247   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.179259   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:19.179266   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:19.179317   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:19.212468   71929 cri.go:89] found id: ""
	I0717 01:59:19.212498   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.212508   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:19.212516   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:19.212574   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:19.245019   71929 cri.go:89] found id: ""
	I0717 01:59:19.245047   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.245058   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:19.245065   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:19.245123   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:19.278430   71929 cri.go:89] found id: ""
	I0717 01:59:19.278457   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.278467   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:19.278474   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:19.278530   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:19.317685   71929 cri.go:89] found id: ""
	I0717 01:59:19.317714   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.317722   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:19.317729   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:19.317783   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:19.352938   71929 cri.go:89] found id: ""
	I0717 01:59:19.352974   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.352986   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:19.353000   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:19.353052   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:19.387238   71929 cri.go:89] found id: ""
	I0717 01:59:19.387272   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.387283   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:19.387295   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:19.387314   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:19.440138   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:19.440171   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:19.456372   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:19.456402   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:19.527881   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:19.527906   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:19.527921   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:19.611903   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:19.611937   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:22.160422   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:22.172802   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:22.172862   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:22.209283   71929 cri.go:89] found id: ""
	I0717 01:59:22.209315   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.209327   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:22.209335   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:22.209396   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:22.243927   71929 cri.go:89] found id: ""
	I0717 01:59:22.243955   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.243965   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:22.243972   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:22.244022   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:22.276730   71929 cri.go:89] found id: ""
	I0717 01:59:22.276754   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.276761   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:22.276767   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:22.276814   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:22.319378   71929 cri.go:89] found id: ""
	I0717 01:59:22.319407   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.319418   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:22.319425   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:22.319482   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:22.358272   71929 cri.go:89] found id: ""
	I0717 01:59:22.358298   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.358307   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:22.358312   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:22.358362   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:22.395358   71929 cri.go:89] found id: ""
	I0717 01:59:22.395393   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.395405   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:22.395414   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:22.395477   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:18.802881   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:21.303532   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:21.433636   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:23.932345   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:21.279596   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:23.279700   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:25.280649   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:22.435158   71929 cri.go:89] found id: ""
	I0717 01:59:22.435184   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.435194   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:22.435201   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:22.435248   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:22.471553   71929 cri.go:89] found id: ""
	I0717 01:59:22.471588   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.471595   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:22.471604   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:22.471616   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:22.523133   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:22.523169   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:22.539212   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:22.539246   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:22.615707   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:22.615729   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:22.615744   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:22.696758   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:22.696795   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:25.238496   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:25.252882   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:25.252946   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:25.290173   71929 cri.go:89] found id: ""
	I0717 01:59:25.290197   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.290205   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:25.290210   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:25.290263   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:25.325926   71929 cri.go:89] found id: ""
	I0717 01:59:25.325968   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.325979   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:25.325985   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:25.326032   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:25.358310   71929 cri.go:89] found id: ""
	I0717 01:59:25.358362   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.358371   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:25.358377   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:25.358426   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:25.393575   71929 cri.go:89] found id: ""
	I0717 01:59:25.393605   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.393615   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:25.393622   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:25.393684   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:25.429357   71929 cri.go:89] found id: ""
	I0717 01:59:25.429448   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.429466   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:25.429474   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:25.429546   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:25.466992   71929 cri.go:89] found id: ""
	I0717 01:59:25.467020   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.467028   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:25.467034   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:25.467080   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:25.503545   71929 cri.go:89] found id: ""
	I0717 01:59:25.503575   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.503587   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:25.503594   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:25.503643   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:25.542957   71929 cri.go:89] found id: ""
	I0717 01:59:25.542983   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.542993   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:25.543003   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:25.543015   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:25.598813   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:25.598852   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:25.618060   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:25.618098   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:25.690079   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:25.690105   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:25.690119   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:25.765956   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:25.765994   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:23.803366   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:25.804525   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:25.932447   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:27.933276   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:29.933461   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:27.286160   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:29.781318   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:28.311715   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:28.325493   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:28.325554   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:28.365783   71929 cri.go:89] found id: ""
	I0717 01:59:28.365810   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.365821   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:28.365829   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:28.365885   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:28.401847   71929 cri.go:89] found id: ""
	I0717 01:59:28.401875   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.401883   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:28.401895   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:28.401954   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:28.442236   71929 cri.go:89] found id: ""
	I0717 01:59:28.442261   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.442272   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:28.442278   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:28.442340   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:28.476832   71929 cri.go:89] found id: ""
	I0717 01:59:28.476857   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.476866   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:28.476873   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:28.476928   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:28.512040   71929 cri.go:89] found id: ""
	I0717 01:59:28.512068   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.512075   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:28.512081   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:28.512136   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:28.547516   71929 cri.go:89] found id: ""
	I0717 01:59:28.547547   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.547558   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:28.547566   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:28.547625   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:28.580380   71929 cri.go:89] found id: ""
	I0717 01:59:28.580406   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.580417   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:28.580427   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:28.580485   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:28.616029   71929 cri.go:89] found id: ""
	I0717 01:59:28.616059   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.616069   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:28.616080   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:28.616095   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:28.670188   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:28.670230   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:28.687315   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:28.687355   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:28.763591   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:28.763612   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:28.763627   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:28.848925   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:28.848959   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:31.388294   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:31.404748   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:31.404814   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:31.446437   71929 cri.go:89] found id: ""
	I0717 01:59:31.446468   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.446478   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:31.446484   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:31.446531   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:31.487797   71929 cri.go:89] found id: ""
	I0717 01:59:31.487828   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.487840   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:31.487847   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:31.487895   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:31.525327   71929 cri.go:89] found id: ""
	I0717 01:59:31.525354   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.525368   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:31.525375   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:31.525436   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:31.564106   71929 cri.go:89] found id: ""
	I0717 01:59:31.564154   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.564166   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:31.564173   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:31.564234   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:31.603345   71929 cri.go:89] found id: ""
	I0717 01:59:31.603374   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.603385   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:31.603393   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:31.603456   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:31.641727   71929 cri.go:89] found id: ""
	I0717 01:59:31.641753   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.641769   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:31.641776   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:31.641837   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:31.680825   71929 cri.go:89] found id: ""
	I0717 01:59:31.680856   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.680866   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:31.680873   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:31.680930   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:31.714325   71929 cri.go:89] found id: ""
	I0717 01:59:31.714348   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.714355   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:31.714363   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:31.714374   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:31.765899   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:31.765934   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:31.781417   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:31.781447   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:31.857586   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:31.857607   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:31.857622   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:31.937171   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:31.937197   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:28.304014   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:30.802684   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:32.803604   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:31.933945   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:34.435259   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:31.785641   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:34.279814   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:34.478176   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:34.492153   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:34.492223   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:34.526959   71929 cri.go:89] found id: ""
	I0717 01:59:34.526984   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.526998   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:34.527006   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:34.527064   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:34.564485   71929 cri.go:89] found id: ""
	I0717 01:59:34.564534   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.564546   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:34.564591   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:34.564706   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:34.604611   71929 cri.go:89] found id: ""
	I0717 01:59:34.604637   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.604649   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:34.604657   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:34.604718   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:34.640851   71929 cri.go:89] found id: ""
	I0717 01:59:34.640882   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.640892   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:34.640897   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:34.640956   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:34.675828   71929 cri.go:89] found id: ""
	I0717 01:59:34.675856   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.675863   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:34.675869   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:34.675918   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:34.710468   71929 cri.go:89] found id: ""
	I0717 01:59:34.710496   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.710506   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:34.710514   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:34.710595   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:34.749218   71929 cri.go:89] found id: ""
	I0717 01:59:34.749249   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.749260   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:34.749267   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:34.749328   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:34.784934   71929 cri.go:89] found id: ""
	I0717 01:59:34.784969   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.784979   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:34.784990   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:34.785006   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:34.799836   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:34.799870   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:34.870218   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:34.870239   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:34.870254   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:34.948782   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:34.948817   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:34.992295   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:34.992324   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:34.803649   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:37.304530   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:36.933199   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:39.432504   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:36.280185   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:38.280499   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:37.545759   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:37.559648   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:37.559724   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:37.596642   71929 cri.go:89] found id: ""
	I0717 01:59:37.596696   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.596707   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:37.596715   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:37.596770   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:37.637251   71929 cri.go:89] found id: ""
	I0717 01:59:37.637283   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.637312   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:37.637318   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:37.637372   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:37.672811   71929 cri.go:89] found id: ""
	I0717 01:59:37.672839   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.672847   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:37.672852   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:37.672909   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:37.706864   71929 cri.go:89] found id: ""
	I0717 01:59:37.706904   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.706916   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:37.706923   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:37.706975   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:37.747539   71929 cri.go:89] found id: ""
	I0717 01:59:37.747567   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.747576   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:37.747581   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:37.747630   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:37.785229   71929 cri.go:89] found id: ""
	I0717 01:59:37.785251   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.785260   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:37.785268   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:37.785333   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:37.840428   71929 cri.go:89] found id: ""
	I0717 01:59:37.840460   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.840471   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:37.840477   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:37.840533   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:37.876888   71929 cri.go:89] found id: ""
	I0717 01:59:37.876916   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.876924   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:37.876932   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:37.876942   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:37.926161   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:37.926197   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:37.940857   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:37.940885   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:38.019210   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:38.019232   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:38.019245   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:38.112428   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:38.112471   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:40.657215   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:40.670824   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:40.670900   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:40.704008   71929 cri.go:89] found id: ""
	I0717 01:59:40.704030   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.704040   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:40.704048   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:40.704102   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:40.739544   71929 cri.go:89] found id: ""
	I0717 01:59:40.739576   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.739587   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:40.739595   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:40.739664   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:40.773132   71929 cri.go:89] found id: ""
	I0717 01:59:40.773159   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.773169   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:40.773177   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:40.773239   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:40.810162   71929 cri.go:89] found id: ""
	I0717 01:59:40.810183   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.810193   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:40.810200   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:40.810256   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:40.844797   71929 cri.go:89] found id: ""
	I0717 01:59:40.844829   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.844840   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:40.844847   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:40.844918   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:40.884444   71929 cri.go:89] found id: ""
	I0717 01:59:40.884468   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.884476   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:40.884482   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:40.884544   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:40.919413   71929 cri.go:89] found id: ""
	I0717 01:59:40.919437   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.919445   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:40.919451   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:40.919505   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:40.961870   71929 cri.go:89] found id: ""
	I0717 01:59:40.961894   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.961902   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:40.961910   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:40.961921   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:41.010600   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:41.010638   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:41.025557   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:41.025589   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:41.100100   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:41.100123   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:41.100135   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:41.185809   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:41.185850   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:39.802297   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:41.802803   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:41.432998   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:43.433412   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:40.779796   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:42.781981   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:43.279014   71522 pod_ready.go:81] duration metric: took 4m0.006085275s for pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace to be "Ready" ...
	E0717 01:59:43.279043   71522 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 01:59:43.279053   71522 pod_ready.go:38] duration metric: took 4m2.008175999s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:59:43.279073   71522 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:59:43.279105   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:43.279162   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:43.327674   71522 cri.go:89] found id: "3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:43.327725   71522 cri.go:89] found id: ""
	I0717 01:59:43.327734   71522 logs.go:276] 1 containers: [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82]
	I0717 01:59:43.327801   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.332247   71522 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:43.332303   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:43.371598   71522 cri.go:89] found id: "5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:43.371627   71522 cri.go:89] found id: ""
	I0717 01:59:43.371635   71522 logs.go:276] 1 containers: [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9]
	I0717 01:59:43.371683   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.377203   71522 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:43.377265   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:43.416351   71522 cri.go:89] found id: "92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:43.416374   71522 cri.go:89] found id: "4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:43.416380   71522 cri.go:89] found id: ""
	I0717 01:59:43.416389   71522 logs.go:276] 2 containers: [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013]
	I0717 01:59:43.416448   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.420909   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.425228   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:43.425278   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:43.472117   71522 cri.go:89] found id: "1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:43.472139   71522 cri.go:89] found id: ""
	I0717 01:59:43.472147   71522 logs.go:276] 1 containers: [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8]
	I0717 01:59:43.472194   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.476632   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:43.476698   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:43.517337   71522 cri.go:89] found id: "6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:43.517360   71522 cri.go:89] found id: ""
	I0717 01:59:43.517369   71522 logs.go:276] 1 containers: [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a]
	I0717 01:59:43.517430   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.522437   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:43.522519   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:43.564511   71522 cri.go:89] found id: "e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:43.564530   71522 cri.go:89] found id: ""
	I0717 01:59:43.564537   71522 logs.go:276] 1 containers: [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3]
	I0717 01:59:43.564595   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.570357   71522 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:43.570440   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:43.615389   71522 cri.go:89] found id: ""
	I0717 01:59:43.615418   71522 logs.go:276] 0 containers: []
	W0717 01:59:43.615427   71522 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:43.615433   71522 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:59:43.615543   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:59:43.652739   71522 cri.go:89] found id: "e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:43.652764   71522 cri.go:89] found id: "abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:43.652769   71522 cri.go:89] found id: ""
	I0717 01:59:43.652777   71522 logs.go:276] 2 containers: [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299]
	I0717 01:59:43.652835   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.657323   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.661682   71522 logs.go:123] Gathering logs for kube-controller-manager [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3] ...
	I0717 01:59:43.661702   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:43.714396   71522 logs.go:123] Gathering logs for container status ...
	I0717 01:59:43.714434   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:43.761072   71522 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:43.761110   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:43.825934   71522 logs.go:123] Gathering logs for coredns [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7] ...
	I0717 01:59:43.825963   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:43.871287   71522 logs.go:123] Gathering logs for coredns [4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013] ...
	I0717 01:59:43.871316   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:43.907488   71522 logs.go:123] Gathering logs for kube-scheduler [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8] ...
	I0717 01:59:43.907517   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:43.949876   71522 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:43.949903   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:59:44.093084   71522 logs.go:123] Gathering logs for kube-apiserver [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82] ...
	I0717 01:59:44.093116   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:44.153161   71522 logs.go:123] Gathering logs for kube-proxy [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a] ...
	I0717 01:59:44.153206   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:44.197219   71522 logs.go:123] Gathering logs for storage-provisioner [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77] ...
	I0717 01:59:44.197249   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:44.242441   71522 logs.go:123] Gathering logs for storage-provisioner [abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299] ...
	I0717 01:59:44.242478   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:44.288622   71522 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:44.288646   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:44.839680   71522 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:44.839712   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:44.854119   71522 logs.go:123] Gathering logs for etcd [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9] ...
	I0717 01:59:44.854145   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:43.725542   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:43.739304   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:43.739379   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:43.776754   71929 cri.go:89] found id: ""
	I0717 01:59:43.776783   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.776795   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:43.776802   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:43.776863   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:43.819729   71929 cri.go:89] found id: ""
	I0717 01:59:43.819756   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.819767   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:43.819774   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:43.819828   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:43.860283   71929 cri.go:89] found id: ""
	I0717 01:59:43.860311   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.860322   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:43.860329   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:43.860391   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:43.898684   71929 cri.go:89] found id: ""
	I0717 01:59:43.898712   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.898722   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:43.898729   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:43.898788   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:43.942996   71929 cri.go:89] found id: ""
	I0717 01:59:43.943019   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.943026   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:43.943031   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:43.943077   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:43.981799   71929 cri.go:89] found id: ""
	I0717 01:59:43.981828   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.981839   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:43.981846   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:43.981903   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:44.018222   71929 cri.go:89] found id: ""
	I0717 01:59:44.018252   71929 logs.go:276] 0 containers: []
	W0717 01:59:44.018262   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:44.018267   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:44.018326   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:44.056264   71929 cri.go:89] found id: ""
	I0717 01:59:44.056293   71929 logs.go:276] 0 containers: []
	W0717 01:59:44.056304   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:44.056315   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:44.056334   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:44.172061   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:44.172108   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:44.219597   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:44.219627   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:44.272299   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:44.272334   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:44.287811   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:44.287848   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:44.379183   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:46.879529   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:46.893142   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:46.893207   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:46.929073   71929 cri.go:89] found id: ""
	I0717 01:59:46.929101   71929 logs.go:276] 0 containers: []
	W0717 01:59:46.929113   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:46.929121   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:46.929173   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:46.963697   71929 cri.go:89] found id: ""
	I0717 01:59:46.963725   71929 logs.go:276] 0 containers: []
	W0717 01:59:46.963733   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:46.963739   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:46.963798   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:47.000697   71929 cri.go:89] found id: ""
	I0717 01:59:47.000730   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.000747   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:47.000752   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:47.000804   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:47.037270   71929 cri.go:89] found id: ""
	I0717 01:59:47.037304   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.037316   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:47.037323   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:47.037382   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:47.072210   71929 cri.go:89] found id: ""
	I0717 01:59:47.072238   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.072249   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:47.072256   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:47.072321   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:47.108404   71929 cri.go:89] found id: ""
	I0717 01:59:47.108432   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.108443   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:47.108451   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:47.108535   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:47.146122   71929 cri.go:89] found id: ""
	I0717 01:59:47.146151   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.146162   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:47.146169   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:47.146225   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:47.187418   71929 cri.go:89] found id: ""
	I0717 01:59:47.187446   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.187455   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:47.187466   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:47.187481   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:47.201023   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:47.201053   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:47.269851   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:47.269878   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:47.269894   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:47.356417   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:47.356456   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:43.803326   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:46.302939   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:45.433688   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:47.933271   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:49.934222   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:47.403005   71522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:47.420984   71522 api_server.go:72] duration metric: took 4m13.369710312s to wait for apiserver process to appear ...
	I0717 01:59:47.421011   71522 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:59:47.421065   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:47.421128   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:47.465800   71522 cri.go:89] found id: "3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:47.465830   71522 cri.go:89] found id: ""
	I0717 01:59:47.465838   71522 logs.go:276] 1 containers: [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82]
	I0717 01:59:47.465890   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.470561   71522 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:47.470628   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:47.513302   71522 cri.go:89] found id: "5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:47.513321   71522 cri.go:89] found id: ""
	I0717 01:59:47.513328   71522 logs.go:276] 1 containers: [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9]
	I0717 01:59:47.513373   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.517668   71522 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:47.517720   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:47.563466   71522 cri.go:89] found id: "92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:47.563491   71522 cri.go:89] found id: "4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:47.563495   71522 cri.go:89] found id: ""
	I0717 01:59:47.563502   71522 logs.go:276] 2 containers: [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013]
	I0717 01:59:47.563563   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.568058   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.572381   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:47.572432   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:47.618919   71522 cri.go:89] found id: "1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:47.618944   71522 cri.go:89] found id: ""
	I0717 01:59:47.618953   71522 logs.go:276] 1 containers: [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8]
	I0717 01:59:47.619014   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.623475   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:47.623525   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:47.662294   71522 cri.go:89] found id: "6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:47.662321   71522 cri.go:89] found id: ""
	I0717 01:59:47.662329   71522 logs.go:276] 1 containers: [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a]
	I0717 01:59:47.662384   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.666740   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:47.666806   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:47.708962   71522 cri.go:89] found id: "e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:47.708990   71522 cri.go:89] found id: ""
	I0717 01:59:47.708999   71522 logs.go:276] 1 containers: [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3]
	I0717 01:59:47.709058   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.713551   71522 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:47.713628   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:47.750766   71522 cri.go:89] found id: ""
	I0717 01:59:47.750797   71522 logs.go:276] 0 containers: []
	W0717 01:59:47.750807   71522 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:47.750814   71522 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:59:47.750878   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:59:47.786664   71522 cri.go:89] found id: "e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:47.786687   71522 cri.go:89] found id: "abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:47.786692   71522 cri.go:89] found id: ""
	I0717 01:59:47.786699   71522 logs.go:276] 2 containers: [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299]
	I0717 01:59:47.786761   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.791460   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.795553   71522 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:47.795576   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:48.298229   71522 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:48.298271   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:48.313542   71522 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:48.313573   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:59:48.429625   71522 logs.go:123] Gathering logs for coredns [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7] ...
	I0717 01:59:48.429663   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:48.475651   71522 logs.go:123] Gathering logs for kube-proxy [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a] ...
	I0717 01:59:48.475677   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:48.514075   71522 logs.go:123] Gathering logs for coredns [4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013] ...
	I0717 01:59:48.514101   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:48.550152   71522 logs.go:123] Gathering logs for kube-scheduler [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8] ...
	I0717 01:59:48.550182   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:48.592743   71522 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:48.592771   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:48.652433   71522 logs.go:123] Gathering logs for storage-provisioner [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77] ...
	I0717 01:59:48.652464   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:48.699763   71522 logs.go:123] Gathering logs for storage-provisioner [abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299] ...
	I0717 01:59:48.699796   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:48.737467   71522 logs.go:123] Gathering logs for kube-apiserver [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82] ...
	I0717 01:59:48.737504   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:48.788389   71522 logs.go:123] Gathering logs for etcd [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9] ...
	I0717 01:59:48.788425   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:48.842323   71522 logs.go:123] Gathering logs for kube-controller-manager [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3] ...
	I0717 01:59:48.842357   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:48.900716   71522 logs.go:123] Gathering logs for container status ...
	I0717 01:59:48.900746   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:47.397763   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:47.397791   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:49.954670   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:49.968840   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:49.968898   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:50.003598   71929 cri.go:89] found id: ""
	I0717 01:59:50.003635   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.003646   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:50.003654   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:50.003714   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:50.040494   71929 cri.go:89] found id: ""
	I0717 01:59:50.040546   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.040558   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:50.040564   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:50.040624   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:50.074921   71929 cri.go:89] found id: ""
	I0717 01:59:50.074950   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.074959   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:50.074965   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:50.075015   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:50.117002   71929 cri.go:89] found id: ""
	I0717 01:59:50.117030   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.117041   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:50.117049   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:50.117106   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:50.163026   71929 cri.go:89] found id: ""
	I0717 01:59:50.163052   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.163063   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:50.163071   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:50.163129   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:50.197709   71929 cri.go:89] found id: ""
	I0717 01:59:50.197738   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.197749   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:50.197757   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:50.197838   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:50.237776   71929 cri.go:89] found id: ""
	I0717 01:59:50.237808   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.237819   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:50.237827   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:50.237886   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:50.275147   71929 cri.go:89] found id: ""
	I0717 01:59:50.275179   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.275189   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:50.275201   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:50.275215   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:50.329025   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:50.329057   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:50.342745   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:50.342777   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:50.417792   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:50.417817   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:50.417829   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:50.495288   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:50.495322   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:48.306102   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:50.804255   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:52.433248   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:54.931595   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:51.447495   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:59:51.452186   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 200:
	ok
	I0717 01:59:51.453112   71522 api_server.go:141] control plane version: v1.30.2
	I0717 01:59:51.453137   71522 api_server.go:131] duration metric: took 4.032118004s to wait for apiserver health ...
	I0717 01:59:51.453146   71522 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:59:51.453170   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:51.453215   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:51.491272   71522 cri.go:89] found id: "3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:51.491297   71522 cri.go:89] found id: ""
	I0717 01:59:51.491305   71522 logs.go:276] 1 containers: [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82]
	I0717 01:59:51.491365   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.495747   71522 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:51.495795   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:51.538807   71522 cri.go:89] found id: "5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:51.538830   71522 cri.go:89] found id: ""
	I0717 01:59:51.538838   71522 logs.go:276] 1 containers: [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9]
	I0717 01:59:51.538891   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.543454   71522 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:51.543512   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:51.586258   71522 cri.go:89] found id: "92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:51.586292   71522 cri.go:89] found id: "4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:51.586296   71522 cri.go:89] found id: ""
	I0717 01:59:51.586306   71522 logs.go:276] 2 containers: [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013]
	I0717 01:59:51.586360   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.590446   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.594867   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:51.594936   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:51.636079   71522 cri.go:89] found id: "1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:51.636101   71522 cri.go:89] found id: ""
	I0717 01:59:51.636108   71522 logs.go:276] 1 containers: [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8]
	I0717 01:59:51.636159   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.640225   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:51.640283   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:51.676395   71522 cri.go:89] found id: "6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:51.676422   71522 cri.go:89] found id: ""
	I0717 01:59:51.676432   71522 logs.go:276] 1 containers: [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a]
	I0717 01:59:51.676496   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.680974   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:51.681043   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:51.720449   71522 cri.go:89] found id: "e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:51.720476   71522 cri.go:89] found id: ""
	I0717 01:59:51.720483   71522 logs.go:276] 1 containers: [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3]
	I0717 01:59:51.720527   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.724704   71522 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:51.724779   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:51.762892   71522 cri.go:89] found id: ""
	I0717 01:59:51.762923   71522 logs.go:276] 0 containers: []
	W0717 01:59:51.762932   71522 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:51.762939   71522 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:59:51.762986   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:59:51.803675   71522 cri.go:89] found id: "e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:51.803702   71522 cri.go:89] found id: "abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:51.803707   71522 cri.go:89] found id: ""
	I0717 01:59:51.803714   71522 logs.go:276] 2 containers: [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299]
	I0717 01:59:51.803807   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.808188   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.812046   71522 logs.go:123] Gathering logs for container status ...
	I0717 01:59:51.812065   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:51.855800   71522 logs.go:123] Gathering logs for kube-controller-manager [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3] ...
	I0717 01:59:51.855832   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:51.917804   71522 logs.go:123] Gathering logs for storage-provisioner [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77] ...
	I0717 01:59:51.917833   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:51.958797   71522 logs.go:123] Gathering logs for storage-provisioner [abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299] ...
	I0717 01:59:51.958827   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:51.997003   71522 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:51.997034   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:59:52.118345   71522 logs.go:123] Gathering logs for kube-apiserver [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82] ...
	I0717 01:59:52.118381   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:52.174308   71522 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:52.174344   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:52.578823   71522 logs.go:123] Gathering logs for coredns [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7] ...
	I0717 01:59:52.578857   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:52.619962   71522 logs.go:123] Gathering logs for coredns [4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013] ...
	I0717 01:59:52.619994   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:52.667564   71522 logs.go:123] Gathering logs for kube-proxy [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a] ...
	I0717 01:59:52.667593   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:52.714716   71522 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:52.714747   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:52.774123   71522 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:52.774171   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:52.788399   71522 logs.go:123] Gathering logs for etcd [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9] ...
	I0717 01:59:52.788432   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:52.839796   71522 logs.go:123] Gathering logs for kube-scheduler [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8] ...
	I0717 01:59:52.839828   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:55.388404   71522 system_pods.go:59] 9 kube-system pods found
	I0717 01:59:55.388441   71522 system_pods.go:61] "coredns-7db6d8ff4d-9w26c" [530f4d52-5fdc-47c4-8919-44430bf71e05] Running
	I0717 01:59:55.388448   71522 system_pods.go:61] "coredns-7db6d8ff4d-js7sn" [fe3951c5-d98d-4221-b71c-fc4f548b31d8] Running
	I0717 01:59:55.388453   71522 system_pods.go:61] "etcd-default-k8s-diff-port-738184" [a08737dd-a140-4c63-bf0f-9d3527d49de0] Running
	I0717 01:59:55.388458   71522 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-738184" [b24a4ca2-48f9-4603-b6e4-e3fb1ca58e40] Running
	I0717 01:59:55.388465   71522 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-738184" [dd62e27b-a3f1-4da6-b968-6be40743a8fd] Running
	I0717 01:59:55.388469   71522 system_pods.go:61] "kube-proxy-c4n94" [97eee4e8-4f36-412f-9064-57515ab6e932] Running
	I0717 01:59:55.388473   71522 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-738184" [eb87eec9-7533-4704-bd5a-7075d9d8c2f5] Running
	I0717 01:59:55.388484   71522 system_pods.go:61] "metrics-server-569cc877fc-gcjkt" [1859140e-a901-43c2-8c04-b4f8eb63e774] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:59:55.388491   71522 system_pods.go:61] "storage-provisioner" [b36904ec-ef3f-4aee-9276-fe1285e10876] Running
	I0717 01:59:55.388499   71522 system_pods.go:74] duration metric: took 3.93534618s to wait for pod list to return data ...
	I0717 01:59:55.388509   71522 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:59:55.390798   71522 default_sa.go:45] found service account: "default"
	I0717 01:59:55.390829   71522 default_sa.go:55] duration metric: took 2.313714ms for default service account to be created ...
	I0717 01:59:55.390840   71522 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:59:55.399028   71522 system_pods.go:86] 9 kube-system pods found
	I0717 01:59:55.399049   71522 system_pods.go:89] "coredns-7db6d8ff4d-9w26c" [530f4d52-5fdc-47c4-8919-44430bf71e05] Running
	I0717 01:59:55.399054   71522 system_pods.go:89] "coredns-7db6d8ff4d-js7sn" [fe3951c5-d98d-4221-b71c-fc4f548b31d8] Running
	I0717 01:59:55.399059   71522 system_pods.go:89] "etcd-default-k8s-diff-port-738184" [a08737dd-a140-4c63-bf0f-9d3527d49de0] Running
	I0717 01:59:55.399063   71522 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-738184" [b24a4ca2-48f9-4603-b6e4-e3fb1ca58e40] Running
	I0717 01:59:55.399068   71522 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-738184" [dd62e27b-a3f1-4da6-b968-6be40743a8fd] Running
	I0717 01:59:55.399072   71522 system_pods.go:89] "kube-proxy-c4n94" [97eee4e8-4f36-412f-9064-57515ab6e932] Running
	I0717 01:59:55.399076   71522 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-738184" [eb87eec9-7533-4704-bd5a-7075d9d8c2f5] Running
	I0717 01:59:55.399083   71522 system_pods.go:89] "metrics-server-569cc877fc-gcjkt" [1859140e-a901-43c2-8c04-b4f8eb63e774] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:59:55.399090   71522 system_pods.go:89] "storage-provisioner" [b36904ec-ef3f-4aee-9276-fe1285e10876] Running
	I0717 01:59:55.399099   71522 system_pods.go:126] duration metric: took 8.253468ms to wait for k8s-apps to be running ...
	I0717 01:59:55.399108   71522 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:59:55.399152   71522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:59:55.417081   71522 system_svc.go:56] duration metric: took 17.965716ms WaitForService to wait for kubelet
	I0717 01:59:55.417109   71522 kubeadm.go:582] duration metric: took 4m21.36584166s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:59:55.417130   71522 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:59:55.420078   71522 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:59:55.420099   71522 node_conditions.go:123] node cpu capacity is 2
	I0717 01:59:55.420109   71522 node_conditions.go:105] duration metric: took 2.974324ms to run NodePressure ...
	I0717 01:59:55.420119   71522 start.go:241] waiting for startup goroutines ...
	I0717 01:59:55.420126   71522 start.go:246] waiting for cluster config update ...
	I0717 01:59:55.420136   71522 start.go:255] writing updated cluster config ...
	I0717 01:59:55.420406   71522 ssh_runner.go:195] Run: rm -f paused
	I0717 01:59:55.470793   71522 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 01:59:55.472960   71522 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-738184" cluster and "default" namespace by default
	I0717 01:59:53.036151   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:53.049820   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:53.049879   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:53.087144   71929 cri.go:89] found id: ""
	I0717 01:59:53.087175   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.087189   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:53.087195   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:53.087253   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:53.123135   71929 cri.go:89] found id: ""
	I0717 01:59:53.123164   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.123175   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:53.123191   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:53.123254   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:53.157887   71929 cri.go:89] found id: ""
	I0717 01:59:53.157912   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.157922   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:53.157927   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:53.158004   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:53.201002   71929 cri.go:89] found id: ""
	I0717 01:59:53.201033   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.201045   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:53.201054   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:53.201115   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:53.236159   71929 cri.go:89] found id: ""
	I0717 01:59:53.236188   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.236198   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:53.236204   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:53.236258   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:53.277585   71929 cri.go:89] found id: ""
	I0717 01:59:53.277616   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.277627   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:53.277634   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:53.277694   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:53.322722   71929 cri.go:89] found id: ""
	I0717 01:59:53.322747   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.322758   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:53.322765   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:53.322824   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:53.364112   71929 cri.go:89] found id: ""
	I0717 01:59:53.364138   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.364149   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:53.364159   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:53.364172   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:53.418701   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:53.418739   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:53.435004   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:53.435030   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:53.511254   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:53.511274   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:53.511287   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:53.587967   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:53.588003   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:56.130773   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:56.144742   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:56.144811   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:56.180267   71929 cri.go:89] found id: ""
	I0717 01:59:56.180295   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.180306   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:56.180313   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:56.180373   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:56.217223   71929 cri.go:89] found id: ""
	I0717 01:59:56.217252   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.217263   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:56.217269   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:56.217334   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:56.251714   71929 cri.go:89] found id: ""
	I0717 01:59:56.251738   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.251745   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:56.251752   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:56.251805   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:56.292557   71929 cri.go:89] found id: ""
	I0717 01:59:56.292589   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.292597   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:56.292603   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:56.292653   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:56.332463   71929 cri.go:89] found id: ""
	I0717 01:59:56.332491   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.332501   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:56.332508   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:56.332562   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:56.372155   71929 cri.go:89] found id: ""
	I0717 01:59:56.372180   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.372189   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:56.372197   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:56.372255   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:56.415768   71929 cri.go:89] found id: ""
	I0717 01:59:56.415794   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.415806   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:56.415813   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:56.415871   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:56.456920   71929 cri.go:89] found id: ""
	I0717 01:59:56.456951   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.456959   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:56.456968   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:56.456978   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:56.508932   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:56.508965   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:56.522496   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:56.522531   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:56.596839   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:56.596857   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:56.596870   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:56.679237   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:56.679271   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:53.303565   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:55.803725   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:57.806129   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:56.933245   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:59.432536   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:59.220084   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:59.233108   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:59.233182   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:59.266796   71929 cri.go:89] found id: ""
	I0717 01:59:59.266827   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.266838   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:59.266845   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:59.266909   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:59.297992   71929 cri.go:89] found id: ""
	I0717 01:59:59.298017   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.298026   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:59.298032   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:59.298087   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:59.331953   71929 cri.go:89] found id: ""
	I0717 01:59:59.331982   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.331993   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:59.331999   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:59.332069   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:59.368912   71929 cri.go:89] found id: ""
	I0717 01:59:59.368939   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.368948   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:59.368954   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:59.369002   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:59.402886   71929 cri.go:89] found id: ""
	I0717 01:59:59.402911   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.402920   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:59.402926   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:59.402982   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:59.441227   71929 cri.go:89] found id: ""
	I0717 01:59:59.441249   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.441257   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:59.441263   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:59.441322   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:59.479154   71929 cri.go:89] found id: ""
	I0717 01:59:59.479191   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.479213   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:59.479222   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:59.479286   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:59.516259   71929 cri.go:89] found id: ""
	I0717 01:59:59.516299   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.516309   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:59.516319   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:59.516332   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:59.596352   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:59.596385   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:59.639712   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:59.639744   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:59.691399   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:59.691444   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:59.706618   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:59.706648   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:59.778875   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:00:02.279246   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:02.293212   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:02.293284   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:02.330759   71929 cri.go:89] found id: ""
	I0717 02:00:02.330786   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.330795   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 02:00:02.330800   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:02.330848   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:02.366257   71929 cri.go:89] found id: ""
	I0717 02:00:02.366287   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.366298   71929 logs.go:278] No container was found matching "etcd"
	I0717 02:00:02.366305   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:02.366368   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:00.303868   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:02.311063   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:01.432671   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:03.433059   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:02.404321   71929 cri.go:89] found id: ""
	I0717 02:00:02.404348   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.404358   71929 logs.go:278] No container was found matching "coredns"
	I0717 02:00:02.404364   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:02.404432   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:02.444297   71929 cri.go:89] found id: ""
	I0717 02:00:02.444326   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.444342   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 02:00:02.444349   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:02.444406   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:02.478433   71929 cri.go:89] found id: ""
	I0717 02:00:02.478466   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.478477   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 02:00:02.478483   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:02.478530   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:02.515519   71929 cri.go:89] found id: ""
	I0717 02:00:02.515551   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.515560   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 02:00:02.515566   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:02.515618   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:02.551006   71929 cri.go:89] found id: ""
	I0717 02:00:02.551030   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.551038   71929 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:02.551044   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 02:00:02.551110   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 02:00:02.588312   71929 cri.go:89] found id: ""
	I0717 02:00:02.588345   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.588356   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 02:00:02.588367   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:02.588381   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:02.641900   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:02.641932   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:02.656851   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:02.656896   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 02:00:02.728286   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:00:02.728315   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:02.728327   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:02.806807   71929 logs.go:123] Gathering logs for container status ...
	I0717 02:00:02.806847   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:05.355196   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:05.369148   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:05.369231   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:05.405012   71929 cri.go:89] found id: ""
	I0717 02:00:05.405045   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.405057   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 02:00:05.405068   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:05.405132   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:05.450524   71929 cri.go:89] found id: ""
	I0717 02:00:05.450564   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.450575   71929 logs.go:278] No container was found matching "etcd"
	I0717 02:00:05.450582   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:05.450637   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:05.487503   71929 cri.go:89] found id: ""
	I0717 02:00:05.487533   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.487544   71929 logs.go:278] No container was found matching "coredns"
	I0717 02:00:05.487553   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:05.487634   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:05.522607   71929 cri.go:89] found id: ""
	I0717 02:00:05.522635   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.522650   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 02:00:05.522656   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:05.522703   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:05.558091   71929 cri.go:89] found id: ""
	I0717 02:00:05.558120   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.558131   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 02:00:05.558138   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:05.558192   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:05.594540   71929 cri.go:89] found id: ""
	I0717 02:00:05.594587   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.594598   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 02:00:05.594605   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:05.594668   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:05.631783   71929 cri.go:89] found id: ""
	I0717 02:00:05.631807   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.631818   71929 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:05.631825   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 02:00:05.631886   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 02:00:05.667494   71929 cri.go:89] found id: ""
	I0717 02:00:05.667523   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.667532   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 02:00:05.667543   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:05.667559   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:05.681348   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:05.681373   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 02:00:05.747143   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:00:05.747165   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:05.747176   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:05.829639   71929 logs.go:123] Gathering logs for container status ...
	I0717 02:00:05.829674   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:05.881984   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:05.882013   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:04.803913   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:07.302068   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:05.434869   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:07.435174   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:09.931879   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:08.435873   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:08.449840   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:08.449901   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:08.489613   71929 cri.go:89] found id: ""
	I0717 02:00:08.489663   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.489675   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 02:00:08.489684   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:08.489751   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:08.526604   71929 cri.go:89] found id: ""
	I0717 02:00:08.526635   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.526645   71929 logs.go:278] No container was found matching "etcd"
	I0717 02:00:08.526660   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:08.526717   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:08.563202   71929 cri.go:89] found id: ""
	I0717 02:00:08.563227   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.563234   71929 logs.go:278] No container was found matching "coredns"
	I0717 02:00:08.563240   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:08.563299   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:08.598336   71929 cri.go:89] found id: ""
	I0717 02:00:08.598365   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.598376   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 02:00:08.598383   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:08.598441   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:08.632626   71929 cri.go:89] found id: ""
	I0717 02:00:08.632660   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.632671   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 02:00:08.632678   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:08.632739   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:08.667951   71929 cri.go:89] found id: ""
	I0717 02:00:08.667977   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.667993   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 02:00:08.668001   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:08.668059   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:08.702106   71929 cri.go:89] found id: ""
	I0717 02:00:08.702135   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.702146   71929 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:08.702153   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 02:00:08.702212   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 02:00:08.733469   71929 cri.go:89] found id: ""
	I0717 02:00:08.733491   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.733499   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 02:00:08.733508   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:08.733518   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:08.787930   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:08.787966   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:08.802761   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:08.802795   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 02:00:08.878115   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:00:08.878138   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:08.878149   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:08.962509   71929 logs.go:123] Gathering logs for container status ...
	I0717 02:00:08.962543   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:11.503151   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:11.518019   71929 kubeadm.go:597] duration metric: took 4m3.576613508s to restartPrimaryControlPlane
	W0717 02:00:11.518087   71929 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 02:00:11.518113   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 02:00:11.970514   71929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:00:11.986794   71929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 02:00:11.997382   71929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 02:00:12.006789   71929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 02:00:12.006816   71929 kubeadm.go:157] found existing configuration files:
	
	I0717 02:00:12.006867   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 02:00:12.015864   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 02:00:12.015921   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 02:00:12.025239   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 02:00:12.034315   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 02:00:12.034373   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 02:00:12.043533   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 02:00:12.052344   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 02:00:12.052393   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 02:00:12.061290   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 02:00:12.070311   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 02:00:12.070375   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 02:00:12.080404   71929 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 02:00:12.318084   71929 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 02:00:09.303502   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:11.803893   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:11.933539   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:14.433949   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:13.804007   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:16.303079   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:16.932416   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:18.932721   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:18.303306   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:20.306811   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:22.803374   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:21.433157   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:23.433283   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:24.805822   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:27.301985   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:25.931740   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:27.934346   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:29.302199   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:31.302607   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:30.433033   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:32.434743   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:34.933166   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:33.802140   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:35.803338   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:36.933672   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:39.432879   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:38.302050   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:40.803322   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:41.932491   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:44.436201   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:43.302028   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:45.801979   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:47.303644   71146 pod_ready.go:81] duration metric: took 4m0.007411484s for pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace to be "Ready" ...
	E0717 02:00:47.303668   71146 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 02:00:47.303678   71146 pod_ready.go:38] duration metric: took 4m7.053721739s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 02:00:47.303694   71146 api_server.go:52] waiting for apiserver process to appear ...
	I0717 02:00:47.303725   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:47.303791   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:47.365247   71146 cri.go:89] found id: "ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:47.365272   71146 cri.go:89] found id: ""
	I0717 02:00:47.365279   71146 logs.go:276] 1 containers: [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509]
	I0717 02:00:47.365339   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.370201   71146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:47.370268   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:47.416627   71146 cri.go:89] found id: "b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:47.416652   71146 cri.go:89] found id: ""
	I0717 02:00:47.416663   71146 logs.go:276] 1 containers: [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787]
	I0717 02:00:47.416731   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.421295   71146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:47.421454   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:47.463532   71146 cri.go:89] found id: "110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:47.463556   71146 cri.go:89] found id: ""
	I0717 02:00:47.463564   71146 logs.go:276] 1 containers: [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783]
	I0717 02:00:47.463626   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.468291   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:47.468414   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:47.504328   71146 cri.go:89] found id: "211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:47.504354   71146 cri.go:89] found id: ""
	I0717 02:00:47.504362   71146 logs.go:276] 1 containers: [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745]
	I0717 02:00:47.504445   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.508821   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:47.508880   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:47.550970   71146 cri.go:89] found id: "0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:47.550996   71146 cri.go:89] found id: ""
	I0717 02:00:47.551006   71146 logs.go:276] 1 containers: [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de]
	I0717 02:00:47.551069   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.555974   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:47.556045   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:47.609884   71146 cri.go:89] found id: "5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:47.609903   71146 cri.go:89] found id: ""
	I0717 02:00:47.609910   71146 logs.go:276] 1 containers: [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060]
	I0717 02:00:47.609968   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.615544   71146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:47.615603   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:47.653071   71146 cri.go:89] found id: ""
	I0717 02:00:47.653099   71146 logs.go:276] 0 containers: []
	W0717 02:00:47.653110   71146 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:47.653117   71146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 02:00:47.653163   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 02:00:47.690462   71146 cri.go:89] found id: "7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:47.690485   71146 cri.go:89] found id: "51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:47.690490   71146 cri.go:89] found id: ""
	I0717 02:00:47.690498   71146 logs.go:276] 2 containers: [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20]
	I0717 02:00:47.690545   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.695196   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.699099   71146 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:47.699117   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 02:00:47.816750   71146 logs.go:123] Gathering logs for kube-apiserver [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509] ...
	I0717 02:00:47.816782   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:46.932764   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:49.432402   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:47.869306   71146 logs.go:123] Gathering logs for coredns [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783] ...
	I0717 02:00:47.869341   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:47.906717   71146 logs.go:123] Gathering logs for kube-proxy [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de] ...
	I0717 02:00:47.906755   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:47.944125   71146 logs.go:123] Gathering logs for storage-provisioner [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465] ...
	I0717 02:00:47.944152   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:47.978632   71146 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:47.978664   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:48.482628   71146 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:48.482660   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:48.538252   71146 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:48.538300   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:48.553011   71146 logs.go:123] Gathering logs for kube-controller-manager [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060] ...
	I0717 02:00:48.553038   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:48.607632   71146 logs.go:123] Gathering logs for storage-provisioner [51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20] ...
	I0717 02:00:48.607666   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:48.646122   71146 logs.go:123] Gathering logs for container status ...
	I0717 02:00:48.646151   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:48.689948   71146 logs.go:123] Gathering logs for etcd [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787] ...
	I0717 02:00:48.689980   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:48.738285   71146 logs.go:123] Gathering logs for kube-scheduler [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745] ...
	I0717 02:00:48.738334   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:51.290996   71146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:51.308850   71146 api_server.go:72] duration metric: took 4m18.27461618s to wait for apiserver process to appear ...
	I0717 02:00:51.308873   71146 api_server.go:88] waiting for apiserver healthz status ...
	I0717 02:00:51.308907   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:51.308967   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:51.350827   71146 cri.go:89] found id: "ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:51.350857   71146 cri.go:89] found id: ""
	I0717 02:00:51.350866   71146 logs.go:276] 1 containers: [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509]
	I0717 02:00:51.350930   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.355308   71146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:51.355369   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:51.393804   71146 cri.go:89] found id: "b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:51.393831   71146 cri.go:89] found id: ""
	I0717 02:00:51.393840   71146 logs.go:276] 1 containers: [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787]
	I0717 02:00:51.393897   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.398144   71146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:51.398201   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:51.437974   71146 cri.go:89] found id: "110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:51.437991   71146 cri.go:89] found id: ""
	I0717 02:00:51.437998   71146 logs.go:276] 1 containers: [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783]
	I0717 02:00:51.438044   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.442318   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:51.442382   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:51.478462   71146 cri.go:89] found id: "211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:51.478481   71146 cri.go:89] found id: ""
	I0717 02:00:51.478489   71146 logs.go:276] 1 containers: [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745]
	I0717 02:00:51.478534   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.482624   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:51.482672   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:51.526089   71146 cri.go:89] found id: "0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:51.526114   71146 cri.go:89] found id: ""
	I0717 02:00:51.526123   71146 logs.go:276] 1 containers: [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de]
	I0717 02:00:51.526170   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.530855   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:51.530923   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:51.568875   71146 cri.go:89] found id: "5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:51.568899   71146 cri.go:89] found id: ""
	I0717 02:00:51.568908   71146 logs.go:276] 1 containers: [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060]
	I0717 02:00:51.568972   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.573300   71146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:51.573369   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:51.615775   71146 cri.go:89] found id: ""
	I0717 02:00:51.615800   71146 logs.go:276] 0 containers: []
	W0717 02:00:51.615809   71146 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:51.615815   71146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 02:00:51.615876   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 02:00:51.658100   71146 cri.go:89] found id: "7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:51.658124   71146 cri.go:89] found id: "51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:51.658130   71146 cri.go:89] found id: ""
	I0717 02:00:51.658138   71146 logs.go:276] 2 containers: [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20]
	I0717 02:00:51.658183   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.663030   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.667348   71146 logs.go:123] Gathering logs for kube-apiserver [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509] ...
	I0717 02:00:51.667372   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:51.715502   71146 logs.go:123] Gathering logs for etcd [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787] ...
	I0717 02:00:51.715534   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:51.763431   71146 logs.go:123] Gathering logs for kube-proxy [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de] ...
	I0717 02:00:51.763457   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:51.805523   71146 logs.go:123] Gathering logs for kube-controller-manager [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060] ...
	I0717 02:00:51.805553   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:51.859660   71146 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:51.859692   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 02:00:51.963831   71146 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:51.963858   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:51.978152   71146 logs.go:123] Gathering logs for coredns [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783] ...
	I0717 02:00:51.978179   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:52.023897   71146 logs.go:123] Gathering logs for kube-scheduler [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745] ...
	I0717 02:00:52.023926   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:52.062193   71146 logs.go:123] Gathering logs for storage-provisioner [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465] ...
	I0717 02:00:52.062218   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:52.098487   71146 logs.go:123] Gathering logs for storage-provisioner [51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20] ...
	I0717 02:00:52.098518   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:52.135733   71146 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:52.135758   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:52.562245   71146 logs.go:123] Gathering logs for container status ...
	I0717 02:00:52.562279   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:52.624258   71146 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:52.624288   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:51.434060   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:53.933730   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:55.176270   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 02:00:55.180760   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 200:
	ok
	I0717 02:00:55.181928   71146 api_server.go:141] control plane version: v1.30.2
	I0717 02:00:55.181947   71146 api_server.go:131] duration metric: took 3.873068874s to wait for apiserver health ...
	I0717 02:00:55.181955   71146 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 02:00:55.181975   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:55.182017   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:55.218028   71146 cri.go:89] found id: "ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:55.218059   71146 cri.go:89] found id: ""
	I0717 02:00:55.218068   71146 logs.go:276] 1 containers: [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509]
	I0717 02:00:55.218125   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.222841   71146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:55.222911   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:55.265613   71146 cri.go:89] found id: "b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:55.265638   71146 cri.go:89] found id: ""
	I0717 02:00:55.265647   71146 logs.go:276] 1 containers: [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787]
	I0717 02:00:55.265699   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.269866   71146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:55.269923   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:55.306363   71146 cri.go:89] found id: "110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:55.306390   71146 cri.go:89] found id: ""
	I0717 02:00:55.306400   71146 logs.go:276] 1 containers: [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783]
	I0717 02:00:55.306457   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.310843   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:55.310901   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:55.354417   71146 cri.go:89] found id: "211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:55.354439   71146 cri.go:89] found id: ""
	I0717 02:00:55.354449   71146 logs.go:276] 1 containers: [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745]
	I0717 02:00:55.354503   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.358988   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:55.359038   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:55.396457   71146 cri.go:89] found id: "0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:55.396480   71146 cri.go:89] found id: ""
	I0717 02:00:55.396488   71146 logs.go:276] 1 containers: [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de]
	I0717 02:00:55.396532   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.401185   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:55.401244   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:55.438249   71146 cri.go:89] found id: "5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:55.438276   71146 cri.go:89] found id: ""
	I0717 02:00:55.438286   71146 logs.go:276] 1 containers: [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060]
	I0717 02:00:55.438344   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.442967   71146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:55.443048   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:55.484173   71146 cri.go:89] found id: ""
	I0717 02:00:55.484197   71146 logs.go:276] 0 containers: []
	W0717 02:00:55.484205   71146 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:55.484210   71146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 02:00:55.484288   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 02:00:55.525757   71146 cri.go:89] found id: "7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:55.525780   71146 cri.go:89] found id: "51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:55.525784   71146 cri.go:89] found id: ""
	I0717 02:00:55.525790   71146 logs.go:276] 2 containers: [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20]
	I0717 02:00:55.525842   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.530253   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.534253   71146 logs.go:123] Gathering logs for etcd [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787] ...
	I0717 02:00:55.534275   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:55.578993   71146 logs.go:123] Gathering logs for kube-proxy [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de] ...
	I0717 02:00:55.579018   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:55.622746   71146 logs.go:123] Gathering logs for storage-provisioner [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465] ...
	I0717 02:00:55.622771   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:55.660900   71146 logs.go:123] Gathering logs for storage-provisioner [51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20] ...
	I0717 02:00:55.660931   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:55.709803   71146 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:55.709833   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:56.092339   71146 logs.go:123] Gathering logs for kube-scheduler [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745] ...
	I0717 02:00:56.092390   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:56.130951   71146 logs.go:123] Gathering logs for kube-controller-manager [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060] ...
	I0717 02:00:56.130976   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:56.186113   71146 logs.go:123] Gathering logs for container status ...
	I0717 02:00:56.186152   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:56.229794   71146 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:56.229839   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:56.285798   71146 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:56.285845   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:56.300391   71146 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:56.300421   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 02:00:56.425621   71146 logs.go:123] Gathering logs for kube-apiserver [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509] ...
	I0717 02:00:56.425653   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:56.478853   71146 logs.go:123] Gathering logs for coredns [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783] ...
	I0717 02:00:56.478882   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:59.026000   71146 system_pods.go:59] 8 kube-system pods found
	I0717 02:00:59.026028   71146 system_pods.go:61] "coredns-7db6d8ff4d-wcw97" [0dd50538-f54d-43f1-bd8a-b9d3131c13f7] Running
	I0717 02:00:59.026033   71146 system_pods.go:61] "etcd-embed-certs-940222" [80a0a87b-4b27-4940-b86b-6fda4a9c5168] Running
	I0717 02:00:59.026036   71146 system_pods.go:61] "kube-apiserver-embed-certs-940222" [566417b4-efef-46b2-8826-e15b8559e35f] Running
	I0717 02:00:59.026039   71146 system_pods.go:61] "kube-controller-manager-embed-certs-940222" [0ec7f574-5ec3-401c-85a9-b9a9f6b7b979] Running
	I0717 02:00:59.026042   71146 system_pods.go:61] "kube-proxy-l58xk" [feae4e89-4900-4399-bd06-7d179280667d] Running
	I0717 02:00:59.026045   71146 system_pods.go:61] "kube-scheduler-embed-certs-940222" [d0e73061-6d3b-41fc-b2ed-e8c45e204d6a] Running
	I0717 02:00:59.026051   71146 system_pods.go:61] "metrics-server-569cc877fc-rhp7b" [07ffb1fa-240e-4c40-9ce4-93a1b51e179b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 02:00:59.026054   71146 system_pods.go:61] "storage-provisioner" [35aab5a5-6e1b-4572-aabe-a73fb1632252] Running
	I0717 02:00:59.026062   71146 system_pods.go:74] duration metric: took 3.844102201s to wait for pod list to return data ...
	I0717 02:00:59.026069   71146 default_sa.go:34] waiting for default service account to be created ...
	I0717 02:00:59.028810   71146 default_sa.go:45] found service account: "default"
	I0717 02:00:59.028831   71146 default_sa.go:55] duration metric: took 2.756364ms for default service account to be created ...
	I0717 02:00:59.028838   71146 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 02:00:59.036427   71146 system_pods.go:86] 8 kube-system pods found
	I0717 02:00:59.036457   71146 system_pods.go:89] "coredns-7db6d8ff4d-wcw97" [0dd50538-f54d-43f1-bd8a-b9d3131c13f7] Running
	I0717 02:00:59.036466   71146 system_pods.go:89] "etcd-embed-certs-940222" [80a0a87b-4b27-4940-b86b-6fda4a9c5168] Running
	I0717 02:00:59.036474   71146 system_pods.go:89] "kube-apiserver-embed-certs-940222" [566417b4-efef-46b2-8826-e15b8559e35f] Running
	I0717 02:00:59.036482   71146 system_pods.go:89] "kube-controller-manager-embed-certs-940222" [0ec7f574-5ec3-401c-85a9-b9a9f6b7b979] Running
	I0717 02:00:59.036489   71146 system_pods.go:89] "kube-proxy-l58xk" [feae4e89-4900-4399-bd06-7d179280667d] Running
	I0717 02:00:59.036499   71146 system_pods.go:89] "kube-scheduler-embed-certs-940222" [d0e73061-6d3b-41fc-b2ed-e8c45e204d6a] Running
	I0717 02:00:59.036509   71146 system_pods.go:89] "metrics-server-569cc877fc-rhp7b" [07ffb1fa-240e-4c40-9ce4-93a1b51e179b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 02:00:59.036519   71146 system_pods.go:89] "storage-provisioner" [35aab5a5-6e1b-4572-aabe-a73fb1632252] Running
	I0717 02:00:59.036532   71146 system_pods.go:126] duration metric: took 7.688074ms to wait for k8s-apps to be running ...
	I0717 02:00:59.036542   71146 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 02:00:59.036594   71146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:00:59.052023   71146 system_svc.go:56] duration metric: took 15.474441ms WaitForService to wait for kubelet
	I0717 02:00:59.052049   71146 kubeadm.go:582] duration metric: took 4m26.017816269s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 02:00:59.052073   71146 node_conditions.go:102] verifying NodePressure condition ...
	I0717 02:00:59.054763   71146 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 02:00:59.054784   71146 node_conditions.go:123] node cpu capacity is 2
	I0717 02:00:59.054795   71146 node_conditions.go:105] duration metric: took 2.714349ms to run NodePressure ...
	I0717 02:00:59.054805   71146 start.go:241] waiting for startup goroutines ...
	I0717 02:00:59.054811   71146 start.go:246] waiting for cluster config update ...
	I0717 02:00:59.054824   71146 start.go:255] writing updated cluster config ...
	I0717 02:00:59.055069   71146 ssh_runner.go:195] Run: rm -f paused
	I0717 02:00:59.101243   71146 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 02:00:59.103341   71146 out.go:177] * Done! kubectl is now configured to use "embed-certs-940222" cluster and "default" namespace by default
	I0717 02:00:56.432853   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:58.433589   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:00.932978   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:02.933289   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:05.433003   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:07.433470   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:09.433795   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:11.933112   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:14.433274   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:16.932102   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:18.932904   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:20.933023   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:23.433153   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:24.926132   71603 pod_ready.go:81] duration metric: took 4m0.000155151s for pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace to be "Ready" ...
	E0717 02:01:24.926165   71603 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 02:01:24.926185   71603 pod_ready.go:38] duration metric: took 4m39.916322674s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 02:01:24.926214   71603 kubeadm.go:597] duration metric: took 5m27.432375382s to restartPrimaryControlPlane
	W0717 02:01:24.926303   71603 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 02:01:24.926339   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 02:01:51.790820   71603 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.86445583s)
	I0717 02:01:51.790901   71603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:01:51.812968   71603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 02:01:51.835689   71603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 02:01:51.848832   71603 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 02:01:51.848859   71603 kubeadm.go:157] found existing configuration files:
	
	I0717 02:01:51.848911   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 02:01:51.876554   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 02:01:51.876620   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 02:01:51.891580   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 02:01:51.901279   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 02:01:51.901328   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 02:01:51.910994   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 02:01:51.920959   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 02:01:51.921020   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 02:01:51.931039   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 02:01:51.940496   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 02:01:51.940549   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 02:01:51.950455   71603 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 02:01:51.999712   71603 kubeadm.go:310] W0717 02:01:51.966911    3034 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 02:01:52.000573   71603 kubeadm.go:310] W0717 02:01:51.967749    3034 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 02:01:52.132406   71603 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 02:02:01.065590   71603 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0717 02:02:01.065670   71603 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 02:02:01.065761   71603 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 02:02:01.065909   71603 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 02:02:01.066049   71603 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0717 02:02:01.066124   71603 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 02:02:01.067867   71603 out.go:204]   - Generating certificates and keys ...
	I0717 02:02:01.067966   71603 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 02:02:01.068043   71603 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 02:02:01.068139   71603 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 02:02:01.068210   71603 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 02:02:01.068310   71603 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 02:02:01.068391   71603 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 02:02:01.068471   71603 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 02:02:01.068523   71603 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 02:02:01.068585   71603 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 02:02:01.068650   71603 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 02:02:01.068683   71603 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 02:02:01.068752   71603 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 02:02:01.068822   71603 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 02:02:01.068906   71603 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 02:02:01.068970   71603 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 02:02:01.069057   71603 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 02:02:01.069157   71603 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 02:02:01.069271   71603 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 02:02:01.069369   71603 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 02:02:01.070772   71603 out.go:204]   - Booting up control plane ...
	I0717 02:02:01.070883   71603 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 02:02:01.070981   71603 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 02:02:01.071088   71603 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 02:02:01.071206   71603 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 02:02:01.071311   71603 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 02:02:01.071365   71603 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 02:02:01.071497   71603 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 02:02:01.071557   71603 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 02:02:01.071608   71603 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.044041ms
	I0717 02:02:01.071663   71603 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 02:02:01.071725   71603 kubeadm.go:310] [api-check] The API server is healthy after 5.501034024s
	I0717 02:02:01.071823   71603 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 02:02:01.071926   71603 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 02:02:01.071975   71603 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 02:02:01.072168   71603 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-391501 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 02:02:01.072238   71603 kubeadm.go:310] [bootstrap-token] Using token: jhnlja.0tmcz1ce1lkie6op
	I0717 02:02:01.073965   71603 out.go:204]   - Configuring RBAC rules ...
	I0717 02:02:01.074091   71603 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 02:02:01.074223   71603 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 02:02:01.074390   71603 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 02:02:01.074597   71603 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 02:02:01.074766   71603 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 02:02:01.074887   71603 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 02:02:01.075058   71603 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 02:02:01.075126   71603 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 02:02:01.075195   71603 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 02:02:01.075204   71603 kubeadm.go:310] 
	I0717 02:02:01.075255   71603 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 02:02:01.075262   71603 kubeadm.go:310] 
	I0717 02:02:01.075372   71603 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 02:02:01.075386   71603 kubeadm.go:310] 
	I0717 02:02:01.075419   71603 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 02:02:01.075498   71603 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 02:02:01.075582   71603 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 02:02:01.075604   71603 kubeadm.go:310] 
	I0717 02:02:01.075687   71603 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 02:02:01.075697   71603 kubeadm.go:310] 
	I0717 02:02:01.075759   71603 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 02:02:01.075771   71603 kubeadm.go:310] 
	I0717 02:02:01.075834   71603 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 02:02:01.075936   71603 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 02:02:01.076034   71603 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 02:02:01.076043   71603 kubeadm.go:310] 
	I0717 02:02:01.076142   71603 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 02:02:01.076248   71603 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 02:02:01.076256   71603 kubeadm.go:310] 
	I0717 02:02:01.076379   71603 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jhnlja.0tmcz1ce1lkie6op \
	I0717 02:02:01.076541   71603 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c106fa53fe39228066a4d6d0a3d1523262a277fcc4b4de2aef480ed92843f134 \
	I0717 02:02:01.076582   71603 kubeadm.go:310] 	--control-plane 
	I0717 02:02:01.076600   71603 kubeadm.go:310] 
	I0717 02:02:01.076708   71603 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 02:02:01.076719   71603 kubeadm.go:310] 
	I0717 02:02:01.076819   71603 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jhnlja.0tmcz1ce1lkie6op \
	I0717 02:02:01.076955   71603 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c106fa53fe39228066a4d6d0a3d1523262a277fcc4b4de2aef480ed92843f134 
	I0717 02:02:01.076972   71603 cni.go:84] Creating CNI manager for ""
	I0717 02:02:01.076981   71603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 02:02:01.078801   71603 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 02:02:01.080151   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 02:02:01.093210   71603 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 02:02:01.116656   71603 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 02:02:01.116712   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:01.116756   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-391501 minikube.k8s.io/updated_at=2024_07_17T02_02_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185 minikube.k8s.io/name=no-preload-391501 minikube.k8s.io/primary=true
	I0717 02:02:01.314407   71603 ops.go:34] apiserver oom_adj: -16
	I0717 02:02:01.314467   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:01.814693   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:02.315439   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:02.814676   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:03.314734   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:03.814702   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:04.315450   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:04.815112   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:05.315144   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:05.814712   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:05.921356   71603 kubeadm.go:1113] duration metric: took 4.80469441s to wait for elevateKubeSystemPrivileges
	I0717 02:02:05.921398   71603 kubeadm.go:394] duration metric: took 6m8.48278775s to StartCluster
	I0717 02:02:05.921420   71603 settings.go:142] acquiring lock: {Name:mk284e98550e98148329d8d8958a44beebf5635c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 02:02:05.921508   71603 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 02:02:05.923844   71603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 02:02:05.924156   71603 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 02:02:05.924254   71603 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 02:02:05.924328   71603 addons.go:69] Setting storage-provisioner=true in profile "no-preload-391501"
	I0717 02:02:05.924357   71603 addons.go:234] Setting addon storage-provisioner=true in "no-preload-391501"
	I0717 02:02:05.924355   71603 addons.go:69] Setting default-storageclass=true in profile "no-preload-391501"
	I0717 02:02:05.924364   71603 addons.go:69] Setting metrics-server=true in profile "no-preload-391501"
	I0717 02:02:05.924391   71603 config.go:182] Loaded profile config "no-preload-391501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 02:02:05.924398   71603 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-391501"
	I0717 02:02:05.924404   71603 addons.go:234] Setting addon metrics-server=true in "no-preload-391501"
	W0717 02:02:05.924414   71603 addons.go:243] addon metrics-server should already be in state true
	W0717 02:02:05.924368   71603 addons.go:243] addon storage-provisioner should already be in state true
	I0717 02:02:05.924447   71603 host.go:66] Checking if "no-preload-391501" exists ...
	I0717 02:02:05.924460   71603 host.go:66] Checking if "no-preload-391501" exists ...
	I0717 02:02:05.924801   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.924827   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.924834   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.924850   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.924874   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.924912   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.926034   71603 out.go:177] * Verifying Kubernetes components...
	I0717 02:02:05.927316   71603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 02:02:05.941502   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43181
	I0717 02:02:05.941716   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37257
	I0717 02:02:05.941969   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.942299   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.942492   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.942509   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.942873   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.942902   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.942933   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.943175   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 02:02:05.943250   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.943555   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43291
	I0717 02:02:05.943829   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.943862   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.943996   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.944648   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.944672   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.945037   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.945577   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.945613   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.947058   71603 addons.go:234] Setting addon default-storageclass=true in "no-preload-391501"
	W0717 02:02:05.947076   71603 addons.go:243] addon default-storageclass should already be in state true
	I0717 02:02:05.947103   71603 host.go:66] Checking if "no-preload-391501" exists ...
	I0717 02:02:05.947419   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.947447   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.960183   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44589
	I0717 02:02:05.960662   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.961220   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.961249   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.961532   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.961777   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 02:02:05.962531   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40785
	I0717 02:02:05.963063   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.964115   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 02:02:05.964120   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.964146   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.965195   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.965777   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42381
	I0717 02:02:05.965802   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.965845   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.966114   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.966615   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.966635   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.966706   71603 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 02:02:05.967037   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.967228   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 02:02:05.968069   71603 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 02:02:05.968101   71603 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 02:02:05.968121   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 02:02:05.969421   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 02:02:05.971055   71603 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 02:02:05.972019   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.972494   71603 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 02:02:05.972515   71603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 02:02:05.972533   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 02:02:05.972622   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 02:02:05.972646   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.973122   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 02:02:05.973289   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 02:02:05.973415   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 02:02:05.973638   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 02:02:05.975702   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.976091   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 02:02:05.976110   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.976376   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 02:02:05.976553   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 02:02:05.976717   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 02:02:05.976866   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 02:02:05.983061   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44967
	I0717 02:02:05.983397   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.983851   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.983867   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.984150   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.984319   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 02:02:05.985757   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 02:02:05.985973   71603 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 02:02:05.985985   71603 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 02:02:05.986000   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 02:02:05.989238   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.989627   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 02:02:05.989647   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.989890   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 02:02:05.990056   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 02:02:05.990212   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 02:02:05.990412   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 02:02:06.238449   71603 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 02:02:06.272217   71603 node_ready.go:35] waiting up to 6m0s for node "no-preload-391501" to be "Ready" ...
	I0717 02:02:06.281012   71603 node_ready.go:49] node "no-preload-391501" has status "Ready":"True"
	I0717 02:02:06.281031   71603 node_ready.go:38] duration metric: took 8.787329ms for node "no-preload-391501" to be "Ready" ...
	I0717 02:02:06.281040   71603 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 02:02:06.297250   71603 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:06.386971   71603 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 02:02:06.386995   71603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 02:02:06.439822   71603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 02:02:06.460362   71603 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 02:02:06.460391   71603 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 02:02:06.468640   71603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 02:02:06.551454   71603 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 02:02:06.551482   71603 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 02:02:06.727518   71603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 02:02:07.338701   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.338778   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.338874   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.338900   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.339119   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.339217   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.339230   71603 main.go:141] libmachine: (no-preload-391501) DBG | Closing plugin on server side
	I0717 02:02:07.339273   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.339291   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.339301   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.339314   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.339240   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.339386   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.339575   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.339592   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.339648   71603 main.go:141] libmachine: (no-preload-391501) DBG | Closing plugin on server side
	I0717 02:02:07.339711   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.339736   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.357948   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.357966   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.358197   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.358212   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.694612   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.694690   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.695028   71603 main.go:141] libmachine: (no-preload-391501) DBG | Closing plugin on server side
	I0717 02:02:07.695109   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.695122   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.695148   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.695160   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.695404   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.695421   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.695432   71603 addons.go:475] Verifying addon metrics-server=true in "no-preload-391501"
	I0717 02:02:07.698298   71603 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 02:02:08.622411   71929 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 02:02:08.622531   71929 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 02:02:08.624111   71929 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 02:02:08.624168   71929 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 02:02:08.624265   71929 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 02:02:08.624391   71929 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 02:02:08.624526   71929 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 02:02:08.624604   71929 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 02:02:08.626394   71929 out.go:204]   - Generating certificates and keys ...
	I0717 02:02:08.626478   71929 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 02:02:08.626574   71929 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 02:02:08.626657   71929 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 02:02:08.626735   71929 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 02:02:08.626830   71929 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 02:02:08.626909   71929 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 02:02:08.627001   71929 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 02:02:08.627095   71929 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 02:02:08.627203   71929 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 02:02:08.627325   71929 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 02:02:08.627392   71929 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 02:02:08.627469   71929 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 02:02:08.627573   71929 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 02:02:08.627663   71929 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 02:02:08.627753   71929 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 02:02:08.627836   71929 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 02:02:08.627997   71929 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 02:02:08.628107   71929 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 02:02:08.628179   71929 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 02:02:08.628272   71929 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 02:02:08.630262   71929 out.go:204]   - Booting up control plane ...
	I0717 02:02:08.630372   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 02:02:08.630477   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 02:02:08.630594   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 02:02:08.630729   71929 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 02:02:08.630960   71929 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 02:02:08.631020   71929 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 02:02:08.631099   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.631293   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.631394   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.631648   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.631748   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.631925   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.632050   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.632253   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.632327   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.632528   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.632546   71929 kubeadm.go:310] 
	I0717 02:02:08.632611   71929 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 02:02:08.632671   71929 kubeadm.go:310] 		timed out waiting for the condition
	I0717 02:02:08.632689   71929 kubeadm.go:310] 
	I0717 02:02:08.632729   71929 kubeadm.go:310] 	This error is likely caused by:
	I0717 02:02:08.632772   71929 kubeadm.go:310] 		- The kubelet is not running
	I0717 02:02:08.632902   71929 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 02:02:08.632914   71929 kubeadm.go:310] 
	I0717 02:02:08.633001   71929 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 02:02:08.633030   71929 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 02:02:08.633075   71929 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 02:02:08.633092   71929 kubeadm.go:310] 
	I0717 02:02:08.633204   71929 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 02:02:08.633281   71929 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 02:02:08.633306   71929 kubeadm.go:310] 
	I0717 02:02:08.633450   71929 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 02:02:08.633535   71929 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 02:02:08.633597   71929 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 02:02:08.633668   71929 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 02:02:08.633697   71929 kubeadm.go:310] 
	W0717 02:02:08.633780   71929 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0717 02:02:08.633821   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 02:02:09.101394   71929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:02:09.119918   71929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 02:02:09.130974   71929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 02:02:09.131002   71929 kubeadm.go:157] found existing configuration files:
	
	I0717 02:02:09.131046   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 02:02:09.142720   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 02:02:09.142790   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 02:02:09.154990   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 02:02:09.166317   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 02:02:09.166379   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 02:02:09.176756   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 02:02:09.186639   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 02:02:09.186697   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 02:02:09.196778   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 02:02:09.206420   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 02:02:09.206469   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 02:02:09.216325   71929 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 02:02:09.293311   71929 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 02:02:09.293457   71929 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 02:02:09.442386   71929 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 02:02:09.442594   71929 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 02:02:09.442736   71929 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 02:02:09.618387   71929 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 02:02:07.699645   71603 addons.go:510] duration metric: took 1.775390854s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 02:02:08.305410   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace has status "Ready":"False"
	I0717 02:02:09.620394   71929 out.go:204]   - Generating certificates and keys ...
	I0717 02:02:09.620496   71929 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 02:02:09.620593   71929 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 02:02:09.620691   71929 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 02:02:09.620791   71929 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 02:02:09.620909   71929 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 02:02:09.621004   71929 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 02:02:09.621117   71929 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 02:02:09.621364   71929 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 02:02:09.621778   71929 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 02:02:09.622072   71929 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 02:02:09.622135   71929 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 02:02:09.622225   71929 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 02:02:09.990964   71929 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 02:02:10.434990   71929 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 02:02:10.579785   71929 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 02:02:10.723319   71929 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 02:02:10.746923   71929 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 02:02:10.748370   71929 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 02:02:10.748460   71929 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 02:02:10.888855   71929 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 02:02:10.890727   71929 out.go:204]   - Booting up control plane ...
	I0717 02:02:10.890860   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 02:02:10.893530   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 02:02:10.894934   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 02:02:10.896825   71929 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 02:02:10.899127   71929 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 02:02:10.806868   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace has status "Ready":"False"
	I0717 02:02:12.804727   71603 pod_ready.go:92] pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:12.804754   71603 pod_ready.go:81] duration metric: took 6.507471417s for pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:12.804763   71603 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-tn5jv" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:12.812383   71603 pod_ready.go:92] pod "coredns-5cfdc65f69-tn5jv" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:12.812408   71603 pod_ready.go:81] duration metric: took 7.638012ms for pod "coredns-5cfdc65f69-tn5jv" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:12.812420   71603 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.320241   71603 pod_ready.go:92] pod "etcd-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:13.320263   71603 pod_ready.go:81] duration metric: took 507.836128ms for pod "etcd-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.320285   71603 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.326308   71603 pod_ready.go:92] pod "kube-apiserver-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:13.326332   71603 pod_ready.go:81] duration metric: took 6.041207ms for pod "kube-apiserver-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.326341   71603 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.331310   71603 pod_ready.go:92] pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:13.331338   71603 pod_ready.go:81] duration metric: took 4.988207ms for pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.331360   71603 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gl7th" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.602634   71603 pod_ready.go:92] pod "kube-proxy-gl7th" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:13.602677   71603 pod_ready.go:81] duration metric: took 271.310877ms for pod "kube-proxy-gl7th" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.602687   71603 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:14.002256   71603 pod_ready.go:92] pod "kube-scheduler-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:14.002282   71603 pod_ready.go:81] duration metric: took 399.588324ms for pod "kube-scheduler-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:14.002290   71603 pod_ready.go:38] duration metric: took 7.721240931s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 02:02:14.002306   71603 api_server.go:52] waiting for apiserver process to appear ...
	I0717 02:02:14.002355   71603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:02:14.016981   71603 api_server.go:72] duration metric: took 8.092789001s to wait for apiserver process to appear ...
	I0717 02:02:14.017007   71603 api_server.go:88] waiting for apiserver healthz status ...
	I0717 02:02:14.017026   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 02:02:14.022008   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 200:
	ok
	I0717 02:02:14.022992   71603 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 02:02:14.023010   71603 api_server.go:131] duration metric: took 5.997297ms to wait for apiserver health ...
	I0717 02:02:14.023016   71603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 02:02:14.204777   71603 system_pods.go:59] 9 kube-system pods found
	I0717 02:02:14.204806   71603 system_pods.go:61] "coredns-5cfdc65f69-5lstd" [71b74210-7395-4a48-8e1b-b49fb2faea43] Running
	I0717 02:02:14.204811   71603 system_pods.go:61] "coredns-5cfdc65f69-tn5jv" [482276d3-bfe2-4538-9dfe-a2a87a02182c] Running
	I0717 02:02:14.204816   71603 system_pods.go:61] "etcd-no-preload-391501" [c13d6752-3152-45e7-b2b9-a5275a4b42c5] Running
	I0717 02:02:14.204819   71603 system_pods.go:61] "kube-apiserver-no-preload-391501" [ba1d9920-dcaa-48d2-887b-f476d874d9ea] Running
	I0717 02:02:14.204823   71603 system_pods.go:61] "kube-controller-manager-no-preload-391501" [5e1e6aec-31b9-4b7c-a59b-f39a73b2e9a3] Running
	I0717 02:02:14.204826   71603 system_pods.go:61] "kube-proxy-gl7th" [320d9fae-f5b8-47bd-afc0-88e07e23157a] Running
	I0717 02:02:14.204829   71603 system_pods.go:61] "kube-scheduler-no-preload-391501" [a091b866-df88-4b9b-8893-bc6022704680] Running
	I0717 02:02:14.204836   71603 system_pods.go:61] "metrics-server-78fcd8795b-tnrht" [af70d47e-8e45-4e5d-bceb-e01a6c1851ff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 02:02:14.204839   71603 system_pods.go:61] "storage-provisioner" [742baa9b-d48e-4be9-8c33-64d42e1ff169] Running
	I0717 02:02:14.204847   71603 system_pods.go:74] duration metric: took 181.825073ms to wait for pod list to return data ...
	I0717 02:02:14.204854   71603 default_sa.go:34] waiting for default service account to be created ...
	I0717 02:02:14.402964   71603 default_sa.go:45] found service account: "default"
	I0717 02:02:14.402992   71603 default_sa.go:55] duration metric: took 198.131224ms for default service account to be created ...
	I0717 02:02:14.403005   71603 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 02:02:14.606371   71603 system_pods.go:86] 9 kube-system pods found
	I0717 02:02:14.606408   71603 system_pods.go:89] "coredns-5cfdc65f69-5lstd" [71b74210-7395-4a48-8e1b-b49fb2faea43] Running
	I0717 02:02:14.606418   71603 system_pods.go:89] "coredns-5cfdc65f69-tn5jv" [482276d3-bfe2-4538-9dfe-a2a87a02182c] Running
	I0717 02:02:14.606424   71603 system_pods.go:89] "etcd-no-preload-391501" [c13d6752-3152-45e7-b2b9-a5275a4b42c5] Running
	I0717 02:02:14.606430   71603 system_pods.go:89] "kube-apiserver-no-preload-391501" [ba1d9920-dcaa-48d2-887b-f476d874d9ea] Running
	I0717 02:02:14.606438   71603 system_pods.go:89] "kube-controller-manager-no-preload-391501" [5e1e6aec-31b9-4b7c-a59b-f39a73b2e9a3] Running
	I0717 02:02:14.606444   71603 system_pods.go:89] "kube-proxy-gl7th" [320d9fae-f5b8-47bd-afc0-88e07e23157a] Running
	I0717 02:02:14.606450   71603 system_pods.go:89] "kube-scheduler-no-preload-391501" [a091b866-df88-4b9b-8893-bc6022704680] Running
	I0717 02:02:14.606461   71603 system_pods.go:89] "metrics-server-78fcd8795b-tnrht" [af70d47e-8e45-4e5d-bceb-e01a6c1851ff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 02:02:14.606474   71603 system_pods.go:89] "storage-provisioner" [742baa9b-d48e-4be9-8c33-64d42e1ff169] Running
	I0717 02:02:14.606486   71603 system_pods.go:126] duration metric: took 203.473728ms to wait for k8s-apps to be running ...
	I0717 02:02:14.606497   71603 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 02:02:14.606568   71603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:02:14.622178   71603 system_svc.go:56] duration metric: took 15.671962ms WaitForService to wait for kubelet
	I0717 02:02:14.622211   71603 kubeadm.go:582] duration metric: took 8.698021688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 02:02:14.622234   71603 node_conditions.go:102] verifying NodePressure condition ...
	I0717 02:02:14.802282   71603 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 02:02:14.802309   71603 node_conditions.go:123] node cpu capacity is 2
	I0717 02:02:14.802319   71603 node_conditions.go:105] duration metric: took 180.080727ms to run NodePressure ...
	I0717 02:02:14.802330   71603 start.go:241] waiting for startup goroutines ...
	I0717 02:02:14.802337   71603 start.go:246] waiting for cluster config update ...
	I0717 02:02:14.802345   71603 start.go:255] writing updated cluster config ...
	I0717 02:02:14.802613   71603 ssh_runner.go:195] Run: rm -f paused
	I0717 02:02:14.848725   71603 start.go:600] kubectl: 1.30.2, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0717 02:02:14.850965   71603 out.go:177] * Done! kubectl is now configured to use "no-preload-391501" cluster and "default" namespace by default
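	At this point the log above reports the no-preload-391501 start as complete and kubectl configured against that cluster. Purely as a sketch for someone reproducing this run (not part of the captured output), the kube-system pods enumerated earlier could be re-checked with the same --context convention used elsewhere in this report:
	  kubectl --context no-preload-391501 get pods -n kube-system
	  kubectl --context no-preload-391501 get nodes -o wide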
	I0717 02:02:50.900829   71929 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 02:02:50.901350   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:50.901626   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:55.902558   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:55.902805   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:03:05.903753   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:03:05.904033   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:03:25.905383   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:03:25.905597   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:04:05.906576   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:04:05.906960   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:04:05.906992   71929 kubeadm.go:310] 
	I0717 02:04:05.907049   71929 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 02:04:05.907133   71929 kubeadm.go:310] 		timed out waiting for the condition
	I0717 02:04:05.907182   71929 kubeadm.go:310] 
	I0717 02:04:05.907252   71929 kubeadm.go:310] 	This error is likely caused by:
	I0717 02:04:05.907339   71929 kubeadm.go:310] 		- The kubelet is not running
	I0717 02:04:05.907516   71929 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 02:04:05.907529   71929 kubeadm.go:310] 
	I0717 02:04:05.907661   71929 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 02:04:05.907699   71929 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 02:04:05.907743   71929 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 02:04:05.907751   71929 kubeadm.go:310] 
	I0717 02:04:05.907907   71929 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 02:04:05.908043   71929 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 02:04:05.908053   71929 kubeadm.go:310] 
	I0717 02:04:05.908221   71929 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 02:04:05.908435   71929 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 02:04:05.908619   71929 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 02:04:05.908738   71929 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 02:04:05.908788   71929 kubeadm.go:310] 
	I0717 02:04:05.909079   71929 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 02:04:05.909286   71929 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 02:04:05.909452   71929 kubeadm.go:394] duration metric: took 7m58.01930975s to StartCluster
	I0717 02:04:05.909455   71929 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 02:04:05.909494   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:04:05.909552   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:04:05.952911   71929 cri.go:89] found id: ""
	I0717 02:04:05.952937   71929 logs.go:276] 0 containers: []
	W0717 02:04:05.952949   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 02:04:05.952957   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:04:05.953026   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:04:05.988490   71929 cri.go:89] found id: ""
	I0717 02:04:05.988518   71929 logs.go:276] 0 containers: []
	W0717 02:04:05.988529   71929 logs.go:278] No container was found matching "etcd"
	I0717 02:04:05.988537   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:04:05.988593   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:04:06.025228   71929 cri.go:89] found id: ""
	I0717 02:04:06.025259   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.025269   71929 logs.go:278] No container was found matching "coredns"
	I0717 02:04:06.025277   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:04:06.025342   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:04:06.060563   71929 cri.go:89] found id: ""
	I0717 02:04:06.060589   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.060599   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 02:04:06.060604   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:04:06.060660   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:04:06.095051   71929 cri.go:89] found id: ""
	I0717 02:04:06.095079   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.095091   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 02:04:06.095099   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:04:06.095150   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:04:06.131892   71929 cri.go:89] found id: ""
	I0717 02:04:06.131914   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.131921   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 02:04:06.131927   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:04:06.131973   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:04:06.168893   71929 cri.go:89] found id: ""
	I0717 02:04:06.168919   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.168930   71929 logs.go:278] No container was found matching "kindnet"
	I0717 02:04:06.168937   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 02:04:06.168995   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 02:04:06.206635   71929 cri.go:89] found id: ""
	I0717 02:04:06.206658   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.206668   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 02:04:06.206679   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:04:06.206693   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 02:04:06.308601   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:04:06.308624   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:04:06.308637   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:04:06.422081   71929 logs.go:123] Gathering logs for container status ...
	I0717 02:04:06.422116   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:04:06.467466   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 02:04:06.467496   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:04:06.521420   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 02:04:06.521457   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0717 02:04:06.535167   71929 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 02:04:06.535211   71929 out.go:239] * 
	W0717 02:04:06.535263   71929 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 02:04:06.535292   71929 out.go:239] * 
	W0717 02:04:06.536098   71929 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 02:04:06.539314   71929 out.go:177] 
	W0717 02:04:06.540504   71929 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 02:04:06.540557   71929 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 02:04:06.540579   71929 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 02:04:06.541888   71929 out.go:177] 
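	The failed v1.20.0 start above ends with minikube's own Suggestion line (check 'journalctl -xeu kubelet', retry with --extra-config=kubelet.cgroup-driver=systemd; related issue 4172). As a sketch only, assuming the same cri-o runtime and Kubernetes version as this run, and with <profile> standing in for the profile name (not shown in this excerpt), that retry would look roughly like:
	  minikube start -p <profile> --container-runtime=crio --kubernetes-version=v1.20.0 \
	    --extra-config=kubelet.cgroup-driver=systemd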
	
	
	==> CRI-O <==
	Jul 17 02:10:01 embed-certs-940222 crio[720]: time="2024-07-17 02:10:01.163252668Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182201163226338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1bfc788d-c1fa-4444-a80b-aa43f8256b42 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:10:01 embed-certs-940222 crio[720]: time="2024-07-17 02:10:01.163897017Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ed3592f3-14b0-4a9c-9c2f-2d7090afa10c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:10:01 embed-certs-940222 crio[720]: time="2024-07-17 02:10:01.163948932Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ed3592f3-14b0-4a9c-9c2f-2d7090afa10c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:10:01 embed-certs-940222 crio[720]: time="2024-07-17 02:10:01.164272110Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465,PodSandboxId:230f003f3ea34cc1e41f0ed90dd443ced65b55f24d944222c891d7b7adde9c8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721181421515470942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35aab5a5-6e1b-4572-aabe-a73fb1632252,},Annotations:map[string]string{io.kubernetes.container.hash: 8cd526da,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e47f2fca5b2ae3ae1df486675a0c92025c8f5bbd363b4b542dfdd983c23ed1e6,PodSandboxId:0ff7965ff724480306e7cc70851e146f45e92bfc0b3947d114b46a351625aca4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721181401994677797,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 44f768a2-54fc-4549-a808-df47ce510fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 8a755044,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783,PodSandboxId:00ef9d9de4935f8acfc77b0ab351002c788e80e38e771bcff098b89701e7af25,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181398357407476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wcw97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dd50538-f54d-43f1-bd8a-b9d3131c13f7,},Annotations:map[string]string{io.kubernetes.container.hash: 498ae3bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de,PodSandboxId:82f16ee888ec1043cefb38b9d7dba8f6bebad894edbeff6633949f7347d576e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721181390656105332,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l58xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feae4e89-4900-4399-b
d06-7d179280667d,},Annotations:map[string]string{io.kubernetes.container.hash: 25e04d7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20,PodSandboxId:230f003f3ea34cc1e41f0ed90dd443ced65b55f24d944222c891d7b7adde9c8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721181390629751214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35aab5a5-6e1b-4572-aabe-a73fb1632
252,},Annotations:map[string]string{io.kubernetes.container.hash: 8cd526da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745,PodSandboxId:2be3a62518d5db650a31c3c74384f6ad931d2d976853b9d4105e8b75fae7e552,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721181387004809748,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e63e752d96a0d9c33d7fe914b821e640,},Annota
tions:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787,PodSandboxId:cef3e414381ec6445255c6921dd61d89d4d21591fcb63bc40b5d2d1cf0943fed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721181386994139574,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09041104cff8f86494c416db9d9c095a,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: e6fa5874,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060,PodSandboxId:ee225f609c7b0c28ffa1c5964b757e4195da42bac112e280589ba49de8834321,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721181386925376800,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae0f3175f741e0f48ddc242abc89638f,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509,PodSandboxId:3fdeb024796c574088a93f07258e8555fc75747b27851cf483e87e5c7798d920,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721181386917321866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b052781dfa44cef0464609720ccead54,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 62b42a8e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ed3592f3-14b0-4a9c-9c2f-2d7090afa10c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:10:01 embed-certs-940222 crio[720]: time="2024-07-17 02:10:01.202454178Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6ac9cd71-bc1c-4d61-8376-75f7a916768c name=/runtime.v1.RuntimeService/Version
	Jul 17 02:10:01 embed-certs-940222 crio[720]: time="2024-07-17 02:10:01.202539412Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6ac9cd71-bc1c-4d61-8376-75f7a916768c name=/runtime.v1.RuntimeService/Version
	Jul 17 02:10:01 embed-certs-940222 crio[720]: time="2024-07-17 02:10:01.204446199Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6b320b90-eceb-4cd1-8d34-3610f94321b8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:10:01 embed-certs-940222 crio[720]: time="2024-07-17 02:10:01.204915296Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182201204839436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b320b90-eceb-4cd1-8d34-3610f94321b8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:10:01 embed-certs-940222 crio[720]: time="2024-07-17 02:10:01.205410611Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=521fe24b-3c83-452f-8580-d6a27d6da721 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:10:01 embed-certs-940222 crio[720]: time="2024-07-17 02:10:01.205460918Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=521fe24b-3c83-452f-8580-d6a27d6da721 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:10:01 embed-certs-940222 crio[720]: time="2024-07-17 02:10:01.205664349Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465,PodSandboxId:230f003f3ea34cc1e41f0ed90dd443ced65b55f24d944222c891d7b7adde9c8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721181421515470942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35aab5a5-6e1b-4572-aabe-a73fb1632252,},Annotations:map[string]string{io.kubernetes.container.hash: 8cd526da,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e47f2fca5b2ae3ae1df486675a0c92025c8f5bbd363b4b542dfdd983c23ed1e6,PodSandboxId:0ff7965ff724480306e7cc70851e146f45e92bfc0b3947d114b46a351625aca4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721181401994677797,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 44f768a2-54fc-4549-a808-df47ce510fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 8a755044,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783,PodSandboxId:00ef9d9de4935f8acfc77b0ab351002c788e80e38e771bcff098b89701e7af25,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181398357407476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wcw97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dd50538-f54d-43f1-bd8a-b9d3131c13f7,},Annotations:map[string]string{io.kubernetes.container.hash: 498ae3bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de,PodSandboxId:82f16ee888ec1043cefb38b9d7dba8f6bebad894edbeff6633949f7347d576e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721181390656105332,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l58xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feae4e89-4900-4399-b
d06-7d179280667d,},Annotations:map[string]string{io.kubernetes.container.hash: 25e04d7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20,PodSandboxId:230f003f3ea34cc1e41f0ed90dd443ced65b55f24d944222c891d7b7adde9c8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721181390629751214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35aab5a5-6e1b-4572-aabe-a73fb1632
252,},Annotations:map[string]string{io.kubernetes.container.hash: 8cd526da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745,PodSandboxId:2be3a62518d5db650a31c3c74384f6ad931d2d976853b9d4105e8b75fae7e552,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721181387004809748,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e63e752d96a0d9c33d7fe914b821e640,},Annota
tions:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787,PodSandboxId:cef3e414381ec6445255c6921dd61d89d4d21591fcb63bc40b5d2d1cf0943fed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721181386994139574,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09041104cff8f86494c416db9d9c095a,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: e6fa5874,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060,PodSandboxId:ee225f609c7b0c28ffa1c5964b757e4195da42bac112e280589ba49de8834321,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721181386925376800,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae0f3175f741e0f48ddc242abc89638f,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509,PodSandboxId:3fdeb024796c574088a93f07258e8555fc75747b27851cf483e87e5c7798d920,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721181386917321866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b052781dfa44cef0464609720ccead54,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 62b42a8e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=521fe24b-3c83-452f-8580-d6a27d6da721 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:10:01 embed-certs-940222 crio[720]: time="2024-07-17 02:10:01.246630180Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d231574a-37c6-455b-83ac-a2f029bd2cc1 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:10:01 embed-certs-940222 crio[720]: time="2024-07-17 02:10:01.246945070Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d231574a-37c6-455b-83ac-a2f029bd2cc1 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:10:01 embed-certs-940222 crio[720]: time="2024-07-17 02:10:01.248201719Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b411710d-ee80-4358-a170-dbfaa16bc2ba name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:10:01 embed-certs-940222 crio[720]: time="2024-07-17 02:10:01.248592351Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182201248568696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b411710d-ee80-4358-a170-dbfaa16bc2ba name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:10:01 embed-certs-940222 crio[720]: time="2024-07-17 02:10:01.249180872Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c74684ae-10af-4078-b401-8f74f84b54bd name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:10:01 embed-certs-940222 crio[720]: time="2024-07-17 02:10:01.249245643Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c74684ae-10af-4078-b401-8f74f84b54bd name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:10:01 embed-certs-940222 crio[720]: time="2024-07-17 02:10:01.249507705Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465,PodSandboxId:230f003f3ea34cc1e41f0ed90dd443ced65b55f24d944222c891d7b7adde9c8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721181421515470942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35aab5a5-6e1b-4572-aabe-a73fb1632252,},Annotations:map[string]string{io.kubernetes.container.hash: 8cd526da,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e47f2fca5b2ae3ae1df486675a0c92025c8f5bbd363b4b542dfdd983c23ed1e6,PodSandboxId:0ff7965ff724480306e7cc70851e146f45e92bfc0b3947d114b46a351625aca4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721181401994677797,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 44f768a2-54fc-4549-a808-df47ce510fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 8a755044,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783,PodSandboxId:00ef9d9de4935f8acfc77b0ab351002c788e80e38e771bcff098b89701e7af25,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181398357407476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wcw97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dd50538-f54d-43f1-bd8a-b9d3131c13f7,},Annotations:map[string]string{io.kubernetes.container.hash: 498ae3bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de,PodSandboxId:82f16ee888ec1043cefb38b9d7dba8f6bebad894edbeff6633949f7347d576e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721181390656105332,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l58xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feae4e89-4900-4399-b
d06-7d179280667d,},Annotations:map[string]string{io.kubernetes.container.hash: 25e04d7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20,PodSandboxId:230f003f3ea34cc1e41f0ed90dd443ced65b55f24d944222c891d7b7adde9c8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721181390629751214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35aab5a5-6e1b-4572-aabe-a73fb1632
252,},Annotations:map[string]string{io.kubernetes.container.hash: 8cd526da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745,PodSandboxId:2be3a62518d5db650a31c3c74384f6ad931d2d976853b9d4105e8b75fae7e552,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721181387004809748,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e63e752d96a0d9c33d7fe914b821e640,},Annota
tions:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787,PodSandboxId:cef3e414381ec6445255c6921dd61d89d4d21591fcb63bc40b5d2d1cf0943fed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721181386994139574,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09041104cff8f86494c416db9d9c095a,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: e6fa5874,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060,PodSandboxId:ee225f609c7b0c28ffa1c5964b757e4195da42bac112e280589ba49de8834321,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721181386925376800,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae0f3175f741e0f48ddc242abc89638f,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509,PodSandboxId:3fdeb024796c574088a93f07258e8555fc75747b27851cf483e87e5c7798d920,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721181386917321866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b052781dfa44cef0464609720ccead54,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 62b42a8e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c74684ae-10af-4078-b401-8f74f84b54bd name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:10:01 embed-certs-940222 crio[720]: time="2024-07-17 02:10:01.290991154Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ab913492-84f2-4f9c-88ff-35d7fa775c02 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:10:01 embed-certs-940222 crio[720]: time="2024-07-17 02:10:01.291064442Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ab913492-84f2-4f9c-88ff-35d7fa775c02 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:10:01 embed-certs-940222 crio[720]: time="2024-07-17 02:10:01.292408221Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d8b31eee-530f-4a5e-be48-66a4b5fb872b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:10:01 embed-certs-940222 crio[720]: time="2024-07-17 02:10:01.293055547Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182201293027608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8b31eee-530f-4a5e-be48-66a4b5fb872b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:10:01 embed-certs-940222 crio[720]: time="2024-07-17 02:10:01.293577697Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54b57710-f57e-40ca-8e7d-4d6735bb203c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:10:01 embed-certs-940222 crio[720]: time="2024-07-17 02:10:01.293629119Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54b57710-f57e-40ca-8e7d-4d6735bb203c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:10:01 embed-certs-940222 crio[720]: time="2024-07-17 02:10:01.294127387Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465,PodSandboxId:230f003f3ea34cc1e41f0ed90dd443ced65b55f24d944222c891d7b7adde9c8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721181421515470942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35aab5a5-6e1b-4572-aabe-a73fb1632252,},Annotations:map[string]string{io.kubernetes.container.hash: 8cd526da,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e47f2fca5b2ae3ae1df486675a0c92025c8f5bbd363b4b542dfdd983c23ed1e6,PodSandboxId:0ff7965ff724480306e7cc70851e146f45e92bfc0b3947d114b46a351625aca4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721181401994677797,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 44f768a2-54fc-4549-a808-df47ce510fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 8a755044,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783,PodSandboxId:00ef9d9de4935f8acfc77b0ab351002c788e80e38e771bcff098b89701e7af25,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181398357407476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wcw97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dd50538-f54d-43f1-bd8a-b9d3131c13f7,},Annotations:map[string]string{io.kubernetes.container.hash: 498ae3bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de,PodSandboxId:82f16ee888ec1043cefb38b9d7dba8f6bebad894edbeff6633949f7347d576e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721181390656105332,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l58xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feae4e89-4900-4399-b
d06-7d179280667d,},Annotations:map[string]string{io.kubernetes.container.hash: 25e04d7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20,PodSandboxId:230f003f3ea34cc1e41f0ed90dd443ced65b55f24d944222c891d7b7adde9c8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721181390629751214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35aab5a5-6e1b-4572-aabe-a73fb1632
252,},Annotations:map[string]string{io.kubernetes.container.hash: 8cd526da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745,PodSandboxId:2be3a62518d5db650a31c3c74384f6ad931d2d976853b9d4105e8b75fae7e552,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721181387004809748,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e63e752d96a0d9c33d7fe914b821e640,},Annota
tions:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787,PodSandboxId:cef3e414381ec6445255c6921dd61d89d4d21591fcb63bc40b5d2d1cf0943fed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721181386994139574,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09041104cff8f86494c416db9d9c095a,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: e6fa5874,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060,PodSandboxId:ee225f609c7b0c28ffa1c5964b757e4195da42bac112e280589ba49de8834321,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721181386925376800,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae0f3175f741e0f48ddc242abc89638f,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509,PodSandboxId:3fdeb024796c574088a93f07258e8555fc75747b27851cf483e87e5c7798d920,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721181386917321866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b052781dfa44cef0464609720ccead54,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 62b42a8e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54b57710-f57e-40ca-8e7d-4d6735bb203c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7fac56f23fdf5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   230f003f3ea34       storage-provisioner
	e47f2fca5b2ae       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   0ff7965ff7244       busybox
	110368a2f3e57       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   00ef9d9de4935       coredns-7db6d8ff4d-wcw97
	0012a63297ec6       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      13 minutes ago      Running             kube-proxy                1                   82f16ee888ec1       kube-proxy-l58xk
	51a6cb79762ec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   230f003f3ea34       storage-provisioner
	211063fd97af0       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      13 minutes ago      Running             kube-scheduler            1                   2be3a62518d5d       kube-scheduler-embed-certs-940222
	b1af0adb58a0b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   cef3e414381ec       etcd-embed-certs-940222
	5e124648f9a37       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      13 minutes ago      Running             kube-controller-manager   1                   ee225f609c7b0       kube-controller-manager-embed-certs-940222
	ffa398702fb31       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      13 minutes ago      Running             kube-apiserver            1                   3fdeb024796c5       kube-apiserver-embed-certs-940222
	
	
	==> coredns [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50006 - 17386 "HINFO IN 3415240562246251088.4894184447526837990. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010682562s
	
	
	==> describe nodes <==
	Name:               embed-certs-940222
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-940222
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=embed-certs-940222
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T01_47_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:47:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-940222
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 02:09:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 02:07:12 +0000   Wed, 17 Jul 2024 01:47:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 02:07:12 +0000   Wed, 17 Jul 2024 01:47:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 02:07:12 +0000   Wed, 17 Jul 2024 01:47:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 02:07:12 +0000   Wed, 17 Jul 2024 01:56:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.225
	  Hostname:    embed-certs-940222
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a278df33fdef4860a3e7518e7f996e0f
	  System UUID:                a278df33-fdef-4860-a3e7-518e7f996e0f
	  Boot ID:                    87f69d3f-fb13-496f-b419-ce5b68d79a00
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7db6d8ff4d-wcw97                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-embed-certs-940222                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-embed-certs-940222             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-embed-certs-940222    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-l58xk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-embed-certs-940222             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-569cc877fc-rhp7b               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     22m                kubelet          Node embed-certs-940222 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node embed-certs-940222 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node embed-certs-940222 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeReady                22m                kubelet          Node embed-certs-940222 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node embed-certs-940222 event: Registered Node embed-certs-940222 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-940222 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-940222 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-940222 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-940222 event: Registered Node embed-certs-940222 in Controller
	
	
	==> dmesg <==
	[Jul17 01:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.063983] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.054912] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.812591] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.502206] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.585517] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.398692] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.061606] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.078084] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.165398] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.149231] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.278520] systemd-fstab-generator[705]: Ignoring "noauto" option for root device
	[  +4.425711] systemd-fstab-generator[803]: Ignoring "noauto" option for root device
	[  +0.063933] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.142913] systemd-fstab-generator[926]: Ignoring "noauto" option for root device
	[  +4.562234] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.545851] systemd-fstab-generator[1547]: Ignoring "noauto" option for root device
	[  +4.196995] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.467916] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787] <==
	{"level":"info","ts":"2024-07-17T01:56:27.446617Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"471aba4d800e3f5d","local-member-id":"7978524bf3afee6b","added-peer-id":"7978524bf3afee6b","added-peer-peer-urls":["https://192.168.72.225:2380"]}
	{"level":"info","ts":"2024-07-17T01:56:27.44681Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"471aba4d800e3f5d","local-member-id":"7978524bf3afee6b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:56:27.446925Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T01:56:27.451336Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T01:56:27.451621Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7978524bf3afee6b","initial-advertise-peer-urls":["https://192.168.72.225:2380"],"listen-peer-urls":["https://192.168.72.225:2380"],"advertise-client-urls":["https://192.168.72.225:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.225:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T01:56:27.451703Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T01:56:27.451925Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.225:2380"}
	{"level":"info","ts":"2024-07-17T01:56:27.452042Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.225:2380"}
	{"level":"info","ts":"2024-07-17T01:56:28.504499Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7978524bf3afee6b is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T01:56:28.504619Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7978524bf3afee6b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T01:56:28.504695Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7978524bf3afee6b received MsgPreVoteResp from 7978524bf3afee6b at term 2"}
	{"level":"info","ts":"2024-07-17T01:56:28.504744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7978524bf3afee6b became candidate at term 3"}
	{"level":"info","ts":"2024-07-17T01:56:28.504769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7978524bf3afee6b received MsgVoteResp from 7978524bf3afee6b at term 3"}
	{"level":"info","ts":"2024-07-17T01:56:28.504796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7978524bf3afee6b became leader at term 3"}
	{"level":"info","ts":"2024-07-17T01:56:28.504822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7978524bf3afee6b elected leader 7978524bf3afee6b at term 3"}
	{"level":"info","ts":"2024-07-17T01:56:28.506489Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7978524bf3afee6b","local-member-attributes":"{Name:embed-certs-940222 ClientURLs:[https://192.168.72.225:2379]}","request-path":"/0/members/7978524bf3afee6b/attributes","cluster-id":"471aba4d800e3f5d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T01:56:28.506724Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:56:28.509507Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T01:56:28.531332Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:56:28.531652Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T01:56:28.531686Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T01:56:28.533154Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.225:2379"}
	{"level":"info","ts":"2024-07-17T02:06:28.546275Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":798}
	{"level":"info","ts":"2024-07-17T02:06:28.556153Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":798,"took":"9.520869ms","hash":148558858,"current-db-size-bytes":2138112,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2138112,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-07-17T02:06:28.556224Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":148558858,"revision":798,"compact-revision":-1}
	
	
	==> kernel <==
	 02:10:01 up 13 min,  0 users,  load average: 0.17, 0.22, 0.17
	Linux embed-certs-940222 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509] <==
	I0717 02:04:30.919325       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:06:29.918684       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 02:06:29.918813       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0717 02:06:30.919029       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 02:06:30.919117       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 02:06:30.919127       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:06:30.919184       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 02:06:30.919235       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 02:06:30.920293       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:07:30.919966       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 02:07:30.920075       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 02:07:30.920082       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:07:30.921286       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 02:07:30.921324       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 02:07:30.921332       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:09:30.920517       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 02:09:30.920946       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 02:09:30.920988       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:09:30.921960       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 02:09:30.922029       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 02:09:30.922064       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060] <==
	I0717 02:04:13.290752       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:04:42.842801       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:04:43.297664       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:05:12.848562       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:05:13.308762       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:05:42.853830       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:05:43.317029       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:06:12.858284       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:06:13.325679       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:06:42.863765       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:06:43.333934       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:07:12.868817       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:07:13.341207       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 02:07:40.317520       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="300.755µs"
	E0717 02:07:42.874402       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:07:43.348455       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 02:07:52.319383       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="188.614µs"
	E0717 02:08:12.878982       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:08:13.357547       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:08:42.884257       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:08:43.364666       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:09:12.892376       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:09:13.372734       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:09:42.897696       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:09:43.380992       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de] <==
	I0717 01:56:30.875744       1 server_linux.go:69] "Using iptables proxy"
	I0717 01:56:30.907135       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.225"]
	I0717 01:56:30.977148       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 01:56:30.977274       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 01:56:30.977348       1 server_linux.go:165] "Using iptables Proxier"
	I0717 01:56:30.980199       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 01:56:30.980523       1 server.go:872] "Version info" version="v1.30.2"
	I0717 01:56:30.980624       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:56:30.982386       1 config.go:192] "Starting service config controller"
	I0717 01:56:30.982496       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 01:56:30.982547       1 config.go:101] "Starting endpoint slice config controller"
	I0717 01:56:30.982565       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 01:56:30.984635       1 config.go:319] "Starting node config controller"
	I0717 01:56:30.984667       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 01:56:31.082664       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 01:56:31.082743       1 shared_informer.go:320] Caches are synced for service config
	I0717 01:56:31.084839       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745] <==
	I0717 01:56:27.681269       1 serving.go:380] Generated self-signed cert in-memory
	W0717 01:56:29.886758       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 01:56:29.887055       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 01:56:29.887089       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 01:56:29.887158       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 01:56:29.927020       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0717 01:56:29.927128       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:56:29.932103       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0717 01:56:29.932217       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 01:56:29.932241       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 01:56:29.932256       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 01:56:30.032393       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 02:07:26 embed-certs-940222 kubelet[933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:07:26 embed-certs-940222 kubelet[933]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:07:26 embed-certs-940222 kubelet[933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:07:26 embed-certs-940222 kubelet[933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:07:40 embed-certs-940222 kubelet[933]: E0717 02:07:40.299415     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rhp7b" podUID="07ffb1fa-240e-4c40-9ce4-93a1b51e179b"
	Jul 17 02:07:52 embed-certs-940222 kubelet[933]: E0717 02:07:52.299226     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rhp7b" podUID="07ffb1fa-240e-4c40-9ce4-93a1b51e179b"
	Jul 17 02:08:07 embed-certs-940222 kubelet[933]: E0717 02:08:07.298152     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rhp7b" podUID="07ffb1fa-240e-4c40-9ce4-93a1b51e179b"
	Jul 17 02:08:22 embed-certs-940222 kubelet[933]: E0717 02:08:22.298341     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rhp7b" podUID="07ffb1fa-240e-4c40-9ce4-93a1b51e179b"
	Jul 17 02:08:26 embed-certs-940222 kubelet[933]: E0717 02:08:26.322249     933 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:08:26 embed-certs-940222 kubelet[933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:08:26 embed-certs-940222 kubelet[933]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:08:26 embed-certs-940222 kubelet[933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:08:26 embed-certs-940222 kubelet[933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:08:35 embed-certs-940222 kubelet[933]: E0717 02:08:35.299677     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rhp7b" podUID="07ffb1fa-240e-4c40-9ce4-93a1b51e179b"
	Jul 17 02:08:48 embed-certs-940222 kubelet[933]: E0717 02:08:48.299032     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rhp7b" podUID="07ffb1fa-240e-4c40-9ce4-93a1b51e179b"
	Jul 17 02:09:03 embed-certs-940222 kubelet[933]: E0717 02:09:03.298717     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rhp7b" podUID="07ffb1fa-240e-4c40-9ce4-93a1b51e179b"
	Jul 17 02:09:16 embed-certs-940222 kubelet[933]: E0717 02:09:16.300445     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rhp7b" podUID="07ffb1fa-240e-4c40-9ce4-93a1b51e179b"
	Jul 17 02:09:26 embed-certs-940222 kubelet[933]: E0717 02:09:26.322082     933 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:09:26 embed-certs-940222 kubelet[933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:09:26 embed-certs-940222 kubelet[933]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:09:26 embed-certs-940222 kubelet[933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:09:26 embed-certs-940222 kubelet[933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:09:29 embed-certs-940222 kubelet[933]: E0717 02:09:29.298641     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rhp7b" podUID="07ffb1fa-240e-4c40-9ce4-93a1b51e179b"
	Jul 17 02:09:43 embed-certs-940222 kubelet[933]: E0717 02:09:43.299133     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rhp7b" podUID="07ffb1fa-240e-4c40-9ce4-93a1b51e179b"
	Jul 17 02:09:56 embed-certs-940222 kubelet[933]: E0717 02:09:56.298770     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rhp7b" podUID="07ffb1fa-240e-4c40-9ce4-93a1b51e179b"
	
	
	==> storage-provisioner [51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20] <==
	I0717 01:56:30.769370       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0717 01:57:00.773605       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465] <==
	I0717 01:57:01.617089       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 01:57:01.625960       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 01:57:01.626056       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 01:57:01.640791       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 01:57:01.640991       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-940222_c3536443-ec29-447d-af1d-f6bbbbd45845!
	I0717 01:57:01.643810       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f05084ec-ac5f-4bf7-b888-599003faf3d0", APIVersion:"v1", ResourceVersion:"563", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-940222_c3536443-ec29-447d-af1d-f6bbbbd45845 became leader
	I0717 01:57:01.742297       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-940222_c3536443-ec29-447d-af1d-f6bbbbd45845!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-940222 -n embed-certs-940222
E0717 02:10:03.264833   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/custom-flannel-894370/client.crt: no such file or directory
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-940222 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-rhp7b
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-940222 describe pod metrics-server-569cc877fc-rhp7b
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-940222 describe pod metrics-server-569cc877fc-rhp7b: exit status 1 (60.006407ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-rhp7b" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-940222 describe pod metrics-server-569cc877fc-rhp7b: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0717 02:02:58.379266   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
E0717 02:02:59.044849   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/auto-894370/client.crt: no such file or directory
E0717 02:03:00.389169   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kindnet-894370/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-391501 -n no-preload-391501
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-17 02:11:15.364818748 +0000 UTC m=+6574.412699865
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-391501 -n no-preload-391501
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-391501 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-391501 logs -n 25: (2.131983777s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-894370 sudo cat                              | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo                                  | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo                                  | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo                                  | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo find                             | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo crio                             | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-894370                                       | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	| delete  | -p                                                     | disable-driver-mounts-255698 | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | disable-driver-mounts-255698                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:48 UTC |
	|         | default-k8s-diff-port-738184                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-940222            | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-940222                                  | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-738184  | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC | 17 Jul 24 01:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC |                     |
	|         | default-k8s-diff-port-738184                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-391501             | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC | 17 Jul 24 01:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-391501                                   | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-940222                 | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-901761        | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-940222                                  | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC | 17 Jul 24 02:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-738184       | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-391501                  | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:59 UTC |
	|         | default-k8s-diff-port-738184                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p no-preload-391501 --memory=2200                     | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 02:02 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-901761                              | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-901761             | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-901761                              | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 01:51:47
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 01:51:47.395737   71929 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:51:47.396000   71929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:51:47.396010   71929 out.go:304] Setting ErrFile to fd 2...
	I0717 01:51:47.396016   71929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:51:47.396184   71929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 01:51:47.396684   71929 out.go:298] Setting JSON to false
	I0717 01:51:47.397549   71929 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5649,"bootTime":1721175458,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:51:47.397606   71929 start.go:139] virtualization: kvm guest
	I0717 01:51:47.399758   71929 out.go:177] * [old-k8s-version-901761] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:51:47.400960   71929 notify.go:220] Checking for updates...
	I0717 01:51:47.400966   71929 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 01:51:47.402266   71929 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:51:47.403356   71929 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:51:47.404532   71929 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 01:51:47.405524   71929 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:51:47.406572   71929 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:51:47.407935   71929 config.go:182] Loaded profile config "old-k8s-version-901761": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 01:51:47.408358   71929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:51:47.408427   71929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:51:47.422931   71929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46821
	I0717 01:51:47.423315   71929 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:51:47.423809   71929 main.go:141] libmachine: Using API Version  1
	I0717 01:51:47.423831   71929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:51:47.424123   71929 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:51:47.424259   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:51:47.426227   71929 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0717 01:51:47.427500   71929 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:51:47.427770   71929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:51:47.427801   71929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:51:47.442080   71929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36301
	I0717 01:51:47.442438   71929 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:51:47.442901   71929 main.go:141] libmachine: Using API Version  1
	I0717 01:51:47.442924   71929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:51:47.443208   71929 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:51:47.443382   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:51:47.476327   71929 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 01:51:47.477607   71929 start.go:297] selected driver: kvm2
	I0717 01:51:47.477620   71929 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:51:47.477762   71929 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:51:47.478432   71929 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:51:47.478541   71929 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19264-3908/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:51:47.493611   71929 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:51:47.493967   71929 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:51:47.494039   71929 cni.go:84] Creating CNI manager for ""
	I0717 01:51:47.494056   71929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:51:47.494147   71929 start.go:340] cluster config:
	{Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:51:47.494271   71929 iso.go:125] acquiring lock: {Name:mk1e382ea32906eee794c9b90164432d2562b456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:51:47.496056   71929 out.go:177] * Starting "old-k8s-version-901761" primary control-plane node in "old-k8s-version-901761" cluster
	I0717 01:51:45.178864   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:51:47.497229   71929 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 01:51:47.497266   71929 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 01:51:47.497279   71929 cache.go:56] Caching tarball of preloaded images
	I0717 01:51:47.497368   71929 preload.go:172] Found /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 01:51:47.497379   71929 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0717 01:51:47.497484   71929 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/config.json ...
	I0717 01:51:47.497671   71929 start.go:360] acquireMachinesLock for old-k8s-version-901761: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:51:51.258826   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:51:54.330879   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:00.410811   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:03.482811   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:09.562828   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:12.634828   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:18.714910   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:21.786892   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:27.866863   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:30.938805   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:37.022827   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:40.090853   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:46.170839   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:49.242854   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:55.322824   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:58.394792   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:04.474811   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:07.546855   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:13.626861   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:16.698832   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:22.778828   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:25.850864   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:31.930814   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:35.002842   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:41.082839   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:44.154796   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:50.234823   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:53.306914   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:59.386835   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:02.458751   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:08.538853   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:11.610833   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:17.690816   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:20.762793   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:26.842837   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:29.914866   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:35.994838   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:39.066806   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:45.146846   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:48.218841   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:54.298823   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:57.370838   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:55:00.375050   71522 start.go:364] duration metric: took 3m54.700923144s to acquireMachinesLock for "default-k8s-diff-port-738184"
	I0717 01:55:00.375103   71522 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:55:00.375110   71522 fix.go:54] fixHost starting: 
	I0717 01:55:00.375500   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:00.375532   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:00.390583   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39651
	I0717 01:55:00.390957   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:00.391392   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:00.391412   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:00.391704   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:00.391927   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:00.392069   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:00.393467   71522 fix.go:112] recreateIfNeeded on default-k8s-diff-port-738184: state=Stopped err=<nil>
	I0717 01:55:00.393508   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	W0717 01:55:00.393658   71522 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:55:00.395826   71522 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-738184" ...
	I0717 01:55:00.397256   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Start
	I0717 01:55:00.397401   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Ensuring networks are active...
	I0717 01:55:00.398079   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Ensuring network default is active
	I0717 01:55:00.398390   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Ensuring network mk-default-k8s-diff-port-738184 is active
	I0717 01:55:00.398710   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Getting domain xml...
	I0717 01:55:00.399275   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Creating domain...
	I0717 01:55:00.372573   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:55:00.372621   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:55:00.372933   71146 buildroot.go:166] provisioning hostname "embed-certs-940222"
	I0717 01:55:00.372957   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:55:00.373131   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:55:00.374934   71146 machine.go:97] duration metric: took 4m37.428393808s to provisionDockerMachine
	I0717 01:55:00.374969   71146 fix.go:56] duration metric: took 4m37.449104762s for fixHost
	I0717 01:55:00.374974   71146 start.go:83] releasing machines lock for "embed-certs-940222", held for 4m37.449121677s
	W0717 01:55:00.374996   71146 start.go:714] error starting host: provision: host is not running
	W0717 01:55:00.375080   71146 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 01:55:00.375088   71146 start.go:729] Will try again in 5 seconds ...
	I0717 01:55:01.590292   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting to get IP...
	I0717 01:55:01.591187   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:01.591589   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:01.591657   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:01.591578   72583 retry.go:31] will retry after 266.165899ms: waiting for machine to come up
	I0717 01:55:01.859307   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:01.859724   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:01.859751   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:01.859695   72583 retry.go:31] will retry after 282.941451ms: waiting for machine to come up
	I0717 01:55:02.144389   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:02.144756   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:02.144787   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:02.144701   72583 retry.go:31] will retry after 327.203414ms: waiting for machine to come up
	I0717 01:55:02.473217   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:02.473681   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:02.473705   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:02.473606   72583 retry.go:31] will retry after 553.917043ms: waiting for machine to come up
	I0717 01:55:03.029379   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:03.029762   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:03.029783   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:03.029738   72583 retry.go:31] will retry after 617.312209ms: waiting for machine to come up
	I0717 01:55:03.648372   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:03.648701   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:03.648733   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:03.648670   72583 retry.go:31] will retry after 641.28503ms: waiting for machine to come up
	I0717 01:55:04.291493   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:04.291986   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:04.292019   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:04.291870   72583 retry.go:31] will retry after 1.133455116s: waiting for machine to come up
	I0717 01:55:05.426672   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:05.426943   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:05.426972   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:05.426892   72583 retry.go:31] will retry after 1.00384113s: waiting for machine to come up
	I0717 01:55:05.376907   71146 start.go:360] acquireMachinesLock for embed-certs-940222: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:55:06.432146   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:06.432502   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:06.432525   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:06.432477   72583 retry.go:31] will retry after 1.472142907s: waiting for machine to come up
	I0717 01:55:07.906974   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:07.907407   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:07.907437   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:07.907336   72583 retry.go:31] will retry after 1.775986179s: waiting for machine to come up
	I0717 01:55:09.685396   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:09.685792   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:09.685822   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:09.685756   72583 retry.go:31] will retry after 2.663700716s: waiting for machine to come up
	I0717 01:55:12.351616   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:12.351985   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:12.352017   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:12.351921   72583 retry.go:31] will retry after 2.409004894s: waiting for machine to come up
	I0717 01:55:14.763493   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:14.763859   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:14.763876   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:14.763828   72583 retry.go:31] will retry after 3.049843419s: waiting for machine to come up
	I0717 01:55:19.031713   71603 start.go:364] duration metric: took 4m8.751453112s to acquireMachinesLock for "no-preload-391501"
	I0717 01:55:19.031779   71603 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:55:19.031787   71603 fix.go:54] fixHost starting: 
	I0717 01:55:19.032306   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:19.032352   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:19.049376   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41829
	I0717 01:55:19.049877   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:19.050387   71603 main.go:141] libmachine: Using API Version  1
	I0717 01:55:19.050409   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:19.050752   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:19.050935   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:19.051104   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 01:55:19.052805   71603 fix.go:112] recreateIfNeeded on no-preload-391501: state=Stopped err=<nil>
	I0717 01:55:19.052832   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	W0717 01:55:19.052989   71603 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:55:19.056667   71603 out.go:177] * Restarting existing kvm2 VM for "no-preload-391501" ...
	I0717 01:55:19.058078   71603 main.go:141] libmachine: (no-preload-391501) Calling .Start
	I0717 01:55:19.058314   71603 main.go:141] libmachine: (no-preload-391501) Ensuring networks are active...
	I0717 01:55:19.059126   71603 main.go:141] libmachine: (no-preload-391501) Ensuring network default is active
	I0717 01:55:19.059466   71603 main.go:141] libmachine: (no-preload-391501) Ensuring network mk-no-preload-391501 is active
	I0717 01:55:19.059958   71603 main.go:141] libmachine: (no-preload-391501) Getting domain xml...
	I0717 01:55:19.060746   71603 main.go:141] libmachine: (no-preload-391501) Creating domain...
	I0717 01:55:17.816307   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.816746   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Found IP for machine: 192.168.39.170
	I0717 01:55:17.816765   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Reserving static IP address...
	I0717 01:55:17.816776   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has current primary IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.817337   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Reserved static IP address: 192.168.39.170
	I0717 01:55:17.817366   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for SSH to be available...
	I0717 01:55:17.817389   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-738184", mac: "52:54:00:e6:fe:fe", ip: "192.168.39.170"} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:17.817420   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | skip adding static IP to network mk-default-k8s-diff-port-738184 - found existing host DHCP lease matching {name: "default-k8s-diff-port-738184", mac: "52:54:00:e6:fe:fe", ip: "192.168.39.170"}
	I0717 01:55:17.817443   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Getting to WaitForSSH function...
	I0717 01:55:17.819693   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.820022   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:17.820056   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.820171   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Using SSH client type: external
	I0717 01:55:17.820203   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa (-rw-------)
	I0717 01:55:17.820245   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.170 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:55:17.820259   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | About to run SSH command:
	I0717 01:55:17.820280   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | exit 0
	I0717 01:55:17.942987   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | SSH cmd err, output: <nil>: 
	I0717 01:55:17.943370   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetConfigRaw
	I0717 01:55:17.943945   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetIP
	I0717 01:55:17.946638   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.946993   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:17.947021   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.947268   71522 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/config.json ...
	I0717 01:55:17.947479   71522 machine.go:94] provisionDockerMachine start ...
	I0717 01:55:17.947497   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:17.947732   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:17.950032   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.950367   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:17.950397   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.950489   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:17.950664   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:17.950827   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:17.950959   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:17.951108   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:17.951300   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:17.951311   71522 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:55:18.051147   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:55:18.051180   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetMachineName
	I0717 01:55:18.051421   71522 buildroot.go:166] provisioning hostname "default-k8s-diff-port-738184"
	I0717 01:55:18.051456   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetMachineName
	I0717 01:55:18.051655   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.054480   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.055024   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.055053   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.055262   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.055473   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.055643   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.055783   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.055928   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:18.056077   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:18.056089   71522 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-738184 && echo "default-k8s-diff-port-738184" | sudo tee /etc/hostname
	I0717 01:55:18.170268   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-738184
	
	I0717 01:55:18.170299   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.173037   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.173337   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.173369   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.173485   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.173673   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.173851   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.173957   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.174110   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:18.174322   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:18.174349   71522 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-738184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-738184/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-738184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:55:18.279963   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:55:18.279997   71522 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:55:18.280030   71522 buildroot.go:174] setting up certificates
	I0717 01:55:18.280042   71522 provision.go:84] configureAuth start
	I0717 01:55:18.280054   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetMachineName
	I0717 01:55:18.280393   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetIP
	I0717 01:55:18.282887   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.283201   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.283231   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.283370   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.285399   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.285662   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.285691   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.285795   71522 provision.go:143] copyHostCerts
	I0717 01:55:18.285865   71522 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:55:18.285884   71522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:55:18.285971   71522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:55:18.286084   71522 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:55:18.286095   71522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:55:18.286129   71522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:55:18.286205   71522 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:55:18.286214   71522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:55:18.286247   71522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:55:18.286313   71522 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-738184 san=[127.0.0.1 192.168.39.170 default-k8s-diff-port-738184 localhost minikube]
	I0717 01:55:18.386547   71522 provision.go:177] copyRemoteCerts
	I0717 01:55:18.386627   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:55:18.386658   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.388930   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.389292   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.389322   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.389465   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.389662   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.389804   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.389944   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:18.469031   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:55:18.493607   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0717 01:55:18.517024   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 01:55:18.539757   71522 provision.go:87] duration metric: took 259.702663ms to configureAuth
	I0717 01:55:18.539793   71522 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:55:18.540064   71522 config.go:182] Loaded profile config "default-k8s-diff-port-738184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:55:18.540178   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.542831   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.543174   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.543196   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.543388   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.543599   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.543843   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.544011   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.544172   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:18.544343   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:18.544362   71522 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:55:18.804633   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:55:18.804690   71522 machine.go:97] duration metric: took 857.197634ms to provisionDockerMachine
	I0717 01:55:18.804706   71522 start.go:293] postStartSetup for "default-k8s-diff-port-738184" (driver="kvm2")
	I0717 01:55:18.804720   71522 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:55:18.804743   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:18.805049   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:55:18.805073   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.807835   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.808127   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.808147   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.808319   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.808497   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.808670   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.808823   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:18.889297   71522 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:55:18.893587   71522 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:55:18.893615   71522 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:55:18.893694   71522 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:55:18.893779   71522 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:55:18.893886   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:55:18.903319   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:18.927700   71522 start.go:296] duration metric: took 122.979492ms for postStartSetup
	I0717 01:55:18.927748   71522 fix.go:56] duration metric: took 18.552636525s for fixHost
	I0717 01:55:18.927775   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.930483   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.930768   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.930791   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.931004   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.931192   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.931361   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.931511   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.931677   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:18.931873   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:18.931887   71522 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:55:19.031515   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181319.004563133
	
	I0717 01:55:19.031541   71522 fix.go:216] guest clock: 1721181319.004563133
	I0717 01:55:19.031552   71522 fix.go:229] Guest: 2024-07-17 01:55:19.004563133 +0000 UTC Remote: 2024-07-17 01:55:18.927754613 +0000 UTC m=+253.390645105 (delta=76.80852ms)
	I0717 01:55:19.031611   71522 fix.go:200] guest clock delta is within tolerance: 76.80852ms
	I0717 01:55:19.031623   71522 start.go:83] releasing machines lock for "default-k8s-diff-port-738184", held for 18.656540342s
	I0717 01:55:19.031661   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:19.031940   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetIP
	I0717 01:55:19.034537   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.034881   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:19.034911   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.035036   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:19.035557   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:19.035750   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:19.035822   71522 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:55:19.035875   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:19.036000   71522 ssh_runner.go:195] Run: cat /version.json
	I0717 01:55:19.036027   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:19.038509   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.038860   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:19.038892   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.038935   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.038982   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:19.039156   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:19.039328   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:19.039361   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:19.039389   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.039488   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:19.039537   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:19.039702   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:19.039835   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:19.040047   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:19.140208   71522 ssh_runner.go:195] Run: systemctl --version
	I0717 01:55:19.146454   71522 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:55:19.293584   71522 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:55:19.300750   71522 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:55:19.300817   71522 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:55:19.321596   71522 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:55:19.321621   71522 start.go:495] detecting cgroup driver to use...
	I0717 01:55:19.321684   71522 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:55:19.337664   71522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:55:19.351856   71522 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:55:19.351922   71522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:55:19.366355   71522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:55:19.380735   71522 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:55:19.495916   71522 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:55:19.646426   71522 docker.go:233] disabling docker service ...
	I0717 01:55:19.646501   71522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:55:19.665764   71522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:55:19.683893   71522 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:55:19.814704   71522 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:55:19.958389   71522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:55:19.973223   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:55:19.992869   71522 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 01:55:19.992937   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.003696   71522 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:55:20.003762   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.014415   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.025303   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.036715   71522 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:55:20.047872   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.059666   71522 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.079479   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.092424   71522 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:55:20.103225   71522 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:55:20.103284   71522 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:55:20.120620   71522 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:55:20.136439   71522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:20.284796   71522 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:55:20.427605   71522 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:55:20.427698   71522 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:55:20.433477   71522 start.go:563] Will wait 60s for crictl version
	I0717 01:55:20.433537   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:55:20.437399   71522 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:55:20.479192   71522 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:55:20.479289   71522 ssh_runner.go:195] Run: crio --version
	I0717 01:55:20.507655   71522 ssh_runner.go:195] Run: crio --version
	I0717 01:55:20.537084   71522 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 01:55:20.538435   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetIP
	I0717 01:55:20.541200   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:20.541493   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:20.541531   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:20.541772   71522 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 01:55:20.546261   71522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:55:20.559802   71522 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-738184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-738184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:55:20.559946   71522 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:55:20.560001   71522 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:55:20.381503   71603 main.go:141] libmachine: (no-preload-391501) Waiting to get IP...
	I0717 01:55:20.382632   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:20.383105   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:20.383210   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:20.383077   72724 retry.go:31] will retry after 193.198351ms: waiting for machine to come up
	I0717 01:55:20.577611   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:20.578117   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:20.578145   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:20.578067   72724 retry.go:31] will retry after 254.406992ms: waiting for machine to come up
	I0717 01:55:20.834633   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:20.835088   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:20.835116   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:20.835057   72724 retry.go:31] will retry after 459.446617ms: waiting for machine to come up
	I0717 01:55:21.295939   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:21.296384   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:21.296409   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:21.296343   72724 retry.go:31] will retry after 515.654185ms: waiting for machine to come up
	I0717 01:55:21.813613   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:21.814140   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:21.814178   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:21.814104   72724 retry.go:31] will retry after 652.322198ms: waiting for machine to come up
	I0717 01:55:22.468223   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:22.468858   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:22.468897   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:22.468774   72724 retry.go:31] will retry after 767.220835ms: waiting for machine to come up
	I0717 01:55:23.237341   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:23.237685   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:23.237716   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:23.237633   72724 retry.go:31] will retry after 1.083873631s: waiting for machine to come up
	I0717 01:55:24.323463   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:24.323983   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:24.324011   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:24.323934   72724 retry.go:31] will retry after 1.255667305s: waiting for machine to come up
	I0717 01:55:20.597329   71522 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 01:55:20.597409   71522 ssh_runner.go:195] Run: which lz4
	I0717 01:55:20.602100   71522 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 01:55:20.606863   71522 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:55:20.606900   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 01:55:22.053002   71522 crio.go:462] duration metric: took 1.450939378s to copy over tarball
	I0717 01:55:22.053071   71522 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:55:24.356349   71522 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.303245698s)
	I0717 01:55:24.356378   71522 crio.go:469] duration metric: took 2.303353381s to extract the tarball
	I0717 01:55:24.356385   71522 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 01:55:24.402866   71522 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:55:24.446681   71522 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:55:24.446709   71522 cache_images.go:84] Images are preloaded, skipping loading
	I0717 01:55:24.446720   71522 kubeadm.go:934] updating node { 192.168.39.170 8444 v1.30.2 crio true true} ...
	I0717 01:55:24.446844   71522 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-738184 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-738184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:55:24.446931   71522 ssh_runner.go:195] Run: crio config
	I0717 01:55:24.499717   71522 cni.go:84] Creating CNI manager for ""
	I0717 01:55:24.499744   71522 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:55:24.499759   71522 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:55:24.499787   71522 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.170 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-738184 NodeName:default-k8s-diff-port-738184 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:55:24.499965   71522 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.170
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-738184"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:55:24.500039   71522 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 01:55:24.510488   71522 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:55:24.510568   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:55:24.520830   71522 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0717 01:55:24.538018   71522 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:55:24.556287   71522 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0717 01:55:24.574973   71522 ssh_runner.go:195] Run: grep 192.168.39.170	control-plane.minikube.internal$ /etc/hosts
	I0717 01:55:24.579058   71522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.170	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:55:24.591752   71522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:24.712285   71522 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:55:24.729387   71522 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184 for IP: 192.168.39.170
	I0717 01:55:24.729411   71522 certs.go:194] generating shared ca certs ...
	I0717 01:55:24.729432   71522 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:55:24.729596   71522 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:55:24.729650   71522 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:55:24.729662   71522 certs.go:256] generating profile certs ...
	I0717 01:55:24.729776   71522 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/client.key
	I0717 01:55:24.729847   71522 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/apiserver.key.44902a6f
	I0717 01:55:24.729907   71522 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/proxy-client.key
	I0717 01:55:24.730044   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:55:24.730086   71522 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:55:24.730099   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:55:24.730135   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:55:24.730183   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:55:24.730222   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:55:24.730277   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:24.731142   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:55:24.762240   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:55:24.788746   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:55:24.825379   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:55:24.853821   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 01:55:24.887105   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:55:24.910834   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:55:24.934566   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 01:55:24.959709   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:55:24.983722   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:55:25.007312   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:55:25.031576   71522 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:55:25.049348   71522 ssh_runner.go:195] Run: openssl version
	I0717 01:55:25.055410   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:55:25.066104   71522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:55:25.070616   71522 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:55:25.070675   71522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:55:25.076604   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:55:25.087284   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:55:25.098383   71522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:55:25.103262   71522 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:55:25.103331   71522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:55:25.109170   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:55:25.119940   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:55:25.130829   71522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:25.135659   71522 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:25.135734   71522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:25.141583   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:55:25.152770   71522 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:55:25.157395   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:55:25.163543   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:55:25.169580   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:55:25.175754   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:55:25.181771   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:55:25.187935   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 01:55:25.193614   71522 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-738184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-738184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:55:25.193727   71522 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:55:25.193770   71522 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:55:25.230871   71522 cri.go:89] found id: ""
	I0717 01:55:25.230954   71522 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:55:25.241336   71522 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:55:25.241357   71522 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:55:25.241410   71522 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:55:25.251637   71522 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:55:25.253030   71522 kubeconfig.go:125] found "default-k8s-diff-port-738184" server: "https://192.168.39.170:8444"
	I0717 01:55:25.255926   71522 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:55:25.265878   71522 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.170
	I0717 01:55:25.265915   71522 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:55:25.265927   71522 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:55:25.265982   71522 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:55:25.305929   71522 cri.go:89] found id: ""
	I0717 01:55:25.306015   71522 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:55:25.322581   71522 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:55:25.332334   71522 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:55:25.332356   71522 kubeadm.go:157] found existing configuration files:
	
	I0717 01:55:25.332407   71522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 01:55:25.342132   71522 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:55:25.342193   71522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:55:25.351628   71522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 01:55:25.360765   71522 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:55:25.360833   71522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:55:25.370167   71522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 01:55:25.379057   71522 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:55:25.379124   71522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:55:25.389470   71522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 01:55:25.399142   71522 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:55:25.399210   71522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:55:25.409452   71522 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
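
The grep/rm sequence above implements a simple rule: each of the four kubeconfig-style files under /etc/kubernetes must mention https://control-plane.minikube.internal:8444, and any file that does not (a missing file also makes grep exit non-zero) is deleted so the subsequent kubeadm phases can regenerate it. A small sketch of that decision in plain Go, with paths and endpoint taken from the log and error handling simplified:

package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleKubeconfigs removes any of the given files that do not reference
// the expected control-plane endpoint, mirroring the grep-then-rm loop above.
func pruneStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: drop it so
			// `kubeadm init phase kubeconfig` can write a fresh one.
			_ = os.Remove(p)
			fmt.Printf("removed stale config %s\n", p)
		}
	}
}

func main() {
	pruneStaleKubeconfigs("https://control-plane.minikube.internal:8444", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
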
	I0717 01:55:25.421509   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:25.545698   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:25.580838   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:25.581295   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:25.581322   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:25.581247   72724 retry.go:31] will retry after 1.354947672s: waiting for machine to come up
	I0717 01:55:26.937260   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:26.937746   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:26.937774   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:26.937696   72724 retry.go:31] will retry after 1.818074273s: waiting for machine to come up
	I0717 01:55:28.758015   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:28.758489   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:28.758517   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:28.758449   72724 retry.go:31] will retry after 2.782465023s: waiting for machine to come up
	I0717 01:55:26.599380   71522 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.053644988s)
	I0717 01:55:26.599416   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:26.807765   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:26.878767   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:26.965940   71522 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:55:26.966023   71522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:55:27.466587   71522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:55:27.966138   71522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:55:27.983649   71522 api_server.go:72] duration metric: took 1.017709312s to wait for apiserver process to appear ...
	I0717 01:55:27.983678   71522 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:55:27.983701   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:27.984214   71522 api_server.go:269] stopped: https://192.168.39.170:8444/healthz: Get "https://192.168.39.170:8444/healthz": dial tcp 192.168.39.170:8444: connect: connection refused
	I0717 01:55:28.483780   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:30.862416   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:55:30.862464   71522 api_server.go:103] status: https://192.168.39.170:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:55:30.862479   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:30.869667   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:55:30.869718   71522 api_server.go:103] status: https://192.168.39.170:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:55:30.983899   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:30.988670   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:55:30.988704   71522 api_server.go:103] status: https://192.168.39.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:55:31.484233   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:31.488939   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:55:31.488978   71522 api_server.go:103] status: https://192.168.39.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:55:31.984611   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:31.988738   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 200:
	ok
	I0717 01:55:31.996182   71522 api_server.go:141] control plane version: v1.30.2
	I0717 01:55:31.996207   71522 api_server.go:131] duration metric: took 4.012523131s to wait for apiserver health ...
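
The sequence above is a straight poll of the apiserver's /healthz endpoint: connection refused while the process is still coming up, 403 for the anonymous probe, 500 while the rbac/bootstrap-roles and scheduling post-start hooks are pending, and finally 200. A minimal sketch of such a poll, assuming an unauthenticated client that skips TLS verification the way an anonymous probe would:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
// Transport errors (e.g. connection refused) are treated as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the "returned 200: ok" case in the log
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.170:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
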
	I0717 01:55:31.996216   71522 cni.go:84] Creating CNI manager for ""
	I0717 01:55:31.996222   71522 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:55:31.998122   71522 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:55:31.999536   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:55:32.010501   71522 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
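
Here minikube writes a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist. The payload itself is not shown in the log, so the sketch below writes a typical bridge-plus-portmap conflist of the kind the bridge CNI plugin accepts; the subnet and field values are assumptions for illustration only, not the exact file minikube ships:

package main

import (
	"fmt"
	"os"
)

// A representative bridge CNI config; the real 1-k8s.conflist may differ in
// fields and subnet (values below are assumed for the example).
const bridgeConflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println(err)
	}
}
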
	I0717 01:55:32.030227   71522 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:55:32.039923   71522 system_pods.go:59] 9 kube-system pods found
	I0717 01:55:32.039954   71522 system_pods.go:61] "coredns-7db6d8ff4d-9w26c" [530f4d52-5fdc-47c4-8919-44430bf71e05] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:55:32.039988   71522 system_pods.go:61] "coredns-7db6d8ff4d-js7sn" [fe3951c5-d98d-4221-b71c-fc4f548b31d8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:55:32.039998   71522 system_pods.go:61] "etcd-default-k8s-diff-port-738184" [a08737dd-a140-4c63-bf0f-9d3527d49de0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 01:55:32.040003   71522 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-738184" [b24a4ca2-48f9-4603-b6e4-e3fb1ca58e40] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 01:55:32.040013   71522 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-738184" [dd62e27b-a3f1-4da6-b968-6be40743a8fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 01:55:32.040020   71522 system_pods.go:61] "kube-proxy-c4n94" [97eee4e8-4f36-412f-9064-57515ab6e932] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 01:55:32.040033   71522 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-738184" [eb87eec9-7533-4704-bd5a-7075d9d8c2f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 01:55:32.040041   71522 system_pods.go:61] "metrics-server-569cc877fc-gcjkt" [1859140e-a901-43c2-8c04-b4f8eb63e774] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:55:32.040046   71522 system_pods.go:61] "storage-provisioner" [b36904ec-ef3f-4aee-9276-fe1285e10876] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 01:55:32.040053   71522 system_pods.go:74] duration metric: took 9.802793ms to wait for pod list to return data ...
	I0717 01:55:32.040060   71522 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:55:32.043233   71522 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:55:32.043259   71522 node_conditions.go:123] node cpu capacity is 2
	I0717 01:55:32.043270   71522 node_conditions.go:105] duration metric: took 3.202451ms to run NodePressure ...
	I0717 01:55:32.043285   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:32.350948   71522 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 01:55:32.356119   71522 kubeadm.go:739] kubelet initialised
	I0717 01:55:32.356143   71522 kubeadm.go:740] duration metric: took 5.164025ms waiting for restarted kubelet to initialise ...
	I0717 01:55:32.356153   71522 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:55:32.361501   71522 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.366747   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.366770   71522 pod_ready.go:81] duration metric: took 5.246954ms for pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.366778   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.366785   71522 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.371049   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.371066   71522 pod_ready.go:81] duration metric: took 4.275157ms for pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.371073   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.371078   71522 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.375338   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.375361   71522 pod_ready.go:81] duration metric: took 4.27092ms for pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.375369   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.375379   71522 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.434545   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.434583   71522 pod_ready.go:81] duration metric: took 59.196717ms for pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.434593   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.434601   71522 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.836139   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.836178   71522 pod_ready.go:81] duration metric: took 401.568097ms for pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.836194   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.836212   71522 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c4n94" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:33.234032   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "kube-proxy-c4n94" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:33.234060   71522 pod_ready.go:81] duration metric: took 397.83937ms for pod "kube-proxy-c4n94" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:33.234071   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "kube-proxy-c4n94" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:33.234076   71522 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:33.633953   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:33.633981   71522 pod_ready.go:81] duration metric: took 399.893316ms for pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:33.633992   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:33.633998   71522 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:34.034511   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:34.034560   71522 pod_ready.go:81] duration metric: took 400.544281ms for pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:34.034574   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:34.034583   71522 pod_ready.go:38] duration metric: took 1.678420144s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
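
The pod_ready loop above allows up to 4m0s per system-critical pod but short-circuits with a "skipping" error whenever the hosting node still reports Ready=False, which is the case for every pod here right after the kubelet restart. A compact client-go sketch of the per-pod Ready check (the kubeconfig path and pod name are placeholders, not values from the test harness):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's Ready condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-9w26c", metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
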
	I0717 01:55:34.034599   71522 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 01:55:34.049235   71522 ops.go:34] apiserver oom_adj: -16
	I0717 01:55:34.049261   71522 kubeadm.go:597] duration metric: took 8.807897214s to restartPrimaryControlPlane
	I0717 01:55:34.049272   71522 kubeadm.go:394] duration metric: took 8.855664434s to StartCluster
	I0717 01:55:34.049292   71522 settings.go:142] acquiring lock: {Name:mk284e98550e98148329d8d8958a44beebf5635c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:55:34.049374   71522 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:55:34.050992   71522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:55:34.051239   71522 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.170 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:55:34.051307   71522 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 01:55:34.051409   71522 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-738184"
	I0717 01:55:34.051454   71522 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-738184"
	W0717 01:55:34.051465   71522 addons.go:243] addon storage-provisioner should already be in state true
	I0717 01:55:34.051497   71522 host.go:66] Checking if "default-k8s-diff-port-738184" exists ...
	I0717 01:55:34.051511   71522 config.go:182] Loaded profile config "default-k8s-diff-port-738184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:55:34.051498   71522 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-738184"
	I0717 01:55:34.051502   71522 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-738184"
	I0717 01:55:34.051564   71522 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-738184"
	I0717 01:55:34.051587   71522 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-738184"
	W0717 01:55:34.051612   71522 addons.go:243] addon metrics-server should already be in state true
	I0717 01:55:34.051686   71522 host.go:66] Checking if "default-k8s-diff-port-738184" exists ...
	I0717 01:55:34.051803   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.051845   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.052097   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.052151   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.052331   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.052383   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.054788   71522 out.go:177] * Verifying Kubernetes components...
	I0717 01:55:34.056293   71522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:34.067345   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39845
	I0717 01:55:34.067345   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44423
	I0717 01:55:34.067821   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.067911   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.068370   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.068390   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.068515   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.068526   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43231
	I0717 01:55:34.068535   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.068709   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.068991   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.068997   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.069278   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.069320   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.069529   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.069560   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.069611   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.069629   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.069977   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.070184   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:34.074013   71522 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-738184"
	W0717 01:55:34.074036   71522 addons.go:243] addon default-storageclass should already be in state true
	I0717 01:55:34.074062   71522 host.go:66] Checking if "default-k8s-diff-port-738184" exists ...
	I0717 01:55:34.074422   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.074463   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.085256   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43497
	I0717 01:55:34.085694   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35251
	I0717 01:55:34.085716   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.086207   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.086378   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.086402   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.086785   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.086945   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:34.086947   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.086999   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.087327   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.087624   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:34.088695   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:34.089320   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:34.090932   71522 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 01:55:34.090932   71522 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:31.543587   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:31.544073   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:31.544102   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:31.544012   72724 retry.go:31] will retry after 2.898539616s: waiting for machine to come up
	I0717 01:55:34.444315   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:34.444828   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:34.444870   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:34.444790   72724 retry.go:31] will retry after 4.252719028s: waiting for machine to come up
	I0717 01:55:34.092892   71522 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:55:34.092910   71522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 01:55:34.092926   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:34.092985   71522 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 01:55:34.092993   71522 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 01:55:34.093003   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:34.095340   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35765
	I0717 01:55:34.095840   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.096397   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.096434   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.096567   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.096819   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.096979   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.097029   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:34.097058   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.097498   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.097536   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.097881   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:34.097897   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:34.097899   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:34.097923   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.098075   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:34.098105   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:34.098286   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:34.098320   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:34.098449   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:34.098461   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:34.113190   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43997
	I0717 01:55:34.113544   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.114033   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.114059   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.114375   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.114575   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:34.116332   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:34.116544   71522 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 01:55:34.116563   71522 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 01:55:34.116583   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:34.119693   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.119992   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:34.120017   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.120457   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:34.120722   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:34.120965   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:34.121652   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:34.247964   71522 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:55:34.266521   71522 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-738184" to be "Ready" ...
	I0717 01:55:34.370296   71522 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 01:55:34.370318   71522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 01:55:34.380102   71522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:55:34.394620   71522 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 01:55:34.394639   71522 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 01:55:34.409328   71522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 01:55:34.416653   71522 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:55:34.416684   71522 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 01:55:34.445296   71522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:55:35.605781   71522 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.196419762s)
	I0717 01:55:35.605843   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.605858   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.605854   71522 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.160520147s)
	I0717 01:55:35.605778   71522 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.225640358s)
	I0717 01:55:35.605929   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.605944   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.605988   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.606007   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.606293   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.606300   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.606309   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.606315   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.606319   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.606329   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.606333   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.606349   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.606357   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.606367   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.606371   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.606398   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.606410   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.606424   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.606640   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.607811   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.607852   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.607866   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.607874   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.607892   71522 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-738184"
	I0717 01:55:35.607815   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.607878   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.607829   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.607959   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.607842   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.613691   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.613717   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.614019   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.614025   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.614081   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.615871   71522 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
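
The addon flow above copies each manifest into /etc/kubernetes/addons on the guest and then applies them with the node-local kubectl binary under KUBECONFIG=/var/lib/minikube/kubeconfig. A sketch composing that same invocation (paths and binary version taken from the log; in the real run the command goes over SSH rather than executing locally):

package main

import (
	"fmt"
	"os/exec"
)

// applyAddonManifests reproduces the `sudo KUBECONFIG=... kubectl apply -f ...`
// call from the log, applying all given manifests in one invocation.
func applyAddonManifests(manifests []string) error {
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.30.2/kubectl", "apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := applyAddonManifests([]string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	})
	if err != nil {
		fmt.Println(err)
	}
}
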
	I0717 01:55:38.700025   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.700533   71603 main.go:141] libmachine: (no-preload-391501) Found IP for machine: 192.168.61.174
	I0717 01:55:38.700555   71603 main.go:141] libmachine: (no-preload-391501) Reserving static IP address...
	I0717 01:55:38.700572   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has current primary IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.701013   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "no-preload-391501", mac: "52:54:00:e6:6b:1b", ip: "192.168.61.174"} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.701033   71603 main.go:141] libmachine: (no-preload-391501) Reserved static IP address: 192.168.61.174
	I0717 01:55:38.701049   71603 main.go:141] libmachine: (no-preload-391501) DBG | skip adding static IP to network mk-no-preload-391501 - found existing host DHCP lease matching {name: "no-preload-391501", mac: "52:54:00:e6:6b:1b", ip: "192.168.61.174"}
	I0717 01:55:38.701064   71603 main.go:141] libmachine: (no-preload-391501) DBG | Getting to WaitForSSH function...
	I0717 01:55:38.701080   71603 main.go:141] libmachine: (no-preload-391501) Waiting for SSH to be available...
	I0717 01:55:38.703218   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.703577   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.703605   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.703755   71603 main.go:141] libmachine: (no-preload-391501) DBG | Using SSH client type: external
	I0717 01:55:38.703773   71603 main.go:141] libmachine: (no-preload-391501) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa (-rw-------)
	I0717 01:55:38.703791   71603 main.go:141] libmachine: (no-preload-391501) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:55:38.703809   71603 main.go:141] libmachine: (no-preload-391501) DBG | About to run SSH command:
	I0717 01:55:38.703817   71603 main.go:141] libmachine: (no-preload-391501) DBG | exit 0
	I0717 01:55:38.827046   71603 main.go:141] libmachine: (no-preload-391501) DBG | SSH cmd err, output: <nil>: 
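
The DBG lines above show the machine driver probing SSH reachability by running `exit 0` through the system ssh client with the machine's private key and StrictHostKeyChecking disabled. An equivalent probe using golang.org/x/crypto/ssh is sketched below; this is an illustrative alternative to shelling out, not the code libmachine actually runs:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// probeSSH dials host:22 with key-based auth and runs `exit 0`, succeeding
// only once the guest's sshd is up and accepting the key.
func probeSSH(host, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", host+":22", cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}

func main() {
	err := probeSSH("192.168.61.174", "docker",
		"/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa")
	fmt.Println("ssh probe:", err)
}
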
	I0717 01:55:38.827413   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetConfigRaw
	I0717 01:55:38.828102   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetIP
	I0717 01:55:38.831229   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.831782   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.831814   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.832140   71603 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/config.json ...
	I0717 01:55:38.832347   71603 machine.go:94] provisionDockerMachine start ...
	I0717 01:55:38.832367   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:38.832574   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:38.835302   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.835710   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.835735   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.835954   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:38.836173   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:38.836345   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:38.836521   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:38.836691   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:38.836928   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:38.836947   71603 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:55:38.943173   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:55:38.943213   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetMachineName
	I0717 01:55:38.943491   71603 buildroot.go:166] provisioning hostname "no-preload-391501"
	I0717 01:55:38.943513   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetMachineName
	I0717 01:55:38.943725   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:38.946396   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.946872   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.946900   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.946980   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:38.947164   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:38.947339   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:38.947518   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:38.947695   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:38.947849   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:38.947869   71603 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-391501 && echo "no-preload-391501" | sudo tee /etc/hostname
	I0717 01:55:39.070382   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-391501
	
	I0717 01:55:39.070429   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.073539   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.073904   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.073941   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.074203   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:39.074426   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.074624   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.074880   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:39.075132   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:39.075348   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:39.075373   71603 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-391501' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-391501/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-391501' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:55:39.195604   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:55:39.195634   71603 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:55:39.195649   71603 buildroot.go:174] setting up certificates
	I0717 01:55:39.195656   71603 provision.go:84] configureAuth start
	I0717 01:55:39.195665   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetMachineName
	I0717 01:55:39.195952   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetIP
	I0717 01:55:39.198409   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.198792   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.198822   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.198996   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.201509   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.201870   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.201901   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.202078   71603 provision.go:143] copyHostCerts
	I0717 01:55:39.202153   71603 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:55:39.202166   71603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:55:39.202221   71603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:55:39.202313   71603 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:55:39.202320   71603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:55:39.202339   71603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:55:39.202387   71603 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:55:39.202394   71603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:55:39.202410   71603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:55:39.202456   71603 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.no-preload-391501 san=[127.0.0.1 192.168.61.174 localhost minikube no-preload-391501]
	I0717 01:55:39.550166   71603 provision.go:177] copyRemoteCerts
	I0717 01:55:39.550224   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:55:39.550249   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.552616   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.552990   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.553020   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.553135   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:39.553298   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.553460   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:39.553559   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 01:55:39.638467   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:55:39.664166   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 01:55:39.689416   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 01:55:39.714130   71603 provision.go:87] duration metric: took 518.463378ms to configureAuth
	I0717 01:55:39.714159   71603 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:55:39.714362   71603 config.go:182] Loaded profile config "no-preload-391501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:55:39.714440   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.717269   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.717694   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.717722   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.717880   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:39.718080   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.718240   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.718393   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:39.718621   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:39.718793   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:39.718809   71603 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:55:39.982066   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:55:39.982095   71603 machine.go:97] duration metric: took 1.149734372s to provisionDockerMachine
	I0717 01:55:39.982110   71603 start.go:293] postStartSetup for "no-preload-391501" (driver="kvm2")
	I0717 01:55:39.982127   71603 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:55:39.982147   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:39.982429   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:55:39.982445   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.984935   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.985232   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.985269   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.985372   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:39.985553   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.985793   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:39.986010   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 01:55:40.074439   71603 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:55:40.079515   71603 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:55:40.079541   71603 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:55:40.079617   71603 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:55:40.079708   71603 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:55:40.079831   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:55:40.090783   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:40.121212   71603 start.go:296] duration metric: took 139.087761ms for postStartSetup
	I0717 01:55:40.121257   71603 fix.go:56] duration metric: took 21.089468917s for fixHost
	I0717 01:55:40.121281   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:40.124208   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.124517   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:40.124545   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.124753   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:40.124940   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:40.125119   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:40.125269   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:40.125430   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:40.125626   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:40.125638   71603 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 01:55:40.239538   71929 start.go:364] duration metric: took 3m52.741834986s to acquireMachinesLock for "old-k8s-version-901761"
	I0717 01:55:40.239610   71929 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:55:40.239618   71929 fix.go:54] fixHost starting: 
	I0717 01:55:40.240021   71929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:40.240054   71929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:40.257464   71929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39993
	I0717 01:55:40.257866   71929 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:40.258287   71929 main.go:141] libmachine: Using API Version  1
	I0717 01:55:40.258311   71929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:40.258672   71929 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:40.258871   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:40.259041   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetState
	I0717 01:55:40.260529   71929 fix.go:112] recreateIfNeeded on old-k8s-version-901761: state=Stopped err=<nil>
	I0717 01:55:40.260568   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	W0717 01:55:40.260721   71929 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:55:40.262590   71929 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-901761" ...
	I0717 01:55:35.617123   71522 addons.go:510] duration metric: took 1.565817066s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0717 01:55:36.270109   71522 node_ready.go:53] node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:38.270489   71522 node_ready.go:53] node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:40.270966   71522 node_ready.go:53] node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:40.239384   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181340.205508074
	
	I0717 01:55:40.239409   71603 fix.go:216] guest clock: 1721181340.205508074
	I0717 01:55:40.239419   71603 fix.go:229] Guest: 2024-07-17 01:55:40.205508074 +0000 UTC Remote: 2024-07-17 01:55:40.121261572 +0000 UTC m=+269.976034747 (delta=84.246502ms)
	I0717 01:55:40.239445   71603 fix.go:200] guest clock delta is within tolerance: 84.246502ms
	I0717 01:55:40.239453   71603 start.go:83] releasing machines lock for "no-preload-391501", held for 21.207695176s
	I0717 01:55:40.239486   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:40.239768   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetIP
	I0717 01:55:40.242534   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.242923   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:40.242956   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.243159   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:40.243649   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:40.243826   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:40.243924   71603 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:55:40.243975   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:40.244045   71603 ssh_runner.go:195] Run: cat /version.json
	I0717 01:55:40.244071   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:40.246599   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.246958   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:40.246984   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.247089   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:40.247153   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.247254   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:40.247401   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:40.247486   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:40.247510   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.247579   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 01:55:40.247669   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:40.247861   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:40.248031   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:40.248169   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 01:55:40.328497   71603 ssh_runner.go:195] Run: systemctl --version
	I0717 01:55:40.350092   71603 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:55:40.497644   71603 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:55:40.504094   71603 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:55:40.504164   71603 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:55:40.526752   71603 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:55:40.526777   71603 start.go:495] detecting cgroup driver to use...
	I0717 01:55:40.526842   71603 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:55:40.543537   71603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:55:40.557551   71603 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:55:40.557606   71603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:55:40.571755   71603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:55:40.585548   71603 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:55:40.702991   71603 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:55:40.849192   71603 docker.go:233] disabling docker service ...
	I0717 01:55:40.849276   71603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:55:40.864697   71603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:55:40.877940   71603 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:55:41.043588   71603 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:55:41.175359   71603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:55:41.191170   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:55:41.212440   71603 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 01:55:41.212508   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.224335   71603 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:55:41.224411   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.235721   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.247575   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.260018   71603 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:55:41.271526   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.285999   71603 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.307653   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.319272   71603 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:55:41.330544   71603 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:55:41.330637   71603 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:55:41.346698   71603 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:55:41.361983   71603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:41.490052   71603 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:55:41.639509   71603 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:55:41.639626   71603 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:55:41.646714   71603 start.go:563] Will wait 60s for crictl version
	I0717 01:55:41.646793   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:41.650900   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:55:41.688112   71603 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:55:41.688188   71603 ssh_runner.go:195] Run: crio --version
	I0717 01:55:41.717335   71603 ssh_runner.go:195] Run: crio --version
	I0717 01:55:41.750767   71603 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0717 01:55:40.263857   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .Start
	I0717 01:55:40.264019   71929 main.go:141] libmachine: (old-k8s-version-901761) Ensuring networks are active...
	I0717 01:55:40.264709   71929 main.go:141] libmachine: (old-k8s-version-901761) Ensuring network default is active
	I0717 01:55:40.265165   71929 main.go:141] libmachine: (old-k8s-version-901761) Ensuring network mk-old-k8s-version-901761 is active
	I0717 01:55:40.265581   71929 main.go:141] libmachine: (old-k8s-version-901761) Getting domain xml...
	I0717 01:55:40.266340   71929 main.go:141] libmachine: (old-k8s-version-901761) Creating domain...
	I0717 01:55:41.562582   71929 main.go:141] libmachine: (old-k8s-version-901761) Waiting to get IP...
	I0717 01:55:41.563329   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:41.563802   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:41.563890   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:41.563781   72905 retry.go:31] will retry after 216.264296ms: waiting for machine to come up
	I0717 01:55:41.781168   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:41.781662   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:41.781690   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:41.781629   72905 retry.go:31] will retry after 275.269814ms: waiting for machine to come up
	I0717 01:55:42.058127   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:42.058525   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:42.058564   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:42.058498   72905 retry.go:31] will retry after 348.024497ms: waiting for machine to come up
	I0717 01:55:41.752123   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetIP
	I0717 01:55:41.755114   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:41.755571   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:41.755602   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:41.755863   71603 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 01:55:41.760869   71603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:55:41.775414   71603 kubeadm.go:883] updating cluster {Name:no-preload-391501 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-391501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:55:41.775563   71603 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 01:55:41.775609   71603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:55:41.815115   71603 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0717 01:55:41.815141   71603 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 01:55:41.815207   71603 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:41.815241   71603 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:41.815279   71603 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:41.815290   71603 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:41.815207   71603 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:41.815304   71603 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0717 01:55:41.815239   71603 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:41.815258   71603 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:41.817894   71603 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:41.817939   71603 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:41.817892   71603 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:41.817888   71603 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0717 01:55:41.818033   71603 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:41.817891   71603 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:41.817900   71603 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:41.817978   71603 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:42.014545   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0717 01:55:42.030064   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:42.034517   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:42.123584   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:42.130122   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:42.134935   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:42.136170   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:42.173650   71603 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0717 01:55:42.173707   71603 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:42.173718   71603 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0717 01:55:42.173755   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.173767   71603 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:42.173820   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.219689   71603 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0717 01:55:42.219745   71603 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:42.219792   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.240802   71603 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0717 01:55:42.240847   71603 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:42.240907   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.251152   71603 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0717 01:55:42.251189   71603 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:42.251225   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.254790   71603 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0717 01:55:42.254849   71603 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:42.254886   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:42.254895   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.254916   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:42.254951   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:42.255006   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:42.257984   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:42.267440   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:42.395407   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0717 01:55:42.395471   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0717 01:55:42.395513   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0717 01:55:42.395522   71603 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:55:42.395558   71603 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:55:42.395582   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0717 01:55:42.395592   71603 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:55:42.395663   71603 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:55:42.397740   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0717 01:55:42.397813   71603 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:55:42.420577   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0717 01:55:42.420602   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0717 01:55:42.420619   71603 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:55:42.420640   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0717 01:55:42.420662   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:55:42.420676   71603 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:55:42.420705   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0717 01:55:42.420711   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0717 01:55:42.420738   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0717 01:55:43.737662   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:44.581683   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.160996964s)
	I0717 01:55:44.581730   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0717 01:55:44.581753   71603 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:55:44.581754   71603 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.161058602s)
	I0717 01:55:44.581788   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0717 01:55:44.581810   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:55:44.581858   71603 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 01:55:44.581900   71603 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:44.581928   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:41.270830   71522 node_ready.go:49] node "default-k8s-diff-port-738184" has status "Ready":"True"
	I0717 01:55:41.270853   71522 node_ready.go:38] duration metric: took 7.004304151s for node "default-k8s-diff-port-738184" to be "Ready" ...
	I0717 01:55:41.270868   71522 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:55:41.278587   71522 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.285210   71522 pod_ready.go:92] pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:41.285236   71522 pod_ready.go:81] duration metric: took 6.623347ms for pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.285250   71522 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.291110   71522 pod_ready.go:92] pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:41.291133   71522 pod_ready.go:81] duration metric: took 5.874809ms for pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.291145   71522 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.297614   71522 pod_ready.go:92] pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:41.297636   71522 pod_ready.go:81] duration metric: took 6.483783ms for pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.297645   71522 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.305307   71522 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:42.305335   71522 pod_ready.go:81] duration metric: took 1.007681338s for pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.305349   71522 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.472190   71522 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:42.472222   71522 pod_ready.go:81] duration metric: took 166.864153ms for pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.472236   71522 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c4n94" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.871756   71522 pod_ready.go:92] pod "kube-proxy-c4n94" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:42.871780   71522 pod_ready.go:81] duration metric: took 399.536375ms for pod "kube-proxy-c4n94" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.871789   71522 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:43.272858   71522 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:43.272895   71522 pod_ready.go:81] duration metric: took 401.098971ms for pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:43.272913   71522 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:45.281019   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:42.407813   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:42.408311   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:42.408346   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:42.408218   72905 retry.go:31] will retry after 388.717436ms: waiting for machine to come up
	I0717 01:55:42.798810   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:42.799378   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:42.799411   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:42.799323   72905 retry.go:31] will retry after 661.391346ms: waiting for machine to come up
	I0717 01:55:43.462189   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:43.462654   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:43.462686   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:43.462603   72905 retry.go:31] will retry after 636.142497ms: waiting for machine to come up
	I0717 01:55:44.100416   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:44.100852   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:44.100874   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:44.100808   72905 retry.go:31] will retry after 781.652918ms: waiting for machine to come up
	I0717 01:55:44.883650   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:44.884137   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:44.884170   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:44.884088   72905 retry.go:31] will retry after 1.238608293s: waiting for machine to come up
	I0717 01:55:46.124419   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:46.124911   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:46.124942   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:46.124854   72905 retry.go:31] will retry after 1.169011508s: waiting for machine to come up
	I0717 01:55:47.295202   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:47.295679   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:47.295715   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:47.295632   72905 retry.go:31] will retry after 1.723987128s: waiting for machine to come up
	I0717 01:55:47.004929   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.423090292s)
	I0717 01:55:47.004968   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0717 01:55:47.004990   71603 ssh_runner.go:235] Completed: which crictl: (2.423045276s)
	I0717 01:55:47.005021   71603 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:55:47.005053   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:47.005067   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:55:49.097703   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.092610651s)
	I0717 01:55:49.097747   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0717 01:55:49.097776   71603 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:55:49.097836   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:55:49.097776   71603 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.092700925s)
	I0717 01:55:49.097953   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 01:55:49.098050   71603 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:55:47.781233   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:49.786039   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:49.020883   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:49.021363   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:49.021396   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:49.021279   72905 retry.go:31] will retry after 2.098481296s: waiting for machine to come up
	I0717 01:55:51.121693   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:51.122253   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:51.122282   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:51.122192   72905 retry.go:31] will retry after 2.624839429s: waiting for machine to come up
	I0717 01:55:50.560197   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.462322087s)
	I0717 01:55:50.560292   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0717 01:55:50.560323   71603 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:55:50.560252   71603 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.462175943s)
	I0717 01:55:50.560373   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:55:50.560388   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 01:55:53.630471   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.070071936s)
	I0717 01:55:53.630509   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0717 01:55:53.630529   71603 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:55:53.630604   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:55:52.280585   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:54.779606   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:53.748796   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:53.749348   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:53.749390   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:53.749298   72905 retry.go:31] will retry after 3.47930356s: waiting for machine to come up
	I0717 01:55:57.231901   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.232407   71929 main.go:141] libmachine: (old-k8s-version-901761) Found IP for machine: 192.168.50.44
	I0717 01:55:57.232437   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has current primary IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.232449   71929 main.go:141] libmachine: (old-k8s-version-901761) Reserving static IP address...
	I0717 01:55:57.232880   71929 main.go:141] libmachine: (old-k8s-version-901761) Reserved static IP address: 192.168.50.44
	I0717 01:55:57.232928   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "old-k8s-version-901761", mac: "52:54:00:8f:84:01", ip: "192.168.50.44"} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.232937   71929 main.go:141] libmachine: (old-k8s-version-901761) Waiting for SSH to be available...
	I0717 01:55:57.232952   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | skip adding static IP to network mk-old-k8s-version-901761 - found existing host DHCP lease matching {name: "old-k8s-version-901761", mac: "52:54:00:8f:84:01", ip: "192.168.50.44"}
	I0717 01:55:57.232971   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | Getting to WaitForSSH function...
	I0717 01:55:57.235007   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.235208   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.235242   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.235421   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | Using SSH client type: external
	I0717 01:55:57.235461   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa (-rw-------)
	I0717 01:55:57.235502   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.44 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:55:57.235516   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | About to run SSH command:
	I0717 01:55:57.235530   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | exit 0
	I0717 01:55:57.362619   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | SSH cmd err, output: <nil>: 
	I0717 01:55:57.363106   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetConfigRaw
	I0717 01:55:57.363760   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:55:57.366213   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.366636   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.366666   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.366958   71929 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/config.json ...
	I0717 01:55:57.367165   71929 machine.go:94] provisionDockerMachine start ...
	I0717 01:55:57.367188   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:57.367392   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.370017   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.370354   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.370371   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.370577   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:57.370765   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.370935   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.371084   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:57.371325   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:57.371506   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:57.371518   71929 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:55:58.531714   71146 start.go:364] duration metric: took 53.154741813s to acquireMachinesLock for "embed-certs-940222"
	I0717 01:55:58.531773   71146 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:55:58.531784   71146 fix.go:54] fixHost starting: 
	I0717 01:55:58.532189   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:58.532237   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:58.549026   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43799
	I0717 01:55:58.549491   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:58.550001   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:55:58.550025   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:58.550363   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:58.550536   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:55:58.550707   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:55:58.552236   71146 fix.go:112] recreateIfNeeded on embed-certs-940222: state=Stopped err=<nil>
	I0717 01:55:58.552259   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	W0717 01:55:58.552397   71146 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:55:58.554487   71146 out.go:177] * Restarting existing kvm2 VM for "embed-certs-940222" ...
	I0717 01:55:57.478893   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:55:57.478921   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetMachineName
	I0717 01:55:57.479123   71929 buildroot.go:166] provisioning hostname "old-k8s-version-901761"
	I0717 01:55:57.479142   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetMachineName
	I0717 01:55:57.479330   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.482163   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.482531   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.482579   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.482739   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:57.482937   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.483111   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.483264   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:57.483454   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:57.483632   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:57.483648   71929 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-901761 && echo "old-k8s-version-901761" | sudo tee /etc/hostname
	I0717 01:55:57.613409   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-901761
	
	I0717 01:55:57.613440   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.616228   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.616614   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.616655   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.616860   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:57.617040   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.617222   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.617383   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:57.617574   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:57.617778   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:57.617794   71929 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-901761' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-901761/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-901761' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:55:57.737648   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:55:57.737683   71929 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:55:57.737703   71929 buildroot.go:174] setting up certificates
	I0717 01:55:57.737711   71929 provision.go:84] configureAuth start
	I0717 01:55:57.737721   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetMachineName
	I0717 01:55:57.738028   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:55:57.741089   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.741532   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.741556   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.741741   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.744444   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.744917   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.744947   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.745111   71929 provision.go:143] copyHostCerts
	I0717 01:55:57.745185   71929 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:55:57.745202   71929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:55:57.745273   71929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:55:57.745393   71929 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:55:57.745405   71929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:55:57.745437   71929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:55:57.745517   71929 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:55:57.745527   71929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:55:57.745545   71929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:55:57.745602   71929 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-901761 san=[127.0.0.1 192.168.50.44 localhost minikube old-k8s-version-901761]
	I0717 01:55:57.830872   71929 provision.go:177] copyRemoteCerts
	I0717 01:55:57.830939   71929 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:55:57.830972   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.833463   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.833741   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.833777   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.833887   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:57.834083   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.834250   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:57.834403   71929 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:55:57.918346   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 01:55:57.954250   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:55:57.979770   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0717 01:55:58.005161   71929 provision.go:87] duration metric: took 267.436975ms to configureAuth
	I0717 01:55:58.005193   71929 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:55:58.005412   71929 config.go:182] Loaded profile config "old-k8s-version-901761": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 01:55:58.005493   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.008255   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.008626   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.008663   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.008833   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.009006   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.009170   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.009298   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.009464   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:58.009616   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:58.009639   71929 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:55:58.281081   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:55:58.281112   71929 machine.go:97] duration metric: took 913.933405ms to provisionDockerMachine
	I0717 01:55:58.281121   71929 start.go:293] postStartSetup for "old-k8s-version-901761" (driver="kvm2")
	I0717 01:55:58.281130   71929 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:55:58.281144   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.281497   71929 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:55:58.281533   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.284465   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.284812   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.284840   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.285023   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.285207   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.285441   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.285650   71929 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:55:58.377149   71929 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:55:58.381709   71929 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:55:58.381731   71929 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:55:58.381798   71929 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:55:58.381887   71929 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:55:58.381972   71929 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:55:58.392916   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:58.420677   71929 start.go:296] duration metric: took 139.542186ms for postStartSetup
	I0717 01:55:58.420721   71929 fix.go:56] duration metric: took 18.181102939s for fixHost
	I0717 01:55:58.420745   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.423582   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.423961   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.423989   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.424169   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.424372   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.424557   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.424693   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.424859   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:58.425040   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:58.425053   71929 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 01:55:58.531563   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181358.508735025
	
	I0717 01:55:58.531585   71929 fix.go:216] guest clock: 1721181358.508735025
	I0717 01:55:58.531594   71929 fix.go:229] Guest: 2024-07-17 01:55:58.508735025 +0000 UTC Remote: 2024-07-17 01:55:58.420726806 +0000 UTC m=+251.057483904 (delta=88.008219ms)
	I0717 01:55:58.531617   71929 fix.go:200] guest clock delta is within tolerance: 88.008219ms
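The fix.go lines above read the guest clock over SSH (date +%s.%N), compare it against the host clock, and skip a resync because the ~88ms delta is inside the allowed tolerance. A rough Go sketch of that comparison, with an assumed 2s tolerance and a hypothetical helper name (not minikube's exact code):

    package main

    import (
    	"fmt"
    	"math"
    	"time"
    )

    // clockDeltaWithinTolerance converts a guest epoch reading (seconds, as
    // produced by `date +%s.%N`) to a time.Time, subtracts the host clock, and
    // reports whether the difference stays within tolerance.
    func clockDeltaWithinTolerance(guestEpoch float64, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	guest := time.Unix(0, int64(guestEpoch*float64(time.Second)))
    	delta := guest.Sub(host)
    	return delta, math.Abs(float64(delta)) <= float64(tolerance)
    }

    func main() {
    	// Values echo the log above: guest 1721181358.508735025 vs host ~.420726806.
    	host := time.Unix(0, int64(1721181358.420726806*float64(time.Second)))
    	delta, ok := clockDeltaWithinTolerance(1721181358.508735025, host, 2*time.Second)
    	fmt.Printf("guest-host delta=%v within tolerance=%v\n", delta, ok)
    }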
	I0717 01:55:58.531624   71929 start.go:83] releasing machines lock for "old-k8s-version-901761", held for 18.292040224s
	I0717 01:55:58.531655   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.531981   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:55:58.534476   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.534967   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.534996   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.535258   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.535802   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.535990   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.536105   71929 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:55:58.536183   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.536244   71929 ssh_runner.go:195] Run: cat /version.json
	I0717 01:55:58.536275   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.539139   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.539401   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.539534   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.539560   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.539768   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.539815   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.539845   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.539968   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.540000   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.540116   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.540142   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.540243   71929 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:55:58.540332   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.540468   71929 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:55:58.628291   71929 ssh_runner.go:195] Run: systemctl --version
	I0717 01:55:58.656964   71929 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:55:58.806516   71929 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:55:58.815051   71929 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:55:58.815113   71929 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:55:58.838575   71929 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:55:58.838596   71929 start.go:495] detecting cgroup driver to use...
	I0717 01:55:58.838662   71929 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:55:58.855728   71929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:55:58.875221   71929 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:55:58.875285   71929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:55:58.889781   71929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:55:58.903832   71929 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:55:59.026815   71929 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:55:59.173879   71929 docker.go:233] disabling docker service ...
	I0717 01:55:59.173964   71929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:55:59.192906   71929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:55:59.208262   71929 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:55:59.368178   71929 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:55:59.500335   71929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:55:59.514795   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:55:59.535553   71929 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 01:55:59.535631   71929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:59.548304   71929 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:55:59.548376   71929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:59.563066   71929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:59.578452   71929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:59.593447   71929 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:55:59.606239   71929 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:55:59.617051   71929 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:55:59.617118   71929 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:55:59.632601   71929 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:55:59.645034   71929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:59.812343   71929 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:55:59.969366   71929 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:55:59.969444   71929 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:55:59.974286   71929 start.go:563] Will wait 60s for crictl version
	I0717 01:55:59.974335   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:55:59.978280   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:56:00.020399   71929 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:56:00.020489   71929 ssh_runner.go:195] Run: crio --version
	I0717 01:56:00.049811   71929 ssh_runner.go:195] Run: crio --version
	I0717 01:56:00.081952   71929 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
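After restarting crio, the log above waits up to 60s for the CRI socket to appear and another 60s for crictl to report a version. A minimal Go sketch of polling for the socket path with a deadline; the helper name and poll interval are assumptions, not minikube's actual start.go logic:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls for path until it exists or the deadline passes,
    // mirroring "Will wait 60s for socket path /var/run/crio/crio.sock".
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }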
	I0717 01:55:55.703286   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.07265838s)
	I0717 01:55:55.703312   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0717 01:55:55.703342   71603 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:55:55.703396   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:55:56.651520   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 01:55:56.651563   71603 cache_images.go:123] Successfully loaded all cached images
	I0717 01:55:56.651569   71603 cache_images.go:92] duration metric: took 14.83641531s to LoadCachedImages
	I0717 01:55:56.651581   71603 kubeadm.go:934] updating node { 192.168.61.174 8443 v1.31.0-beta.0 crio true true} ...
	I0717 01:55:56.651702   71603 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-391501 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-391501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:55:56.651770   71603 ssh_runner.go:195] Run: crio config
	I0717 01:55:56.700129   71603 cni.go:84] Creating CNI manager for ""
	I0717 01:55:56.700152   71603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:55:56.700162   71603 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:55:56.700189   71603 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.174 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-391501 NodeName:no-preload-391501 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:55:56.700315   71603 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-391501"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:55:56.700372   71603 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 01:55:56.711859   71603 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:55:56.711936   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:55:56.721994   71603 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0717 01:55:56.738335   71603 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 01:55:56.755198   71603 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0717 01:55:56.772467   71603 ssh_runner.go:195] Run: grep 192.168.61.174	control-plane.minikube.internal$ /etc/hosts
	I0717 01:55:56.777580   71603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:55:56.792767   71603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:56.913075   71603 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:55:56.930746   71603 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501 for IP: 192.168.61.174
	I0717 01:55:56.930768   71603 certs.go:194] generating shared ca certs ...
	I0717 01:55:56.930783   71603 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:55:56.930929   71603 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:55:56.930968   71603 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:55:56.930978   71603 certs.go:256] generating profile certs ...
	I0717 01:55:56.931050   71603 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/client.key
	I0717 01:55:56.931112   71603 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/apiserver.key.a30174c9
	I0717 01:55:56.931153   71603 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/proxy-client.key
	I0717 01:55:56.931292   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:55:56.931331   71603 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:55:56.931344   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:55:56.931373   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:55:56.931404   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:55:56.931434   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:55:56.931478   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:56.932180   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:55:56.971111   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:55:57.016791   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:55:57.049766   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:55:57.078139   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 01:55:57.109781   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 01:55:57.137912   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:55:57.165141   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 01:55:57.190210   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:55:57.214366   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:55:57.239518   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:55:57.265505   71603 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:55:57.283773   71603 ssh_runner.go:195] Run: openssl version
	I0717 01:55:57.289846   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:55:57.300434   71603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:57.305370   71603 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:57.305456   71603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:57.311765   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:55:57.322769   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:55:57.334122   71603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:55:57.338774   71603 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:55:57.338823   71603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:55:57.344721   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:55:57.356476   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:55:57.368672   71603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:55:57.374055   71603 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:55:57.374107   71603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:55:57.380256   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
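The openssl/ln pairs above install each CA under /etc/ssl/certs using the <subject-hash>.0 symlink name that OpenSSL uses to look up trusted certificates. A small Go sketch of that step, shelling out to openssl the way the log does; the function name and paths in main are illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCACert computes the OpenSSL subject hash of certPath and symlinks the
    // certificate into certsDir as "<hash>.0", the lookup name OpenSSL expects.
    func linkCACert(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // replace any stale link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Println(err)
    	}
    }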
	I0717 01:55:57.392428   71603 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:55:57.397593   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:55:57.404378   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:55:57.411094   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:55:57.418536   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:55:57.425312   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:55:57.431841   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 01:55:57.438615   71603 kubeadm.go:392] StartCluster: {Name:no-preload-391501 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-391501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:55:57.438696   71603 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:55:57.438782   71603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:55:57.482932   71603 cri.go:89] found id: ""
	I0717 01:55:57.482993   71603 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:55:57.493813   71603 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:55:57.493832   71603 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:55:57.493872   71603 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:55:57.504757   71603 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:55:57.505655   71603 kubeconfig.go:125] found "no-preload-391501" server: "https://192.168.61.174:8443"
	I0717 01:55:57.507634   71603 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:55:57.517990   71603 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.174
	I0717 01:55:57.518025   71603 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:55:57.518038   71603 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:55:57.518090   71603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:55:57.557504   71603 cri.go:89] found id: ""
	I0717 01:55:57.557588   71603 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:55:57.574074   71603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:55:57.583703   71603 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:55:57.583724   71603 kubeadm.go:157] found existing configuration files:
	
	I0717 01:55:57.583768   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:55:57.593924   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:55:57.593992   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:55:57.606945   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:55:57.616803   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:55:57.616847   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:55:57.627215   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:55:57.637121   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:55:57.637179   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:55:57.646291   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:55:57.655314   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:55:57.655372   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
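	The four grep/rm pairs above are one pattern applied per file: keep an existing kubeconfig only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it so the following kubeadm init phase kubeconfig step regenerates it. Compressed into a loop (endpoint and file names copied from the log; the loop itself is only an illustration):
	
	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    # grep exits non-zero when the endpoint is absent or the file is missing;
	    # either way the stale config is removed and kubeadm will recreate it.
	    sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done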
	I0717 01:55:57.666994   71603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:55:57.677582   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:57.798148   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:59.316598   71603 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.518419797s)
	I0717 01:55:59.316629   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:59.581666   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:59.675003   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:59.748682   71603 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:55:59.748771   71603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:55:56.781465   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:59.280394   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:00.083384   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:56:00.086085   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:56:00.086454   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:56:00.086494   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:56:00.086710   71929 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 01:56:00.091322   71929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:56:00.104102   71929 kubeadm.go:883] updating cluster {Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:56:00.104237   71929 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 01:56:00.104309   71929 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:56:00.152445   71929 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 01:56:00.152537   71929 ssh_runner.go:195] Run: which lz4
	I0717 01:56:00.156760   71929 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 01:56:00.161123   71929 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:56:00.161149   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 01:56:02.031804   71929 crio.go:462] duration metric: took 1.875087246s to copy over tarball
	I0717 01:56:02.031904   71929 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
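	The preload restore above is a check-then-unpack scheme: stat the tarball on the guest, copy it over if it is missing, then extract it into /var while preserving extended attributes so file capabilities on the cached binaries survive. Roughly, with the tarball name taken from the log and the copy step shown as plain scp for illustration (minikube actually streams it over its own SSH runner):
	
	tarball=/preloaded.tar.lz4
	if ! stat "$tarball" >/dev/null 2>&1; then
	    # "guest" is a placeholder host; the source file name matches the cached preload above.
	    scp preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 root@guest:"$tarball"
	fi
	# --xattrs/--xattrs-include keep security.capability; -I lz4 decompresses on the fly.
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf "$tarball"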
	I0717 01:55:58.556014   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Start
	I0717 01:55:58.556171   71146 main.go:141] libmachine: (embed-certs-940222) Ensuring networks are active...
	I0717 01:55:58.556866   71146 main.go:141] libmachine: (embed-certs-940222) Ensuring network default is active
	I0717 01:55:58.557237   71146 main.go:141] libmachine: (embed-certs-940222) Ensuring network mk-embed-certs-940222 is active
	I0717 01:55:58.557686   71146 main.go:141] libmachine: (embed-certs-940222) Getting domain xml...
	I0717 01:55:58.558375   71146 main.go:141] libmachine: (embed-certs-940222) Creating domain...
	I0717 01:55:59.917419   71146 main.go:141] libmachine: (embed-certs-940222) Waiting to get IP...
	I0717 01:55:59.918379   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:55:59.918849   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:55:59.918908   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:55:59.918833   73097 retry.go:31] will retry after 248.560075ms: waiting for machine to come up
	I0717 01:56:00.169337   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:00.169877   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:00.169898   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:00.169837   73097 retry.go:31] will retry after 380.159418ms: waiting for machine to come up
	I0717 01:56:00.551472   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:00.552033   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:00.552076   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:00.551987   73097 retry.go:31] will retry after 439.990107ms: waiting for machine to come up
	I0717 01:56:00.993776   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:00.994337   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:00.994351   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:00.994319   73097 retry.go:31] will retry after 415.462036ms: waiting for machine to come up
	I0717 01:56:01.412114   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:01.412508   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:01.412535   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:01.412484   73097 retry.go:31] will retry after 660.852153ms: waiting for machine to come up
	I0717 01:56:02.075095   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:02.075519   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:02.075541   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:02.075498   73097 retry.go:31] will retry after 788.200532ms: waiting for machine to come up
	I0717 01:56:00.249300   71603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:00.749610   71603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:00.823943   71603 api_server.go:72] duration metric: took 1.075254107s to wait for apiserver process to appear ...
	I0717 01:56:00.823980   71603 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:56:00.824006   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:00.825286   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": dial tcp 192.168.61.174:8443: connect: connection refused
	I0717 01:56:01.325032   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
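	The healthz wait above simply polls the apiserver endpoint until it answers; connection-refused and timeout errors are expected while the static pod is still starting. A minimal loop in the same spirit (endpoint copied from the log; the curl-based loop is an illustration, not minikube's actual implementation):
	
	endpoint="https://192.168.61.174:8443/healthz"
	until curl -ksf --max-time 5 "$endpoint" >/dev/null; do
	    echo "apiserver not ready yet, retrying..." >&2
	    sleep 2
	done
	echo "apiserver healthz OK"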
	I0717 01:56:01.281044   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:03.281329   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:05.092637   71929 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.060698331s)
	I0717 01:56:05.092674   71929 crio.go:469] duration metric: took 3.060839356s to extract the tarball
	I0717 01:56:05.092682   71929 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 01:56:05.135461   71929 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:56:05.170789   71929 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 01:56:05.170814   71929 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 01:56:05.170853   71929 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:56:05.170884   71929 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.170908   71929 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.170961   71929 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 01:56:05.171077   71929 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.171126   71929 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.171138   71929 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.171462   71929 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.172182   71929 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 01:56:05.172224   71929 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.172251   71929 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:56:05.172296   71929 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.172362   71929 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.172415   71929 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.172449   71929 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.172251   71929 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.372794   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.415131   71929 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 01:56:05.415181   71929 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.415231   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.419179   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.446530   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 01:56:05.452583   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 01:56:05.485692   71929 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 01:56:05.485734   71929 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 01:56:05.485780   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.486154   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.487346   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.489408   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.490486   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 01:56:05.494929   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.499420   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.593505   71929 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 01:56:05.593587   71929 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.593638   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.632564   71929 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 01:56:05.632615   71929 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.632667   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.657745   71929 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 01:56:05.657792   71929 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.657852   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.657863   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 01:56:05.657908   71929 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 01:56:05.657943   71929 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.657958   71929 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 01:56:05.657976   71929 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.657980   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.658004   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.658037   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.658077   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.671679   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.671708   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.736572   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 01:56:05.736599   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 01:56:05.736671   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.758178   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 01:56:05.758210   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 01:56:05.787948   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 01:56:06.882199   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:56:07.025117   71929 cache_images.go:92] duration metric: took 1.854284265s to LoadCachedImages
	W0717 01:56:07.025227   71929 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
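	Each "needs transfer" decision above comes from the same check: ask the runtime for the image's ID with podman image inspect, compare it to the expected ID, and if it differs (or the image is absent) remove it with crictl rmi and fall back to the local image cache. A compact sketch for one image, with the name and expected ID taken from the log:
	
	img=registry.k8s.io/kube-proxy:v1.20.0
	want=10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc
	have=$(sudo podman image inspect --format '{{.Id}}' "$img" 2>/dev/null || true)
	if [ "$have" != "$want" ]; then
	    # Wrong or missing image: drop it; the actual reload from the cache directory
	    # under .minikube/cache/images/ is handled by minikube itself.
	    sudo crictl rmi "$img" 2>/dev/null || true
	fi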
	I0717 01:56:07.025245   71929 kubeadm.go:934] updating node { 192.168.50.44 8443 v1.20.0 crio true true} ...
	I0717 01:56:07.025378   71929 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-901761 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.44
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
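	The kubelet unit fragment above relies on the usual systemd drop-in trick: the first, empty ExecStart= clears the command inherited from the base kubelet.service, and the second ExecStart= installs the full command line. A hedged sketch of writing such a drop-in by hand (the path matches the 10-kubeadm.conf installed later in this log; the heredoc wrapper and the shortened flag list are illustrative):
	
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<-'EOF'
	[Unit]
	Wants=crio.service
	
	[Service]
	# Empty ExecStart= resets the value from the base unit before the real one is set.
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.44
	EOF
	sudo systemctl daemon-reload
	sudo systemctl restart kubelet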
	I0717 01:56:07.025465   71929 ssh_runner.go:195] Run: crio config
	I0717 01:56:07.081517   71929 cni.go:84] Creating CNI manager for ""
	I0717 01:56:07.081543   71929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:56:07.081560   71929 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:56:07.081584   71929 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.44 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-901761 NodeName:old-k8s-version-901761 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.44"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.44 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 01:56:07.081749   71929 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.44
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-901761"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.44
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.44"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:56:07.081833   71929 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 01:56:07.092233   71929 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:56:07.092335   71929 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:56:07.102086   71929 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0717 01:56:07.121538   71929 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:56:07.139112   71929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0717 01:56:07.157397   71929 ssh_runner.go:195] Run: grep 192.168.50.44	control-plane.minikube.internal$ /etc/hosts
	I0717 01:56:07.161818   71929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.44	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
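	The /etc/hosts rewrite above is an idempotent replace-or-append: filter out any existing line for the name, append the fresh mapping, write to a temp file, and sudo-copy the result back (the redirection itself does not need root, only the final cp does). The same idiom spelled out, with the IP and hostname taken from the log:
	
	ip=192.168.50.44
	name=control-plane.minikube.internal
	# Strip any stale entry for $name, append the current mapping, then install the
	# result via a temp file; only the final cp needs sudo, not the redirection.
	{ grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
	sudo cp "/tmp/hosts.$$" /etc/hosts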
	I0717 01:56:07.174723   71929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:56:07.307484   71929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:56:07.325948   71929 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761 for IP: 192.168.50.44
	I0717 01:56:07.325974   71929 certs.go:194] generating shared ca certs ...
	I0717 01:56:07.326002   71929 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:07.326164   71929 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:56:07.326216   71929 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:56:07.326229   71929 certs.go:256] generating profile certs ...
	I0717 01:56:07.326351   71929 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/client.key
	I0717 01:56:07.326416   71929 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.key.f41162e5
	I0717 01:56:07.326461   71929 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/proxy-client.key
	I0717 01:56:07.326630   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:56:07.326668   71929 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:56:07.326681   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:56:07.326700   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:56:07.326724   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:56:07.326767   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:56:07.326828   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:56:07.327702   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:56:07.377671   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:56:02.864980   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:02.865620   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:02.865656   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:02.865503   73097 retry.go:31] will retry after 1.00461953s: waiting for machine to come up
	I0717 01:56:03.871702   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:03.872187   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:03.872215   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:03.872133   73097 retry.go:31] will retry after 1.15731846s: waiting for machine to come up
	I0717 01:56:05.030767   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:05.031263   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:05.031285   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:05.031209   73097 retry.go:31] will retry after 1.704165162s: waiting for machine to come up
	I0717 01:56:06.737975   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:06.738337   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:06.738386   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:06.738307   73097 retry.go:31] will retry after 2.014062128s: waiting for machine to come up
	I0717 01:56:06.326066   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:06.326112   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:05.780615   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:08.281127   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:07.413171   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:56:07.443671   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:56:07.482883   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 01:56:07.527280   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:56:07.571200   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:56:07.612296   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 01:56:07.638012   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:56:07.662018   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:56:07.688033   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:56:07.721827   71929 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:56:07.741517   71929 ssh_runner.go:195] Run: openssl version
	I0717 01:56:07.747466   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:56:07.758615   71929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:56:07.763382   71929 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:56:07.763439   71929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:56:07.769358   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:56:07.781802   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:56:07.792763   71929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:56:07.797629   71929 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:56:07.797681   71929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:56:07.803879   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:56:07.815479   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:56:07.828292   71929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:07.832769   71929 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:07.832829   71929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:07.838958   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:56:07.850108   71929 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:56:07.854758   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:56:07.860661   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:56:07.866484   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:56:07.872302   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:56:07.878252   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:56:07.884275   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 01:56:07.890148   71929 kubeadm.go:392] StartCluster: {Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:56:07.890264   71929 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:56:07.890343   71929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:56:07.930081   71929 cri.go:89] found id: ""
	I0717 01:56:07.930153   71929 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:56:07.941371   71929 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:56:07.941396   71929 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:56:07.941445   71929 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:56:07.955229   71929 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:56:07.957263   71929 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-901761" does not appear in /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:56:07.959002   71929 kubeconfig.go:62] /home/jenkins/minikube-integration/19264-3908/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-901761" cluster setting kubeconfig missing "old-k8s-version-901761" context setting]
	I0717 01:56:07.960384   71929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:07.962748   71929 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:56:07.973815   71929 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.44
	I0717 01:56:07.973851   71929 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:56:07.973864   71929 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:56:07.973933   71929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:56:08.020169   71929 cri.go:89] found id: ""
	I0717 01:56:08.020247   71929 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:56:08.038015   71929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:56:08.049272   71929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:56:08.049294   71929 kubeadm.go:157] found existing configuration files:
	
	I0717 01:56:08.049336   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:56:08.058953   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:56:08.059025   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:56:08.069034   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:56:08.078748   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:56:08.078817   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:56:08.089660   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:56:08.099521   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:56:08.099583   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:56:08.109831   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:56:08.120340   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:56:08.120400   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:56:08.130884   71929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:56:08.141008   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:08.275189   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:09.006841   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:09.255401   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:09.376659   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:09.475840   71929 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:56:09.475937   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:09.976926   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:10.476192   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:10.976705   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:11.476386   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:11.976459   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:08.753835   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:08.754316   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:08.754347   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:08.754264   73097 retry.go:31] will retry after 2.005810517s: waiting for machine to come up
	I0717 01:56:10.761600   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:10.762022   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:10.762053   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:10.761980   73097 retry.go:31] will retry after 2.631438855s: waiting for machine to come up
	I0717 01:56:11.327297   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:11.327348   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:10.779534   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:13.278417   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:15.279200   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:12.476819   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:12.976633   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:13.476076   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:13.976279   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:14.476885   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:14.976972   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:15.476823   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:15.976917   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:16.476765   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:16.976609   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:13.395592   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:13.395949   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:13.395991   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:13.395905   73097 retry.go:31] will retry after 3.565162998s: waiting for machine to come up
	I0717 01:56:16.964948   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:16.965424   71146 main.go:141] libmachine: (embed-certs-940222) Found IP for machine: 192.168.72.225
	I0717 01:56:16.965455   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has current primary IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:16.965465   71146 main.go:141] libmachine: (embed-certs-940222) Reserving static IP address...
	I0717 01:56:16.966065   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "embed-certs-940222", mac: "52:54:00:78:d5:92", ip: "192.168.72.225"} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:16.966092   71146 main.go:141] libmachine: (embed-certs-940222) DBG | skip adding static IP to network mk-embed-certs-940222 - found existing host DHCP lease matching {name: "embed-certs-940222", mac: "52:54:00:78:d5:92", ip: "192.168.72.225"}
	I0717 01:56:16.966107   71146 main.go:141] libmachine: (embed-certs-940222) Reserved static IP address: 192.168.72.225
	I0717 01:56:16.966122   71146 main.go:141] libmachine: (embed-certs-940222) Waiting for SSH to be available...
	I0717 01:56:16.966150   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Getting to WaitForSSH function...
	I0717 01:56:16.968287   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:16.968642   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:16.968688   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:16.968758   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Using SSH client type: external
	I0717 01:56:16.968782   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa (-rw-------)
	I0717 01:56:16.968842   71146 main.go:141] libmachine: (embed-certs-940222) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:56:16.968872   71146 main.go:141] libmachine: (embed-certs-940222) DBG | About to run SSH command:
	I0717 01:56:16.968888   71146 main.go:141] libmachine: (embed-certs-940222) DBG | exit 0
	I0717 01:56:17.090641   71146 main.go:141] libmachine: (embed-certs-940222) DBG | SSH cmd err, output: <nil>: 
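The WaitForSSH step above shells out to the system ssh binary with host-key checking disabled and runs `exit 0`; a zero exit status is taken to mean the VM is reachable. A condensed sketch of that probe, reusing the address and key path from the log (the helper name and reduced option set are illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// sshAlive runs `exit 0` through the system ssh binary with key-only auth and
// host-key checking disabled, as in the logged command line. A nil error means
// the guest accepted the connection and ran the command.
func sshAlive(ip, keyPath string) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@" + ip,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	// IP and key path taken from the log lines above.
	err := sshAlive("192.168.72.225", "/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa")
	fmt.Println("ssh reachable:", err == nil)
}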
	I0717 01:56:17.091120   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetConfigRaw
	I0717 01:56:17.091720   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetIP
	I0717 01:56:17.094205   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.094541   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.094592   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.094810   71146 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/config.json ...
	I0717 01:56:17.095001   71146 machine.go:94] provisionDockerMachine start ...
	I0717 01:56:17.095022   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:17.095223   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.097395   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.097680   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.097707   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.097848   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.098021   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.098170   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.098311   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.098491   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:17.098683   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:17.098695   71146 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:56:17.203054   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:56:17.203080   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:56:17.203364   71146 buildroot.go:166] provisioning hostname "embed-certs-940222"
	I0717 01:56:17.203402   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:56:17.203575   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.206404   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.206826   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.206868   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.207076   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.207282   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.207471   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.207611   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.207793   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:17.207985   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:17.207997   71146 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-940222 && echo "embed-certs-940222" | sudo tee /etc/hostname
	I0717 01:56:17.326485   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-940222
	
	I0717 01:56:17.326512   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.329226   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.329629   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.329659   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.329834   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.329996   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.330148   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.330265   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.330417   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:17.330619   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:17.330642   71146 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-940222' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-940222/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-940222' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:56:17.439258   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:56:17.439285   71146 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:56:17.439315   71146 buildroot.go:174] setting up certificates
	I0717 01:56:17.439324   71146 provision.go:84] configureAuth start
	I0717 01:56:17.439332   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:56:17.439656   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetIP
	I0717 01:56:17.442348   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.442765   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.442796   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.442976   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.445418   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.445767   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.445803   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.446000   71146 provision.go:143] copyHostCerts
	I0717 01:56:17.446081   71146 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:56:17.446098   71146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:56:17.446171   71146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:56:17.446265   71146 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:56:17.446272   71146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:56:17.446292   71146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:56:17.446346   71146 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:56:17.446353   71146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:56:17.446370   71146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:56:17.446418   71146 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.embed-certs-940222 san=[127.0.0.1 192.168.72.225 embed-certs-940222 localhost minikube]
	I0717 01:56:17.578140   71146 provision.go:177] copyRemoteCerts
	I0717 01:56:17.578195   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:56:17.578221   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.581141   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.581432   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.581457   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.581697   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.581892   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.582038   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.582219   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:17.664867   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:56:17.691053   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 01:56:17.715816   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 01:56:17.742153   71146 provision.go:87] duration metric: took 302.817653ms to configureAuth
	I0717 01:56:17.742180   71146 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:56:17.742405   71146 config.go:182] Loaded profile config "embed-certs-940222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:56:17.742486   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.745102   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.745369   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.745398   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.745608   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.745820   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.746019   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.746209   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.746510   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:17.746738   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:17.746761   71146 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:56:18.017395   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:56:18.017420   71146 machine.go:97] duration metric: took 922.405002ms to provisionDockerMachine
	I0717 01:56:18.017433   71146 start.go:293] postStartSetup for "embed-certs-940222" (driver="kvm2")
	I0717 01:56:18.017449   71146 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:56:18.017469   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.017817   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:56:18.017846   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:18.020599   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.021051   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.021081   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.021228   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:18.021410   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.021556   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:18.021660   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:18.101432   71146 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:56:18.105722   71146 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:56:18.105742   71146 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:56:18.105797   71146 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:56:18.105866   71146 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:56:18.105944   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:56:18.115228   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:56:18.139857   71146 start.go:296] duration metric: took 122.411322ms for postStartSetup
	I0717 01:56:18.139924   71146 fix.go:56] duration metric: took 19.608111597s for fixHost
	I0717 01:56:18.139951   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:18.142466   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.142865   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.142886   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.143098   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:18.143262   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.143444   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.143662   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:18.143852   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:18.144022   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:18.144033   71146 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:56:18.243604   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181378.218663213
	
	I0717 01:56:18.243635   71146 fix.go:216] guest clock: 1721181378.218663213
	I0717 01:56:18.243644   71146 fix.go:229] Guest: 2024-07-17 01:56:18.218663213 +0000 UTC Remote: 2024-07-17 01:56:18.139933424 +0000 UTC m=+355.354069584 (delta=78.729789ms)
	I0717 01:56:18.243662   71146 fix.go:200] guest clock delta is within tolerance: 78.729789ms
	I0717 01:56:18.243667   71146 start.go:83] releasing machines lock for "embed-certs-940222", held for 19.711916707s
	I0717 01:56:18.243684   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.243952   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetIP
	I0717 01:56:18.246454   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.246881   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.246907   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.247135   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.247618   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.247828   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.247919   71146 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:56:18.247958   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:18.248050   71146 ssh_runner.go:195] Run: cat /version.json
	I0717 01:56:18.248074   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:18.250520   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.250914   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.250952   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.250973   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.251222   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:18.251403   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.251463   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.251495   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.251575   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:18.251668   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:18.251747   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:18.251817   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.251975   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:18.252103   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:18.351600   71146 ssh_runner.go:195] Run: systemctl --version
	I0717 01:56:18.357586   71146 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:56:18.503767   71146 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:56:18.511637   71146 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:56:18.511724   71146 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:56:18.530209   71146 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:56:18.530235   71146 start.go:495] detecting cgroup driver to use...
	I0717 01:56:18.530303   71146 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:56:18.551740   71146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:56:18.566975   71146 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:56:18.567044   71146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:56:18.585100   71146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:56:18.601151   71146 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:56:18.735644   71146 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:56:18.895436   71146 docker.go:233] disabling docker service ...
	I0717 01:56:18.895505   71146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:56:18.910354   71146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:56:18.922999   71146 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:56:19.065365   71146 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:56:19.179337   71146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:56:19.194454   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:56:19.213281   71146 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 01:56:19.213339   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.223531   71146 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:56:19.223594   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.233691   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.243695   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.255192   71146 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:56:19.266082   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.276861   71146 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.295903   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.306114   71146 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:56:19.316226   71146 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:56:19.316275   71146 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:56:19.329402   71146 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:56:19.340622   71146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:56:19.456624   71146 ssh_runner.go:195] Run: sudo systemctl restart crio
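The sed commands above point CRI-O at the registry.k8s.io/pause:3.9 pause image and switch it to the cgroupfs cgroup manager before crio is restarted. A self-contained sketch of the same key rewrite done in Go, with the file path and key names taken from the log and the helper itself being illustrative:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setTOMLKey replaces every `key = ...` line in the file with `key = "value"`,
// mirroring what the logged `sed -i 's|^.*key = .*$|...|'` commands do.
func setTOMLKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile("(?m)^.*" + regexp.QuoteMeta(key) + " = .*$")
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	// Values as seen in the log; errors ignored for brevity.
	_ = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
	_ = setTOMLKey(conf, "cgroup_manager", "cgroupfs")
}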
	I0717 01:56:19.605945   71146 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:56:19.606051   71146 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:56:19.611067   71146 start.go:563] Will wait 60s for crictl version
	I0717 01:56:19.611116   71146 ssh_runner.go:195] Run: which crictl
	I0717 01:56:19.615065   71146 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:56:19.662925   71146 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:56:19.662989   71146 ssh_runner.go:195] Run: crio --version
	I0717 01:56:19.693240   71146 ssh_runner.go:195] Run: crio --version
	I0717 01:56:19.722332   71146 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 01:56:16.328318   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:16.328371   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:17.780821   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:19.780921   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:17.476562   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:17.976663   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:18.476958   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:18.976722   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:19.476641   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:19.976079   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:20.476899   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:20.976553   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:21.476087   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:21.976659   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:19.723930   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetIP
	I0717 01:56:19.726730   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:19.727084   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:19.727107   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:19.727314   71146 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 01:56:19.731814   71146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
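That bash one-liner keeps /etc/hosts idempotent: any existing host.minikube.internal entry is filtered out, the current gateway IP is appended, and the result is copied back over /etc/hosts. The same remove-then-append pattern as a standalone Go sketch (values from the log; the real command stages the file in /tmp and copies it with sudo, which is omitted here):

package main

import (
	"os"
	"strings"
)

// upsertHostsEntry removes any line ending in "\t<marker>" from the hosts file
// and appends a fresh "<ip>\t<marker>" entry, like the logged grep -v / echo
// pipeline.
func upsertHostsEntry(path, ip, marker string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+marker) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+marker)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Values from the log; writing /etc/hosts for real requires root.
	_ = upsertHostsEntry("/etc/hosts", "192.168.72.1", "host.minikube.internal")
}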
	I0717 01:56:19.745514   71146 kubeadm.go:883] updating cluster {Name:embed-certs-940222 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.2 ClusterName:embed-certs-940222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.225 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:56:19.745622   71146 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:56:19.745677   71146 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:56:19.782922   71146 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 01:56:19.782988   71146 ssh_runner.go:195] Run: which lz4
	I0717 01:56:19.786946   71146 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 01:56:19.791298   71146 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:56:19.791323   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 01:56:21.230910   71146 crio.go:462] duration metric: took 1.443984707s to copy over tarball
	I0717 01:56:21.231003   71146 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:56:21.328607   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:21.328654   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:21.345118   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": read tcp 192.168.61.1:36190->192.168.61.174:8443: read: connection reset by peer
	I0717 01:56:21.824753   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:21.825500   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": dial tcp 192.168.61.174:8443: connect: connection refused
	I0717 01:56:22.325079   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:22.280465   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:24.779729   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:22.475994   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:22.976928   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:23.476906   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:23.975980   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:24.476208   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:24.976090   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:25.476425   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:25.976072   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:26.476991   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:26.976180   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:23.517174   71146 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.286133857s)
	I0717 01:56:23.517200   71146 crio.go:469] duration metric: took 2.286263798s to extract the tarball
	I0717 01:56:23.517210   71146 ssh_runner.go:146] rm: /preloaded.tar.lz4
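The preload path above copies a ~395 MB tarball of container images into the guest, unpacks it under /var with lz4, and then deletes the archive so only the extracted image store remains. A sketch of the extract-and-clean-up step using the same tar invocation that appears in the log (the wrapper function is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks the preloaded container-image tarball under /var and
// deletes it afterwards, following the commands visible in the log. The tarball
// is assumed to have been copied to path already.
func extractPreload(path string) error {
	if _, err := os.Stat(path); err != nil {
		return fmt.Errorf("preload tarball missing: %w", err)
	}
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", path)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("tar failed: %v: %s", err, out)
	}
	return os.Remove(path)
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}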
	I0717 01:56:23.554084   71146 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:56:23.603831   71146 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:56:23.603861   71146 cache_images.go:84] Images are preloaded, skipping loading
	I0717 01:56:23.603871   71146 kubeadm.go:934] updating node { 192.168.72.225 8443 v1.30.2 crio true true} ...
	I0717 01:56:23.604004   71146 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-940222 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-940222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:56:23.604087   71146 ssh_runner.go:195] Run: crio config
	I0717 01:56:23.658775   71146 cni.go:84] Creating CNI manager for ""
	I0717 01:56:23.658794   71146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:56:23.658803   71146 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:56:23.658826   71146 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.225 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-940222 NodeName:embed-certs-940222 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:56:23.659007   71146 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-940222"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.225
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.225"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:56:23.659092   71146 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 01:56:23.669971   71146 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:56:23.670042   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:56:23.680949   71146 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0717 01:56:23.698917   71146 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:56:23.716218   71146 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0717 01:56:23.733971   71146 ssh_runner.go:195] Run: grep 192.168.72.225	control-plane.minikube.internal$ /etc/hosts
	I0717 01:56:23.738112   71146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.225	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:56:23.750915   71146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:56:23.894690   71146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:56:23.913418   71146 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222 for IP: 192.168.72.225
	I0717 01:56:23.913440   71146 certs.go:194] generating shared ca certs ...
	I0717 01:56:23.913456   71146 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:23.913630   71146 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:56:23.913703   71146 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:56:23.913729   71146 certs.go:256] generating profile certs ...
	I0717 01:56:23.913856   71146 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/client.key
	I0717 01:56:23.913926   71146 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/apiserver.key.d13a776d
	I0717 01:56:23.913968   71146 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/proxy-client.key
	I0717 01:56:23.914081   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:56:23.914123   71146 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:56:23.914134   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:56:23.914161   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:56:23.914188   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:56:23.914214   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:56:23.914256   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:56:23.914925   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:56:23.961346   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:56:24.006765   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:56:24.036852   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:56:24.064984   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0717 01:56:24.090778   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:56:24.116146   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:56:24.142429   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 01:56:24.168427   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:56:24.193691   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:56:24.218852   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:56:24.242932   71146 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:56:24.261434   71146 ssh_runner.go:195] Run: openssl version
	I0717 01:56:24.267358   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:56:24.280319   71146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:56:24.285286   71146 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:56:24.285358   71146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:56:24.291896   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:56:24.304027   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:56:24.315542   71146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:56:24.320212   71146 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:56:24.320283   71146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:56:24.326123   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:56:24.339982   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:56:24.352301   71146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:24.357023   71146 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:24.357078   71146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:24.363112   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
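The ls/openssl/ln triplets above follow the standard OpenSSL CA-directory convention: compute the subject hash of each PEM under /usr/share/ca-certificates and point /etc/ssl/certs/<hash>.0 at it so TLS clients can look the CA up by hash. A sketch of the same loop, assuming the openssl binary is on PATH and the process can write to /etc/ssl/certs (the log does it via sudo):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pems := []string{
		"/usr/share/ca-certificates/minikubeCA.pem", // paths taken from the log above
		"/usr/share/ca-certificates/11259.pem",
		"/usr/share/ca-certificates/112592.pem",
	}
	for _, pem := range pems {
		// Same command the log runs: print the subject hash of the certificate.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			fmt.Fprintf(os.Stderr, "hash %s: %v\n", pem, err)
			continue
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// ln -fs equivalent: drop any stale link, then point <hash>.0 at the PEM.
		_ = os.Remove(link)
		if err := os.Symlink(pem, link); err != nil {
			fmt.Fprintf(os.Stderr, "link %s: %v\n", link, err)
		}
	}
}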
	I0717 01:56:24.375910   71146 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:56:24.380986   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:56:24.387276   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:56:24.393718   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:56:24.400367   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:56:24.406600   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:56:24.413161   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
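Each `openssl x509 -noout -checkend 86400` call above asks whether the certificate expires within the next 24 hours; a non-zero exit status is what would trigger regeneration. The same check expressed with Go's crypto/x509, using an illustrative path rather than every cert the log inspects:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires before now+d,
// i.e. the condition `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// One of the certs from the log; the same check covers etcd and front-proxy certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}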
	I0717 01:56:24.420455   71146 kubeadm.go:392] StartCluster: {Name:embed-certs-940222 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-940222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.225 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:56:24.420578   71146 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:56:24.420643   71146 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:56:24.460702   71146 cri.go:89] found id: ""
	I0717 01:56:24.460792   71146 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:56:24.472047   71146 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:56:24.472064   71146 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:56:24.472105   71146 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:56:24.483092   71146 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:56:24.484146   71146 kubeconfig.go:125] found "embed-certs-940222" server: "https://192.168.72.225:8443"
	I0717 01:56:24.486112   71146 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:56:24.497462   71146 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.225
	I0717 01:56:24.497496   71146 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:56:24.497511   71146 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:56:24.497571   71146 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:56:24.541423   71146 cri.go:89] found id: ""
	I0717 01:56:24.541486   71146 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:56:24.563272   71146 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:56:24.574859   71146 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:56:24.574883   71146 kubeadm.go:157] found existing configuration files:
	
	I0717 01:56:24.574930   71146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:56:24.584960   71146 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:56:24.585022   71146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:56:24.595950   71146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:56:24.605686   71146 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:56:24.605775   71146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:56:24.616191   71146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:56:24.625954   71146 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:56:24.626009   71146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:56:24.636254   71146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:56:24.648853   71146 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:56:24.648961   71146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
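The four grep/rm pairs above are the stale-config sweep: any /etc/kubernetes/*.conf that does not reference the expected control-plane endpoint is removed so kubeadm can regenerate it from scratch. A sketch of that decision, with the endpoint and file list taken straight from the log (run with enough privilege to read and delete those files):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		data, err := os.ReadFile(conf)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: remove it so the kubeconfig phase
			// writes a fresh one (the log's `sudo rm -f` step).
			_ = os.Remove(conf)
			fmt.Println("removed stale", conf)
		}
	}
}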
	I0717 01:56:24.660491   71146 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:56:24.675329   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:24.795437   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:25.895383   71146 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.099913319s)
	I0717 01:56:25.895411   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:26.116274   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:26.286149   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
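The restart path re-runs only the kubeadm phases it needs, in the order shown above: certs, kubeconfig, kubelet-start, control-plane, and etcd, all driven by the same /var/tmp/minikube/kubeadm.yaml. A sketch of that sequence, assuming kubeadm is on PATH and the process has the privileges the log obtains via sudo:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	// Same phase order the log shows for restarting the primary control plane.
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", args, err)
			return
		}
	}
}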
	I0717 01:56:26.355208   71146 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:56:26.355296   71146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:26.855578   71146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:27.355880   71146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:27.371616   71146 api_server.go:72] duration metric: took 1.016410291s to wait for apiserver process to appear ...
	I0717 01:56:27.371642   71146 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:56:27.371671   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:27.325875   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:27.325920   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:26.780264   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:29.279376   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:29.836783   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:29.836811   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:29.836823   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:29.883657   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:29.883684   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:29.883695   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:29.895244   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:29.895270   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:30.371799   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:30.375903   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:56:30.375926   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:56:30.872627   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:30.876799   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:56:30.876830   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:56:31.372402   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:31.376723   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 200:
	ok
	I0717 01:56:31.382638   71146 api_server.go:141] control plane version: v1.30.2
	I0717 01:56:31.382663   71146 api_server.go:131] duration metric: took 4.011014381s to wait for apiserver health ...
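The health wait above polls /healthz roughly every 500ms: anonymous requests first see 403 until the RBAC bootstrap roles exist, then 500 while poststarthooks finish, and finally 200 with body "ok". A minimal polling loop in the same spirit, assuming the apiserver's self-signed certificate is skipped rather than verified against the cluster CA as the real tool does:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.72.225:8443/healthz" // endpoint from the log
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip verification for the sketch only; trusting the cluster CA is the proper approach.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}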
	I0717 01:56:31.382672   71146 cni.go:84] Creating CNI manager for ""
	I0717 01:56:31.382679   71146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:56:31.384436   71146 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:56:27.476313   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:27.976700   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:28.476585   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:28.976008   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:29.477040   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:29.976892   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:30.476912   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:30.976626   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:31.476786   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:31.976148   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:31.385974   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:56:31.396977   71146 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
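With the kvm2 driver and the crio runtime, minikube falls back to a single bridge CNI config and drops it into /etc/cni/net.d (the 496-byte 1-k8s.conflist scp'd above). The exact file content is not in the log; the sketch below writes an illustrative bridge/portmap conflist of the usual shape, so names, subnet and version are assumptions:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Illustrative bridge CNI config; the real 1-k8s.conflist minikube generates
// is not shown in the log and may differ in plugin names and IP ranges.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "k8s-pod-network",
  "plugins": [
    {"type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
     "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	dir := "/etc/cni/net.d"
	if err := os.MkdirAll(dir, 0o755); err != nil { // the log's `sudo mkdir -p`
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(conflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}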
	I0717 01:56:31.415740   71146 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:56:31.425268   71146 system_pods.go:59] 8 kube-system pods found
	I0717 01:56:31.425306   71146 system_pods.go:61] "coredns-7db6d8ff4d-wcw97" [0dd50538-f54d-43f1-bd8a-b9d3131c13f7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:56:31.425313   71146 system_pods.go:61] "etcd-embed-certs-940222" [80a0a87b-4b27-4940-b86b-6fda4a9c5168] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 01:56:31.425320   71146 system_pods.go:61] "kube-apiserver-embed-certs-940222" [566417b4-efef-46b2-8826-e15b8559e35f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 01:56:31.425328   71146 system_pods.go:61] "kube-controller-manager-embed-certs-940222" [0ec7f574-5ec3-401c-85a9-b9a9f6b7b979] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 01:56:31.425332   71146 system_pods.go:61] "kube-proxy-l58xk" [feae4e89-4900-4399-bd06-7d179280667d] Running
	I0717 01:56:31.425337   71146 system_pods.go:61] "kube-scheduler-embed-certs-940222" [d0e73061-6d3b-41fc-b2ed-e8c45e204d6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 01:56:31.425344   71146 system_pods.go:61] "metrics-server-569cc877fc-rhp7b" [07ffb1fa-240e-4c40-9ce4-93a1b51e179b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:56:31.425350   71146 system_pods.go:61] "storage-provisioner" [35aab5a5-6e1b-4572-aabe-a73fb1632252] Running
	I0717 01:56:31.425360   71146 system_pods.go:74] duration metric: took 9.598959ms to wait for pod list to return data ...
	I0717 01:56:31.425368   71146 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:56:31.429053   71146 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:56:31.429075   71146 node_conditions.go:123] node cpu capacity is 2
	I0717 01:56:31.429084   71146 node_conditions.go:105] duration metric: took 3.710466ms to run NodePressure ...
	I0717 01:56:31.429098   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:31.699456   71146 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 01:56:31.703803   71146 kubeadm.go:739] kubelet initialised
	I0717 01:56:31.703825   71146 kubeadm.go:740] duration metric: took 4.345324ms waiting for restarted kubelet to initialise ...
	I0717 01:56:31.703835   71146 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:56:31.708962   71146 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:31.712850   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.712871   71146 pod_ready.go:81] duration metric: took 3.888169ms for pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:31.712879   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.712891   71146 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:31.717134   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "etcd-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.717156   71146 pod_ready.go:81] duration metric: took 4.256764ms for pod "etcd-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:31.717163   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "etcd-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.717169   71146 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:31.721479   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.721498   71146 pod_ready.go:81] duration metric: took 4.321032ms for pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:31.721508   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.721515   71146 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:31.819188   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.819217   71146 pod_ready.go:81] duration metric: took 97.692306ms for pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:31.819226   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.819231   71146 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l58xk" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:32.219730   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "kube-proxy-l58xk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:32.219766   71146 pod_ready.go:81] duration metric: took 400.526796ms for pod "kube-proxy-l58xk" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:32.219775   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "kube-proxy-l58xk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:32.219782   71146 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:32.619930   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:32.619961   71146 pod_ready.go:81] duration metric: took 400.172543ms for pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:32.619971   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:32.619978   71146 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:33.019223   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:33.019252   71146 pod_ready.go:81] duration metric: took 399.266573ms for pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:33.019263   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:33.019271   71146 pod_ready.go:38] duration metric: took 1.315427432s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:56:33.019291   71146 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 01:56:33.032094   71146 ops.go:34] apiserver oom_adj: -16
	I0717 01:56:33.032116   71146 kubeadm.go:597] duration metric: took 8.56004698s to restartPrimaryControlPlane
	I0717 01:56:33.032125   71146 kubeadm.go:394] duration metric: took 8.611681052s to StartCluster
	I0717 01:56:33.032140   71146 settings.go:142] acquiring lock: {Name:mk284e98550e98148329d8d8958a44beebf5635c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:33.032204   71146 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:56:33.033963   71146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:33.034198   71146 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.225 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:56:33.034337   71146 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 01:56:33.034405   71146 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-940222"
	I0717 01:56:33.034425   71146 addons.go:69] Setting metrics-server=true in profile "embed-certs-940222"
	I0717 01:56:33.034467   71146 addons.go:234] Setting addon metrics-server=true in "embed-certs-940222"
	W0717 01:56:33.034481   71146 addons.go:243] addon metrics-server should already be in state true
	I0717 01:56:33.034516   71146 host.go:66] Checking if "embed-certs-940222" exists ...
	I0717 01:56:33.034465   71146 addons.go:69] Setting default-storageclass=true in profile "embed-certs-940222"
	I0717 01:56:33.034469   71146 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-940222"
	I0717 01:56:33.034589   71146 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-940222"
	W0717 01:56:33.034632   71146 addons.go:243] addon storage-provisioner should already be in state true
	I0717 01:56:33.034411   71146 config.go:182] Loaded profile config "embed-certs-940222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:56:33.034725   71146 host.go:66] Checking if "embed-certs-940222" exists ...
	I0717 01:56:33.034963   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.034992   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.035052   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.035093   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.035199   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.035237   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.036051   71146 out.go:177] * Verifying Kubernetes components...
	I0717 01:56:33.037606   71146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:56:33.051343   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46137
	I0717 01:56:33.051970   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.052483   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.052516   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.052671   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41003
	I0717 01:56:33.052887   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.053016   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.053397   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.053443   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.053760   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.053775   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35669
	I0717 01:56:33.053779   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.054125   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.054139   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.054336   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:56:33.054625   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.054656   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.054984   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.055524   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.055563   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.057648   71146 addons.go:234] Setting addon default-storageclass=true in "embed-certs-940222"
	W0717 01:56:33.057668   71146 addons.go:243] addon default-storageclass should already be in state true
	I0717 01:56:33.057699   71146 host.go:66] Checking if "embed-certs-940222" exists ...
	I0717 01:56:33.058003   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.058036   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.070476   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38209
	I0717 01:56:33.070717   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I0717 01:56:33.071094   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.071289   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.071648   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.071665   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.071841   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.071863   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.072171   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.072293   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.072357   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:56:33.072581   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:56:33.073298   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46391
	I0717 01:56:33.073745   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.074224   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.074237   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.074585   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.074690   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:33.075032   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.075054   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.075361   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:33.077495   71146 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 01:56:33.077496   71146 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:56:33.079446   71146 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 01:56:33.079460   71146 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 01:56:33.079480   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:33.080373   71146 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:56:33.080386   71146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 01:56:33.080401   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:33.083272   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.083527   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.083623   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:33.083641   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.083899   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:33.084099   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:33.084168   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:33.084184   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.084273   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:33.084331   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:33.084463   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:33.084748   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:33.084890   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:33.085028   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:33.092382   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41853
	I0717 01:56:33.092826   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.093401   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.093418   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.094409   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.094576   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:56:33.096442   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:33.096730   71146 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 01:56:33.096750   71146 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 01:56:33.096768   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:33.099802   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.100290   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:33.100368   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.100472   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:33.100625   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:33.100760   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:33.100849   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:33.229494   71146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:56:33.246459   71146 node_ready.go:35] waiting up to 6m0s for node "embed-certs-940222" to be "Ready" ...
	I0717 01:56:33.400804   71146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 01:56:33.400824   71146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 01:56:33.411866   71146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:56:33.413220   71146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 01:56:33.426485   71146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 01:56:33.426506   71146 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 01:56:33.476707   71146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:56:33.476729   71146 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 01:56:33.539095   71146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:56:34.542027   71146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.130125192s)
	I0717 01:56:34.542089   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.542102   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.542103   71146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.128853338s)
	I0717 01:56:34.542139   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.542151   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.542420   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Closing plugin on server side
	I0717 01:56:34.542442   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.542442   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Closing plugin on server side
	I0717 01:56:34.542447   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.542450   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.542468   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.542474   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.542483   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.542505   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.542517   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.542711   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.542727   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.542715   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Closing plugin on server side
	I0717 01:56:34.542835   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.542847   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.549135   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.549160   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.549405   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.549428   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.616065   71146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.076933862s)
	I0717 01:56:34.616127   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.616142   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.616429   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.616479   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.616489   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Closing plugin on server side
	I0717 01:56:34.616499   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.616541   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.616784   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.616800   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.616810   71146 addons.go:475] Verifying addon metrics-server=true in "embed-certs-940222"
	I0717 01:56:34.619698   71146 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 01:56:32.326261   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:32.326310   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:31.779064   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:33.780671   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:32.475986   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:32.976812   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:33.476601   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:33.976667   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:34.476897   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:34.976610   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:35.476444   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:35.976859   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:36.476092   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:36.976979   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:34.620987   71146 addons.go:510] duration metric: took 1.586659462s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 01:56:35.250360   71146 node_ready.go:53] node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:37.251933   71146 node_ready.go:53] node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:37.326685   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:37.326726   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:39.977828   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:39.977860   71603 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:39.977877   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:40.002499   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:40.002532   71603 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:36.280516   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:38.779351   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:40.324290   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:40.329888   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:56:40.329914   71603 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:56:40.824413   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:40.831375   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:56:40.831407   71603 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:56:41.324677   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:41.333259   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 200:
	ok
	I0717 01:56:41.341378   71603 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 01:56:41.341426   71603 api_server.go:131] duration metric: took 40.517438405s to wait for apiserver health ...
	I0717 01:56:41.341438   71603 cni.go:84] Creating CNI manager for ""
	I0717 01:56:41.341447   71603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:56:41.343489   71603 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:56:37.476813   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:37.976779   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:38.476554   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:38.976791   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:39.476946   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:39.976044   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:40.476526   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:40.976315   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:41.476688   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:41.976203   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:39.750483   71146 node_ready.go:53] node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:40.249907   71146 node_ready.go:49] node "embed-certs-940222" has status "Ready":"True"
	I0717 01:56:40.249934   71146 node_ready.go:38] duration metric: took 7.003442258s for node "embed-certs-940222" to be "Ready" ...
	I0717 01:56:40.249945   71146 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:56:40.255811   71146 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:40.762773   71146 pod_ready.go:92] pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:40.762795   71146 pod_ready.go:81] duration metric: took 506.956885ms for pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:40.762806   71146 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:42.768945   71146 pod_ready.go:102] pod "etcd-embed-certs-940222" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:41.344846   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:56:41.360339   71603 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 01:56:41.385845   71603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:56:41.409812   71603 system_pods.go:59] 8 kube-system pods found
	I0717 01:56:41.409843   71603 system_pods.go:61] "coredns-5cfdc65f69-ztqz8" [7c9caec8-56b6-4faa-9410-0528f108696c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:56:41.409849   71603 system_pods.go:61] "etcd-no-preload-391501" [603f01a1-2b07-4d1d-be14-4da4a9f1e1b2] Running
	I0717 01:56:41.409854   71603 system_pods.go:61] "kube-apiserver-no-preload-391501" [7733c5b6-5e30-472b-920d-3849f2849f7b] Running
	I0717 01:56:41.409860   71603 system_pods.go:61] "kube-controller-manager-no-preload-391501" [c1afab7e-9b46-4940-94ec-e62ebc10f406] Running
	I0717 01:56:41.409865   71603 system_pods.go:61] "kube-proxy-zbqhw" [26056c12-35cd-4a3e-b40a-1eca055bd1e2] Running
	I0717 01:56:41.409869   71603 system_pods.go:61] "kube-scheduler-no-preload-391501" [98f81994-9d2a-45b8-9719-90e181ee5d6f] Running
	I0717 01:56:41.409877   71603 system_pods.go:61] "metrics-server-78fcd8795b-g9x96" [86a6a2c3-ae04-486d-9751-0cc801f9fbfb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:56:41.409887   71603 system_pods.go:61] "storage-provisioner" [8b938905-d8e1-4129-8426-5e31a05d38db] Running
	I0717 01:56:41.409895   71603 system_pods.go:74] duration metric: took 24.018074ms to wait for pod list to return data ...
	I0717 01:56:41.409906   71603 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:56:41.418825   71603 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:56:41.418856   71603 node_conditions.go:123] node cpu capacity is 2
	I0717 01:56:41.418868   71603 node_conditions.go:105] duration metric: took 8.953821ms to run NodePressure ...
	I0717 01:56:41.418892   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:41.713730   71603 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 01:56:41.719162   71603 retry.go:31] will retry after 180.435127ms: kubelet not initialised
	I0717 01:56:41.906299   71603 retry.go:31] will retry after 320.946038ms: kubelet not initialised
	I0717 01:56:42.232875   71603 retry.go:31] will retry after 423.072333ms: kubelet not initialised
	I0717 01:56:42.661412   71603 retry.go:31] will retry after 1.138026932s: kubelet not initialised
	I0717 01:56:43.809525   71603 retry.go:31] will retry after 1.187704503s: kubelet not initialised
	I0717 01:56:45.009815   71603 kubeadm.go:739] kubelet initialised
	I0717 01:56:45.009839   71603 kubeadm.go:740] duration metric: took 3.296082732s waiting for restarted kubelet to initialise ...
	I0717 01:56:45.009850   71603 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:56:45.021149   71603 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:40.780159   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:43.279699   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:45.280407   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:42.476301   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:42.976939   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:43.477021   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:43.976910   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:44.476766   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:44.976415   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:45.476987   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:45.976666   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:46.476735   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:46.976643   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:44.770078   71146 pod_ready.go:102] pod "etcd-embed-certs-940222" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:47.269496   71146 pod_ready.go:92] pod "etcd-embed-certs-940222" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.269524   71146 pod_ready.go:81] duration metric: took 6.506711113s for pod "etcd-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.269538   71146 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.277267   71146 pod_ready.go:92] pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.277294   71146 pod_ready.go:81] duration metric: took 7.747271ms for pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.277309   71146 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.286697   71146 pod_ready.go:92] pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.286715   71146 pod_ready.go:81] duration metric: took 9.397698ms for pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.286723   71146 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l58xk" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.291876   71146 pod_ready.go:92] pod "kube-proxy-l58xk" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.291897   71146 pod_ready.go:81] duration metric: took 5.168432ms for pod "kube-proxy-l58xk" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.291905   71146 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.296201   71146 pod_ready.go:92] pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.296215   71146 pod_ready.go:81] duration metric: took 4.304055ms for pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.296222   71146 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.027495   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:49.028127   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:47.779497   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:50.279065   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:47.476576   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:47.976502   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:48.476634   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:48.976299   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:49.476069   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:49.976086   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:50.476859   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:50.976441   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:51.476217   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:51.976585   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:49.303729   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:51.802778   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:51.029194   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:53.528363   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:52.778915   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:54.780173   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:52.476652   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:52.976136   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:53.476991   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:53.976168   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:54.477049   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:54.976279   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:55.476176   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:55.976049   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:56.476464   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:56.976802   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:54.308491   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:56.802797   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:55.528547   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:57.533612   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:00.030406   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:57.278908   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:59.279393   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:57.476661   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:57.976021   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:58.477049   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:58.976940   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:59.476773   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:59.976397   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:00.476591   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:00.976189   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:01.476917   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:01.976263   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:58.806045   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:00.807112   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:02.529203   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:05.028677   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:01.779903   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:03.780163   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:02.476048   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:02.976019   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:03.476604   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:03.976602   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:04.477004   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:04.976726   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:05.476934   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:05.975985   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:06.476331   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:06.976185   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:03.302031   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:05.303601   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:07.803763   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:07.528021   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:09.528499   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:05.780204   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:08.279630   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:07.476887   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:07.975972   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:08.476034   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:08.976678   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:09.476927   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:09.477010   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:09.513328   71929 cri.go:89] found id: ""
	I0717 01:57:09.513352   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.513361   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:09.513368   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:09.513418   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:09.551203   71929 cri.go:89] found id: ""
	I0717 01:57:09.551228   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.551237   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:09.551244   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:09.551308   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:09.585321   71929 cri.go:89] found id: ""
	I0717 01:57:09.585352   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.585363   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:09.585370   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:09.585427   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:09.623977   71929 cri.go:89] found id: ""
	I0717 01:57:09.624004   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.624012   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:09.624019   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:09.624078   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:09.663338   71929 cri.go:89] found id: ""
	I0717 01:57:09.663367   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.663374   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:09.663380   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:09.663425   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:09.696381   71929 cri.go:89] found id: ""
	I0717 01:57:09.696412   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.696423   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:09.696436   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:09.696482   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:09.735892   71929 cri.go:89] found id: ""
	I0717 01:57:09.735922   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.735932   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:09.735944   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:09.736006   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:09.775878   71929 cri.go:89] found id: ""
	I0717 01:57:09.775909   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.775919   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:09.775929   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:09.775942   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:09.830021   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:09.830057   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:09.844753   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:09.844783   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:09.985140   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:09.985165   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:09.985179   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:10.049946   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:10.049984   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:10.310038   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:12.805565   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:11.529122   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:14.028939   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:10.779935   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:13.278388   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:15.280027   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:12.592959   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:12.608385   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:12.608467   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:12.649900   71929 cri.go:89] found id: ""
	I0717 01:57:12.649931   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.649942   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:12.649950   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:12.650021   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:12.684915   71929 cri.go:89] found id: ""
	I0717 01:57:12.684941   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.684948   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:12.684956   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:12.685010   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:12.727718   71929 cri.go:89] found id: ""
	I0717 01:57:12.727758   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.727766   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:12.727788   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:12.727864   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:12.767212   71929 cri.go:89] found id: ""
	I0717 01:57:12.767236   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.767244   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:12.767249   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:12.767295   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:12.806301   71929 cri.go:89] found id: ""
	I0717 01:57:12.806320   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.806327   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:12.806332   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:12.806405   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:12.843118   71929 cri.go:89] found id: ""
	I0717 01:57:12.843151   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.843162   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:12.843170   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:12.843245   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:12.876671   71929 cri.go:89] found id: ""
	I0717 01:57:12.876697   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.876707   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:12.876714   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:12.876790   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:12.916201   71929 cri.go:89] found id: ""
	I0717 01:57:12.916226   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.916232   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:12.916240   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:12.916250   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:12.970346   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:12.970385   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:12.985029   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:12.985053   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:13.068314   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:13.068340   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:13.068352   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:13.147862   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:13.147897   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:15.703130   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:15.717081   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:15.717160   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:15.757513   71929 cri.go:89] found id: ""
	I0717 01:57:15.757538   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.757545   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:15.757552   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:15.757599   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:15.794185   71929 cri.go:89] found id: ""
	I0717 01:57:15.794218   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.794231   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:15.794238   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:15.794300   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:15.830589   71929 cri.go:89] found id: ""
	I0717 01:57:15.830619   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.830628   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:15.830634   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:15.830694   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:15.869673   71929 cri.go:89] found id: ""
	I0717 01:57:15.869702   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.869713   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:15.869720   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:15.869782   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:15.909225   71929 cri.go:89] found id: ""
	I0717 01:57:15.909257   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.909267   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:15.909278   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:15.909343   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:15.944389   71929 cri.go:89] found id: ""
	I0717 01:57:15.944417   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.944424   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:15.944430   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:15.944490   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:15.982871   71929 cri.go:89] found id: ""
	I0717 01:57:15.982898   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.982907   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:15.982915   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:15.982983   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:16.025674   71929 cri.go:89] found id: ""
	I0717 01:57:16.025701   71929 logs.go:276] 0 containers: []
	W0717 01:57:16.025711   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:16.025721   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:16.025736   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:16.111608   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:16.111627   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:16.111638   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:16.184650   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:16.184689   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:16.230647   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:16.230693   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:16.286675   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:16.286710   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:15.303141   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:17.304891   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:16.029794   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:18.529463   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:17.780034   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:20.279882   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:18.802487   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:18.817483   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:18.817562   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:18.861623   71929 cri.go:89] found id: ""
	I0717 01:57:18.861653   71929 logs.go:276] 0 containers: []
	W0717 01:57:18.861664   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:18.861671   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:18.861733   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:18.901335   71929 cri.go:89] found id: ""
	I0717 01:57:18.901359   71929 logs.go:276] 0 containers: []
	W0717 01:57:18.901367   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:18.901372   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:18.901427   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:18.936477   71929 cri.go:89] found id: ""
	I0717 01:57:18.936508   71929 logs.go:276] 0 containers: []
	W0717 01:57:18.936518   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:18.936524   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:18.936581   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:18.971056   71929 cri.go:89] found id: ""
	I0717 01:57:18.971087   71929 logs.go:276] 0 containers: []
	W0717 01:57:18.971098   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:18.971106   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:18.971157   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:19.005399   71929 cri.go:89] found id: ""
	I0717 01:57:19.005431   71929 logs.go:276] 0 containers: []
	W0717 01:57:19.005453   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:19.005460   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:19.005525   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:19.040218   71929 cri.go:89] found id: ""
	I0717 01:57:19.040242   71929 logs.go:276] 0 containers: []
	W0717 01:57:19.040250   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:19.040257   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:19.040317   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:19.073365   71929 cri.go:89] found id: ""
	I0717 01:57:19.073392   71929 logs.go:276] 0 containers: []
	W0717 01:57:19.073402   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:19.073409   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:19.073471   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:19.108670   71929 cri.go:89] found id: ""
	I0717 01:57:19.108701   71929 logs.go:276] 0 containers: []
	W0717 01:57:19.108713   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:19.108725   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:19.108743   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:19.186077   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:19.186111   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:19.232181   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:19.232214   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:19.288713   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:19.288755   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:19.303089   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:19.303115   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:19.386372   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:21.886666   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:21.900905   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:21.900966   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:21.934955   71929 cri.go:89] found id: ""
	I0717 01:57:21.934979   71929 logs.go:276] 0 containers: []
	W0717 01:57:21.934987   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:21.934993   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:21.935036   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:21.972180   71929 cri.go:89] found id: ""
	I0717 01:57:21.972203   71929 logs.go:276] 0 containers: []
	W0717 01:57:21.972211   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:21.972217   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:21.972271   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:22.010452   71929 cri.go:89] found id: ""
	I0717 01:57:22.010479   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.010487   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:22.010493   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:22.010547   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:22.045824   71929 cri.go:89] found id: ""
	I0717 01:57:22.045888   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.045902   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:22.045911   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:22.045984   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:22.084734   71929 cri.go:89] found id: ""
	I0717 01:57:22.084760   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.084769   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:22.084774   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:22.084842   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:22.119808   71929 cri.go:89] found id: ""
	I0717 01:57:22.119838   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.119846   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:22.119852   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:22.119910   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:22.157537   71929 cri.go:89] found id: ""
	I0717 01:57:22.157583   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.157610   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:22.157620   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:22.157687   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:22.196021   71929 cri.go:89] found id: ""
	I0717 01:57:22.196052   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.196062   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:22.196079   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:22.196094   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:22.274350   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:22.274373   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:22.274386   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:22.364363   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:22.364401   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:19.803506   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:22.306698   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:21.028767   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:23.527943   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:24.529027   71603 pod_ready.go:92] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.529064   71603 pod_ready.go:81] duration metric: took 39.50788355s for pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.529078   71603 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.534655   71603 pod_ready.go:92] pod "etcd-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.534680   71603 pod_ready.go:81] duration metric: took 5.594492ms for pod "etcd-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.534691   71603 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.539602   71603 pod_ready.go:92] pod "kube-apiserver-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.539622   71603 pod_ready.go:81] duration metric: took 4.923891ms for pod "kube-apiserver-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.539631   71603 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.544475   71603 pod_ready.go:92] pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.544516   71603 pod_ready.go:81] duration metric: took 4.862078ms for pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.544532   71603 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zbqhw" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.549173   71603 pod_ready.go:92] pod "kube-proxy-zbqhw" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.549193   71603 pod_ready.go:81] duration metric: took 4.653986ms for pod "kube-proxy-zbqhw" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.549203   71603 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.925916   71603 pod_ready.go:92] pod "kube-scheduler-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.925944   71603 pod_ready.go:81] duration metric: took 376.73343ms for pod "kube-scheduler-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.925959   71603 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:22.779802   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:25.280281   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:22.410052   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:22.410092   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:22.462289   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:22.462326   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:24.978560   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:24.992533   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:24.992601   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:25.027708   71929 cri.go:89] found id: ""
	I0717 01:57:25.027746   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.027754   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:25.027760   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:25.027809   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:25.066946   71929 cri.go:89] found id: ""
	I0717 01:57:25.066974   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.066985   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:25.066992   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:25.067051   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:25.107209   71929 cri.go:89] found id: ""
	I0717 01:57:25.107238   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.107248   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:25.107254   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:25.107300   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:25.141548   71929 cri.go:89] found id: ""
	I0717 01:57:25.141577   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.141587   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:25.141594   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:25.141652   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:25.175822   71929 cri.go:89] found id: ""
	I0717 01:57:25.175853   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.175861   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:25.175866   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:25.175917   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:25.215672   71929 cri.go:89] found id: ""
	I0717 01:57:25.215705   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.215718   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:25.215726   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:25.215786   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:25.260392   71929 cri.go:89] found id: ""
	I0717 01:57:25.260422   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.260434   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:25.260442   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:25.260510   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:25.309953   71929 cri.go:89] found id: ""
	I0717 01:57:25.309981   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.309990   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:25.309999   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:25.310013   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:25.414204   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:25.414229   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:25.414244   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:25.501849   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:25.501883   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:25.545129   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:25.545163   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:25.599948   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:25.599984   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:24.803870   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:27.302993   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:26.932319   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:28.932999   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:27.280455   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:29.778817   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:28.115776   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:28.129710   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:28.129776   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:28.165380   71929 cri.go:89] found id: ""
	I0717 01:57:28.165409   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.165419   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:28.165425   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:28.165473   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:28.199225   71929 cri.go:89] found id: ""
	I0717 01:57:28.199251   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.199259   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:28.199264   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:28.199314   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:28.235564   71929 cri.go:89] found id: ""
	I0717 01:57:28.235585   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.235593   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:28.235598   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:28.235649   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:28.270377   71929 cri.go:89] found id: ""
	I0717 01:57:28.270409   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.270427   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:28.270435   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:28.270488   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:28.310132   71929 cri.go:89] found id: ""
	I0717 01:57:28.310156   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.310163   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:28.310168   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:28.310222   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:28.347590   71929 cri.go:89] found id: ""
	I0717 01:57:28.347619   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.347630   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:28.347638   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:28.347696   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:28.387953   71929 cri.go:89] found id: ""
	I0717 01:57:28.387988   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.388001   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:28.388010   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:28.388072   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:28.428788   71929 cri.go:89] found id: ""
	I0717 01:57:28.428811   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.428818   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:28.428826   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:28.428838   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:28.487411   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:28.487465   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:28.501121   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:28.501152   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:28.576296   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:28.576320   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:28.576335   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:28.660246   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:28.660288   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:31.201238   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:31.221132   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:31.221192   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:31.279839   71929 cri.go:89] found id: ""
	I0717 01:57:31.279867   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.279876   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:31.279884   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:31.279943   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:31.359764   71929 cri.go:89] found id: ""
	I0717 01:57:31.359796   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.359807   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:31.359814   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:31.359873   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:31.397045   71929 cri.go:89] found id: ""
	I0717 01:57:31.397077   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.397087   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:31.397094   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:31.397157   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:31.441356   71929 cri.go:89] found id: ""
	I0717 01:57:31.441388   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.441397   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:31.441404   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:31.441459   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:31.484014   71929 cri.go:89] found id: ""
	I0717 01:57:31.484040   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.484053   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:31.484060   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:31.484124   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:31.520686   71929 cri.go:89] found id: ""
	I0717 01:57:31.520714   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.520725   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:31.520733   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:31.520792   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:31.557300   71929 cri.go:89] found id: ""
	I0717 01:57:31.557326   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.557334   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:31.557339   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:31.557387   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:31.597753   71929 cri.go:89] found id: ""
	I0717 01:57:31.597782   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.597792   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:31.597804   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:31.597818   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:31.656796   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:31.656837   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:31.671287   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:31.671311   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:31.742752   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:31.742772   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:31.742784   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:31.828154   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:31.828186   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:29.303279   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:31.303332   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:31.434410   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:33.932319   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:31.778853   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:33.780535   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:34.368947   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:34.384323   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:34.384402   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:34.421138   71929 cri.go:89] found id: ""
	I0717 01:57:34.421171   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.421182   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:34.421190   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:34.421263   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:34.459077   71929 cri.go:89] found id: ""
	I0717 01:57:34.459105   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.459116   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:34.459123   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:34.459180   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:34.492987   71929 cri.go:89] found id: ""
	I0717 01:57:34.493016   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.493027   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:34.493038   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:34.493098   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:34.527801   71929 cri.go:89] found id: ""
	I0717 01:57:34.527827   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.527836   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:34.527841   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:34.527890   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:34.562877   71929 cri.go:89] found id: ""
	I0717 01:57:34.562904   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.562914   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:34.562921   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:34.562981   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:34.599387   71929 cri.go:89] found id: ""
	I0717 01:57:34.599409   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.599417   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:34.599423   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:34.599479   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:34.636087   71929 cri.go:89] found id: ""
	I0717 01:57:34.636118   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.636126   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:34.636132   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:34.636194   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:34.673168   71929 cri.go:89] found id: ""
	I0717 01:57:34.673196   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.673206   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:34.673214   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:34.673226   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:34.712833   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:34.712864   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:34.765926   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:34.765959   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:34.780024   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:34.780049   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:34.863080   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:34.863106   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:34.863122   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:33.803621   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:36.306114   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:35.933050   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:38.432520   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:36.280143   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:38.779168   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:37.446644   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:37.463015   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:37.463090   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:37.499563   71929 cri.go:89] found id: ""
	I0717 01:57:37.499592   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.499601   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:37.499607   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:37.499663   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:37.538516   71929 cri.go:89] found id: ""
	I0717 01:57:37.538543   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.538572   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:37.538579   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:37.538638   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:37.577032   71929 cri.go:89] found id: ""
	I0717 01:57:37.577061   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.577068   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:37.577074   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:37.577129   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:37.613534   71929 cri.go:89] found id: ""
	I0717 01:57:37.613563   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.613574   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:37.613582   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:37.613646   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:37.651346   71929 cri.go:89] found id: ""
	I0717 01:57:37.651370   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.651381   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:37.651389   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:37.651451   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:37.685949   71929 cri.go:89] found id: ""
	I0717 01:57:37.685989   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.686001   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:37.686008   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:37.686068   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:37.721706   71929 cri.go:89] found id: ""
	I0717 01:57:37.721744   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.721752   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:37.721759   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:37.721812   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:37.758948   71929 cri.go:89] found id: ""
	I0717 01:57:37.758976   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.758985   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:37.758994   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:37.759005   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:37.835305   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:37.835334   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:37.835349   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:37.916627   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:37.916660   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:37.956819   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:37.956851   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:38.007596   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:38.007641   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:40.522573   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:40.536850   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:40.536924   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:40.576172   71929 cri.go:89] found id: ""
	I0717 01:57:40.576200   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.576211   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:40.576218   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:40.576277   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:40.611926   71929 cri.go:89] found id: ""
	I0717 01:57:40.611958   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.611969   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:40.611976   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:40.612039   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:40.647225   71929 cri.go:89] found id: ""
	I0717 01:57:40.647251   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.647259   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:40.647265   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:40.647315   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:40.683871   71929 cri.go:89] found id: ""
	I0717 01:57:40.683902   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.683917   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:40.683925   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:40.683999   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:40.720941   71929 cri.go:89] found id: ""
	I0717 01:57:40.720971   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.720982   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:40.720989   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:40.721053   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:40.756695   71929 cri.go:89] found id: ""
	I0717 01:57:40.756728   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.756739   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:40.756746   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:40.756801   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:40.794181   71929 cri.go:89] found id: ""
	I0717 01:57:40.794214   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.794221   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:40.794226   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:40.794281   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:40.830361   71929 cri.go:89] found id: ""
	I0717 01:57:40.830396   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.830407   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:40.830417   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:40.830436   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:40.844827   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:40.844849   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:40.913003   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:40.913021   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:40.913035   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:40.996314   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:40.996348   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:41.041120   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:41.041151   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:38.801850   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:40.802727   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:42.802814   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:40.934130   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:43.432799   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:40.780350   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:43.279200   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:45.279971   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:43.593226   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:43.606395   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:43.606461   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:43.646260   71929 cri.go:89] found id: ""
	I0717 01:57:43.646290   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.646302   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:43.646310   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:43.646368   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:43.681148   71929 cri.go:89] found id: ""
	I0717 01:57:43.681174   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.681182   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:43.681189   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:43.681250   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:43.716568   71929 cri.go:89] found id: ""
	I0717 01:57:43.716595   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.716606   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:43.716613   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:43.716675   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:43.750507   71929 cri.go:89] found id: ""
	I0717 01:57:43.750536   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.750558   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:43.750566   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:43.750627   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:43.787207   71929 cri.go:89] found id: ""
	I0717 01:57:43.787234   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.787244   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:43.787251   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:43.787311   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:43.822997   71929 cri.go:89] found id: ""
	I0717 01:57:43.823034   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.823045   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:43.823052   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:43.823118   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:43.860605   71929 cri.go:89] found id: ""
	I0717 01:57:43.860632   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.860640   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:43.860646   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:43.860702   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:43.897419   71929 cri.go:89] found id: ""
	I0717 01:57:43.897451   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.897463   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:43.897473   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:43.897492   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:43.956361   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:43.956393   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:43.971077   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:43.971104   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:44.045234   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:44.045258   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:44.045275   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:44.122508   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:44.122544   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:46.660516   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:46.675555   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:46.675651   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:46.709264   71929 cri.go:89] found id: ""
	I0717 01:57:46.709291   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.709300   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:46.709306   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:46.709362   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:46.744865   71929 cri.go:89] found id: ""
	I0717 01:57:46.744898   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.744908   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:46.744915   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:46.744971   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:46.785837   71929 cri.go:89] found id: ""
	I0717 01:57:46.785860   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.785870   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:46.785878   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:46.785932   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:46.828801   71929 cri.go:89] found id: ""
	I0717 01:57:46.828832   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.828842   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:46.828849   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:46.828907   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:46.863122   71929 cri.go:89] found id: ""
	I0717 01:57:46.863151   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.863162   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:46.863175   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:46.863232   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:46.900705   71929 cri.go:89] found id: ""
	I0717 01:57:46.900731   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.900739   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:46.900744   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:46.900790   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:46.935774   71929 cri.go:89] found id: ""
	I0717 01:57:46.935816   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.935829   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:46.935840   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:46.935895   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:46.969274   71929 cri.go:89] found id: ""
	I0717 01:57:46.969304   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.969315   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:46.969325   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:46.969339   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:47.040318   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:47.040343   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:47.040358   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:47.119920   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:47.119954   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:47.168818   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:47.168847   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:47.221983   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:47.222034   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:45.303812   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:47.304051   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:45.433020   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:47.932755   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:49.936075   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:47.780328   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:49.781850   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:49.736564   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:49.749966   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:49.750025   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:49.788294   71929 cri.go:89] found id: ""
	I0717 01:57:49.788321   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.788332   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:49.788339   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:49.788396   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:49.826406   71929 cri.go:89] found id: ""
	I0717 01:57:49.826431   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.826440   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:49.826445   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:49.826491   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:49.864978   71929 cri.go:89] found id: ""
	I0717 01:57:49.865005   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.865015   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:49.865020   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:49.865074   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:49.901238   71929 cri.go:89] found id: ""
	I0717 01:57:49.901270   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.901281   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:49.901300   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:49.901366   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:49.937035   71929 cri.go:89] found id: ""
	I0717 01:57:49.937058   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.937065   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:49.937070   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:49.937207   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:49.977793   71929 cri.go:89] found id: ""
	I0717 01:57:49.977816   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.977823   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:49.977828   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:49.977873   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:50.012915   71929 cri.go:89] found id: ""
	I0717 01:57:50.012942   71929 logs.go:276] 0 containers: []
	W0717 01:57:50.012952   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:50.012959   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:50.013025   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:50.049085   71929 cri.go:89] found id: ""
	I0717 01:57:50.049115   71929 logs.go:276] 0 containers: []
	W0717 01:57:50.049127   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:50.049138   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:50.049156   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:50.087521   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:50.087549   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:50.140934   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:50.140978   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:50.156001   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:50.156033   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:50.231780   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:50.231811   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:50.231835   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:49.802916   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:51.803036   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:52.432307   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:54.432384   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:52.278585   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:54.279641   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:52.810064   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:52.823442   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:52.823508   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:52.860753   71929 cri.go:89] found id: ""
	I0717 01:57:52.860778   71929 logs.go:276] 0 containers: []
	W0717 01:57:52.860789   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:52.860797   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:52.860852   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:52.896264   71929 cri.go:89] found id: ""
	I0717 01:57:52.896289   71929 logs.go:276] 0 containers: []
	W0717 01:57:52.896297   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:52.896303   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:52.896349   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:52.932613   71929 cri.go:89] found id: ""
	I0717 01:57:52.932640   71929 logs.go:276] 0 containers: []
	W0717 01:57:52.932649   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:52.932657   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:52.932722   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:52.969691   71929 cri.go:89] found id: ""
	I0717 01:57:52.969720   71929 logs.go:276] 0 containers: []
	W0717 01:57:52.969728   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:52.969734   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:52.969788   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:53.007039   71929 cri.go:89] found id: ""
	I0717 01:57:53.007067   71929 logs.go:276] 0 containers: []
	W0717 01:57:53.007075   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:53.007081   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:53.007135   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:53.047736   71929 cri.go:89] found id: ""
	I0717 01:57:53.047762   71929 logs.go:276] 0 containers: []
	W0717 01:57:53.047772   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:53.047778   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:53.047838   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:53.083192   71929 cri.go:89] found id: ""
	I0717 01:57:53.083216   71929 logs.go:276] 0 containers: []
	W0717 01:57:53.083225   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:53.083230   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:53.083276   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:53.118509   71929 cri.go:89] found id: ""
	I0717 01:57:53.118536   71929 logs.go:276] 0 containers: []
	W0717 01:57:53.118545   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:53.118564   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:53.118589   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:53.203003   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:53.203039   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:53.244602   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:53.244627   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:53.295180   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:53.295216   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:53.310777   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:53.310805   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:53.389412   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:55.890450   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:55.903768   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:55.903843   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:55.944148   71929 cri.go:89] found id: ""
	I0717 01:57:55.944171   71929 logs.go:276] 0 containers: []
	W0717 01:57:55.944179   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:55.944185   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:55.944231   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:55.979945   71929 cri.go:89] found id: ""
	I0717 01:57:55.979970   71929 logs.go:276] 0 containers: []
	W0717 01:57:55.979980   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:55.979987   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:55.980045   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:56.019057   71929 cri.go:89] found id: ""
	I0717 01:57:56.019089   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.019100   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:56.019107   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:56.019162   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:56.054343   71929 cri.go:89] found id: ""
	I0717 01:57:56.054369   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.054378   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:56.054383   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:56.054434   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:56.091150   71929 cri.go:89] found id: ""
	I0717 01:57:56.091179   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.091189   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:56.091197   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:56.091256   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:56.127502   71929 cri.go:89] found id: ""
	I0717 01:57:56.127528   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.127538   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:56.127547   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:56.127602   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:56.167935   71929 cri.go:89] found id: ""
	I0717 01:57:56.167961   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.167972   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:56.167979   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:56.168048   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:56.209501   71929 cri.go:89] found id: ""
	I0717 01:57:56.209527   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.209537   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:56.209547   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:56.209561   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:56.257989   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:56.258023   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:56.272491   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:56.272519   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:56.361622   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:56.361653   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:56.361668   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:56.442953   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:56.442992   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:54.302376   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:56.303297   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:56.933123   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:58.933242   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:56.280399   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:58.779285   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:58.983914   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:58.997215   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:58.997292   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:59.032937   71929 cri.go:89] found id: ""
	I0717 01:57:59.032964   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.032980   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:59.032996   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:59.033057   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:59.067790   71929 cri.go:89] found id: ""
	I0717 01:57:59.067811   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.067819   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:59.067825   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:59.067881   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:59.107659   71929 cri.go:89] found id: ""
	I0717 01:57:59.107689   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.107699   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:59.107705   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:59.107754   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:59.150134   71929 cri.go:89] found id: ""
	I0717 01:57:59.150158   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.150168   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:59.150175   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:59.150235   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:59.192351   71929 cri.go:89] found id: ""
	I0717 01:57:59.192381   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.192391   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:59.192398   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:59.192460   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:59.228177   71929 cri.go:89] found id: ""
	I0717 01:57:59.228202   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.228209   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:59.228215   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:59.228261   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:59.267016   71929 cri.go:89] found id: ""
	I0717 01:57:59.267043   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.267052   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:59.267058   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:59.267109   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:59.302235   71929 cri.go:89] found id: ""
	I0717 01:57:59.302257   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.302263   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:59.302273   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:59.302285   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:59.368453   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:59.368492   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:59.383375   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:59.383399   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:59.454946   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:59.454975   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:59.454992   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:59.539576   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:59.539609   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:02.085516   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:02.099848   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:02.099909   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:02.136835   71929 cri.go:89] found id: ""
	I0717 01:58:02.136859   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.136867   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:02.136872   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:02.136928   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:02.175304   71929 cri.go:89] found id: ""
	I0717 01:58:02.175331   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.175338   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:02.175344   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:02.175389   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:02.210922   71929 cri.go:89] found id: ""
	I0717 01:58:02.210947   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.210955   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:02.210961   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:02.211018   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:02.246952   71929 cri.go:89] found id: ""
	I0717 01:58:02.246983   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.246992   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:02.246999   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:02.247053   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:02.284857   71929 cri.go:89] found id: ""
	I0717 01:58:02.284883   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.284892   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:02.284897   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:02.284944   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:02.322941   71929 cri.go:89] found id: ""
	I0717 01:58:02.322978   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.322999   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:02.323007   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:02.323065   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:02.357904   71929 cri.go:89] found id: ""
	I0717 01:58:02.357932   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.357943   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:02.357950   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:02.358012   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:02.392291   71929 cri.go:89] found id: ""
	I0717 01:58:02.392315   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.392322   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:02.392331   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:02.392346   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:58.802622   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:01.303663   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:01.433212   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:03.433962   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:00.779479   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:02.779619   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:05.279590   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:02.447670   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:02.447704   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:02.462259   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:02.462284   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:02.534304   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:02.534332   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:02.534347   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:02.612757   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:02.612799   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:05.153573   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:05.166702   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:05.166775   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:05.205213   71929 cri.go:89] found id: ""
	I0717 01:58:05.205238   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.205247   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:05.205252   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:05.205305   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:05.242021   71929 cri.go:89] found id: ""
	I0717 01:58:05.242048   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.242057   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:05.242063   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:05.242118   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:05.281862   71929 cri.go:89] found id: ""
	I0717 01:58:05.281889   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.281900   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:05.281908   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:05.281967   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:05.318125   71929 cri.go:89] found id: ""
	I0717 01:58:05.318157   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.318169   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:05.318177   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:05.318244   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:05.352470   71929 cri.go:89] found id: ""
	I0717 01:58:05.352504   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.352516   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:05.352524   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:05.352595   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:05.386692   71929 cri.go:89] found id: ""
	I0717 01:58:05.386722   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.386733   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:05.386741   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:05.386803   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:05.426676   71929 cri.go:89] found id: ""
	I0717 01:58:05.426731   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.426744   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:05.426751   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:05.426811   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:05.467974   71929 cri.go:89] found id: ""
	I0717 01:58:05.468000   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.468010   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:05.468020   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:05.468036   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:05.506769   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:05.506797   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:05.561745   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:05.561782   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:05.576743   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:05.576775   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:05.652856   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:05.652887   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:05.652903   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:03.304109   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:05.803632   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:05.434411   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:07.931796   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:09.932902   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:07.779196   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:09.779591   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:08.244185   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:08.257343   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:08.257420   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:08.297136   71929 cri.go:89] found id: ""
	I0717 01:58:08.297163   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.297174   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:08.297181   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:08.297237   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:08.336099   71929 cri.go:89] found id: ""
	I0717 01:58:08.336121   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.336129   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:08.336135   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:08.336185   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:08.369668   71929 cri.go:89] found id: ""
	I0717 01:58:08.369690   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.369698   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:08.369706   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:08.369756   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:08.405140   71929 cri.go:89] found id: ""
	I0717 01:58:08.405171   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.405179   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:08.405186   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:08.405249   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:08.446296   71929 cri.go:89] found id: ""
	I0717 01:58:08.446319   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.446326   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:08.446331   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:08.446377   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:08.483004   71929 cri.go:89] found id: ""
	I0717 01:58:08.483042   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.483062   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:08.483070   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:08.483139   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:08.520668   71929 cri.go:89] found id: ""
	I0717 01:58:08.520699   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.520710   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:08.520717   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:08.520776   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:08.554711   71929 cri.go:89] found id: ""
	I0717 01:58:08.554734   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.554744   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:08.554752   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:08.554763   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:08.606972   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:08.607004   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:08.621102   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:08.621134   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:08.690424   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:08.690443   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:08.690454   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:08.775151   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:08.775193   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:11.318471   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:11.331875   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:11.331954   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:11.375766   71929 cri.go:89] found id: ""
	I0717 01:58:11.375787   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.375795   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:11.375801   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:11.375863   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:11.417043   71929 cri.go:89] found id: ""
	I0717 01:58:11.417080   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.417103   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:11.417111   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:11.417169   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:11.459462   71929 cri.go:89] found id: ""
	I0717 01:58:11.459487   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.459495   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:11.459500   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:11.459551   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:11.516500   71929 cri.go:89] found id: ""
	I0717 01:58:11.516525   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.516533   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:11.516539   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:11.516590   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:11.573916   71929 cri.go:89] found id: ""
	I0717 01:58:11.573961   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.575159   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:11.575201   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:11.575275   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:11.619446   71929 cri.go:89] found id: ""
	I0717 01:58:11.619477   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.619489   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:11.619497   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:11.619558   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:11.654766   71929 cri.go:89] found id: ""
	I0717 01:58:11.654793   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.654802   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:11.654807   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:11.654859   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:11.690306   71929 cri.go:89] found id: ""
	I0717 01:58:11.690335   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.690346   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:11.690354   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:11.690366   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:11.744470   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:11.744516   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:11.758824   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:11.758856   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:11.841028   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:11.841058   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:11.841076   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:11.923299   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:11.923351   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:08.303010   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:10.303678   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:12.803090   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:11.933148   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:14.433109   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:12.280292   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:14.281580   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:14.466666   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:14.479676   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:14.479740   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:14.517890   71929 cri.go:89] found id: ""
	I0717 01:58:14.517919   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.517931   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:14.517938   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:14.517998   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:14.552891   71929 cri.go:89] found id: ""
	I0717 01:58:14.552918   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.552926   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:14.552931   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:14.552992   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:14.593571   71929 cri.go:89] found id: ""
	I0717 01:58:14.593596   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.593604   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:14.593609   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:14.593662   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:14.628869   71929 cri.go:89] found id: ""
	I0717 01:58:14.628897   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.628907   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:14.628913   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:14.628972   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:14.663558   71929 cri.go:89] found id: ""
	I0717 01:58:14.663586   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.663593   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:14.663599   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:14.663644   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:14.700788   71929 cri.go:89] found id: ""
	I0717 01:58:14.700824   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.700834   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:14.700843   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:14.700903   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:14.737975   71929 cri.go:89] found id: ""
	I0717 01:58:14.738014   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.738025   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:14.738032   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:14.738091   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:14.775419   71929 cri.go:89] found id: ""
	I0717 01:58:14.775443   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.775453   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:14.775465   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:14.775479   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:14.817635   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:14.817661   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:14.870667   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:14.870705   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:14.885208   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:14.885235   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:14.962286   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:14.962318   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:14.962334   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:14.803624   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:17.303944   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:16.434108   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:18.934577   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:16.779538   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:18.780694   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:17.537546   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:17.550258   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:17.550322   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:17.586251   71929 cri.go:89] found id: ""
	I0717 01:58:17.586278   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.586286   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:17.586292   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:17.586348   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:17.620903   71929 cri.go:89] found id: ""
	I0717 01:58:17.620927   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.620935   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:17.620941   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:17.620992   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:17.659292   71929 cri.go:89] found id: ""
	I0717 01:58:17.659319   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.659328   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:17.659334   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:17.659384   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:17.695603   71929 cri.go:89] found id: ""
	I0717 01:58:17.695632   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.695642   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:17.695650   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:17.695711   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:17.731943   71929 cri.go:89] found id: ""
	I0717 01:58:17.731970   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.731978   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:17.731984   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:17.732041   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:17.767257   71929 cri.go:89] found id: ""
	I0717 01:58:17.767284   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.767293   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:17.767299   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:17.767357   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:17.802455   71929 cri.go:89] found id: ""
	I0717 01:58:17.802495   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.802508   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:17.802516   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:17.802602   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:17.839321   71929 cri.go:89] found id: ""
	I0717 01:58:17.839351   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.839362   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:17.839374   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:17.839391   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:17.912269   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:17.912295   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:17.912311   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:17.990005   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:17.990038   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:18.029933   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:18.029960   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:18.081941   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:18.081977   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:20.597325   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:20.611835   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:20.611901   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:20.647899   71929 cri.go:89] found id: ""
	I0717 01:58:20.647922   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.647931   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:20.647936   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:20.647984   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:20.683783   71929 cri.go:89] found id: ""
	I0717 01:58:20.683816   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.683827   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:20.683834   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:20.683892   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:20.721803   71929 cri.go:89] found id: ""
	I0717 01:58:20.721833   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.721844   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:20.721851   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:20.721910   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:20.756148   71929 cri.go:89] found id: ""
	I0717 01:58:20.756177   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.756189   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:20.756196   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:20.756259   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:20.795976   71929 cri.go:89] found id: ""
	I0717 01:58:20.796014   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.796028   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:20.796036   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:20.796095   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:20.833775   71929 cri.go:89] found id: ""
	I0717 01:58:20.833805   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.833816   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:20.833824   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:20.833891   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:20.869138   71929 cri.go:89] found id: ""
	I0717 01:58:20.869163   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.869173   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:20.869180   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:20.869237   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:20.904865   71929 cri.go:89] found id: ""
	I0717 01:58:20.904893   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.904901   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:20.904910   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:20.904920   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:20.947268   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:20.947294   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:20.998541   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:20.998582   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:21.013797   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:21.013828   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:21.085101   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:21.085127   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:21.085141   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:19.804949   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:22.304273   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:21.436176   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:23.933548   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:21.279177   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:23.279599   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:25.279899   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:23.667361   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:23.681768   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:23.681828   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:23.717721   71929 cri.go:89] found id: ""
	I0717 01:58:23.717748   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.717757   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:23.717763   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:23.717827   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:23.752699   71929 cri.go:89] found id: ""
	I0717 01:58:23.752728   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.752738   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:23.752745   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:23.752809   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:23.790914   71929 cri.go:89] found id: ""
	I0717 01:58:23.790944   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.790955   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:23.790962   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:23.791021   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:23.827253   71929 cri.go:89] found id: ""
	I0717 01:58:23.827276   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.827285   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:23.827338   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:23.827392   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:23.864466   71929 cri.go:89] found id: ""
	I0717 01:58:23.864510   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.864520   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:23.864527   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:23.864577   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:23.900734   71929 cri.go:89] found id: ""
	I0717 01:58:23.900775   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.900786   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:23.900794   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:23.900855   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:23.937212   71929 cri.go:89] found id: ""
	I0717 01:58:23.937236   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.937243   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:23.937249   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:23.937304   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:23.973730   71929 cri.go:89] found id: ""
	I0717 01:58:23.973755   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.973764   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:23.973774   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:23.973786   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:24.026122   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:24.026163   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:24.040755   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:24.040784   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:24.112224   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:24.112254   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:24.112277   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:24.195247   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:24.195281   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
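
The block above is one full iteration of minikube's wait-for-control-plane probe: for each expected component it runs "sudo crictl ps -a --quiet --name=<component>" over SSH, treats empty output as "no container was found", and then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status diagnostics. The following is a minimal local Go sketch of that probe loop, for illustration only: it runs crictl directly on the current host rather than on the guest via ssh_runner, and assumes crictl is on PATH.

// probe_containers.go: minimal local sketch of the crictl probe loop shown above.
// The real minikube code runs these commands on the guest over SSH (ssh_runner);
// here they run locally, assuming crictl is installed and sudo is available.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// With --quiet, crictl prints one container ID per line; empty output
		// means no container (running or exited) matches the name filter.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("probe for %q failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
		} else {
			fmt.Printf("%q: %d container(s): %v\n", name, len(ids), ids)
		}
	}
}
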
	I0717 01:58:26.738042   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:26.751545   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:26.751602   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:26.786778   71929 cri.go:89] found id: ""
	I0717 01:58:26.786813   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.786824   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:26.786831   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:26.786889   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:26.828776   71929 cri.go:89] found id: ""
	I0717 01:58:26.828806   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.828818   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:26.828825   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:26.828887   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:26.868439   71929 cri.go:89] found id: ""
	I0717 01:58:26.868468   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.868479   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:26.868486   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:26.868546   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:26.900249   71929 cri.go:89] found id: ""
	I0717 01:58:26.900282   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.900292   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:26.900297   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:26.900344   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:26.933763   71929 cri.go:89] found id: ""
	I0717 01:58:26.933798   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.933808   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:26.933816   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:26.933882   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:26.968681   71929 cri.go:89] found id: ""
	I0717 01:58:26.968712   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.968722   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:26.968729   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:26.968788   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:27.002081   71929 cri.go:89] found id: ""
	I0717 01:58:27.002113   71929 logs.go:276] 0 containers: []
	W0717 01:58:27.002128   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:27.002135   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:27.002196   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:27.035138   71929 cri.go:89] found id: ""
	I0717 01:58:27.035161   71929 logs.go:276] 0 containers: []
	W0717 01:58:27.035170   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:27.035177   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:27.035189   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:27.091207   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:27.091244   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:27.105765   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:27.105793   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:27.175533   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:27.175563   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:27.175580   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:27.260903   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:27.260951   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:24.802002   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:26.803330   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:26.432259   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:28.433226   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:27.280206   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:29.781139   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:29.802451   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:29.816503   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:29.816573   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:29.854887   71929 cri.go:89] found id: ""
	I0717 01:58:29.854921   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.854931   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:29.854936   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:29.854983   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:29.887529   71929 cri.go:89] found id: ""
	I0717 01:58:29.887559   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.887570   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:29.887577   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:29.887638   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:29.924995   71929 cri.go:89] found id: ""
	I0717 01:58:29.925020   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.925028   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:29.925034   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:29.925091   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:29.960064   71929 cri.go:89] found id: ""
	I0717 01:58:29.960092   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.960104   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:29.960111   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:29.960178   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:29.995408   71929 cri.go:89] found id: ""
	I0717 01:58:29.995431   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.995438   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:29.995443   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:29.995494   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:30.028219   71929 cri.go:89] found id: ""
	I0717 01:58:30.028247   71929 logs.go:276] 0 containers: []
	W0717 01:58:30.028254   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:30.028260   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:30.028309   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:30.062529   71929 cri.go:89] found id: ""
	I0717 01:58:30.062576   71929 logs.go:276] 0 containers: []
	W0717 01:58:30.062589   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:30.062597   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:30.062664   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:30.095854   71929 cri.go:89] found id: ""
	I0717 01:58:30.095882   71929 logs.go:276] 0 containers: []
	W0717 01:58:30.095893   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:30.095904   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:30.095919   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:30.148083   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:30.148114   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:30.161861   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:30.161892   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:30.236474   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:30.236503   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:30.236519   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:30.319691   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:30.319720   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:28.804656   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:31.302637   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:30.932659   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:32.934225   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:32.279141   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:34.279312   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
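
The interleaved pod_ready.go lines (processes 71146, 71603, 71522) come from parallel StartStop tests polling whether their metrics-server pod reports the Ready condition. Below is a minimal client-go sketch of such a readiness poll, not minikube's actual helper: the kubeconfig path and the label selector are assumptions for illustration, and the loop polls indefinitely rather than honoring the tests' timeouts.

// pod_ready_sketch.go: minimal sketch of a "pod Ready?" poll similar to the
// pod_ready.go log lines above, using client-go. Kubeconfig path and label
// selector are illustrative assumptions, not minikube's real values.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
		if err == nil {
			for _, p := range pods.Items {
				fmt.Printf("pod %q in \"kube-system\" has Ready=%v\n", p.Name, podReady(&p))
			}
		}
		time.Sleep(2 * time.Second) // the logs above poll on a similar cadence
	}
}
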
	I0717 01:58:32.867821   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:32.881480   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:32.881541   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:32.918289   71929 cri.go:89] found id: ""
	I0717 01:58:32.918316   71929 logs.go:276] 0 containers: []
	W0717 01:58:32.918327   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:32.918335   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:32.918396   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:32.955383   71929 cri.go:89] found id: ""
	I0717 01:58:32.955417   71929 logs.go:276] 0 containers: []
	W0717 01:58:32.955426   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:32.955433   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:32.955498   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:32.990432   71929 cri.go:89] found id: ""
	I0717 01:58:32.990460   71929 logs.go:276] 0 containers: []
	W0717 01:58:32.990467   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:32.990472   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:32.990531   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:33.034653   71929 cri.go:89] found id: ""
	I0717 01:58:33.034685   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.034697   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:33.034703   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:33.034763   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:33.077875   71929 cri.go:89] found id: ""
	I0717 01:58:33.077911   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.077919   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:33.077926   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:33.077988   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:33.114800   71929 cri.go:89] found id: ""
	I0717 01:58:33.114840   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.114852   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:33.114864   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:33.114946   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:33.151095   71929 cri.go:89] found id: ""
	I0717 01:58:33.151229   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.151242   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:33.151249   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:33.151324   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:33.190100   71929 cri.go:89] found id: ""
	I0717 01:58:33.190128   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.190138   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:33.190149   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:33.190163   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:33.271195   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:33.271231   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:33.317539   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:33.317569   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:33.370188   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:33.370224   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:33.385016   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:33.385045   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:33.460017   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
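
Each "describe nodes" attempt above fails with "connection refused" on localhost:8443 because no kube-apiserver container exists yet, so nothing is listening on the apiserver port. A minimal sketch of the same reachability check with a plain TCP dial follows; the host and port simply mirror the error message in the log.

// apiserver_dial.go: minimal sketch of why "connection refused" appears above.
// With no kube-apiserver running, nothing listens on 8443, so a plain TCP dial
// fails the same way kubectl does against localhost:8443.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Prints a "connection refused" error while the apiserver is absent,
		// matching the kubectl stderr captured in the log.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
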
	I0717 01:58:35.960499   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:35.974504   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:35.974583   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:36.008652   71929 cri.go:89] found id: ""
	I0717 01:58:36.008696   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.008704   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:36.008710   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:36.008770   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:36.044068   71929 cri.go:89] found id: ""
	I0717 01:58:36.044097   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.044106   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:36.044113   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:36.044174   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:36.083572   71929 cri.go:89] found id: ""
	I0717 01:58:36.083602   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.083610   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:36.083616   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:36.083682   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:36.116716   71929 cri.go:89] found id: ""
	I0717 01:58:36.116744   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.116753   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:36.116761   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:36.116820   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:36.156042   71929 cri.go:89] found id: ""
	I0717 01:58:36.156069   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.156080   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:36.156087   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:36.156148   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:36.192005   71929 cri.go:89] found id: ""
	I0717 01:58:36.192033   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.192045   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:36.192055   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:36.192116   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:36.228720   71929 cri.go:89] found id: ""
	I0717 01:58:36.228751   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.228763   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:36.228769   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:36.228817   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:36.263835   71929 cri.go:89] found id: ""
	I0717 01:58:36.263862   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.263872   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:36.263882   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:36.263897   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:36.278545   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:36.278609   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:36.361182   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:36.361208   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:36.361225   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:36.447797   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:36.447832   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:36.492167   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:36.492196   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:33.304750   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:35.803867   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:35.432659   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:37.433360   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:39.433481   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:36.282525   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:38.779592   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:39.045613   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:39.058615   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:39.058688   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:39.094625   71929 cri.go:89] found id: ""
	I0717 01:58:39.094672   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.094684   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:39.094692   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:39.094755   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:39.132856   71929 cri.go:89] found id: ""
	I0717 01:58:39.132887   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.132898   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:39.132905   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:39.132966   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:39.171017   71929 cri.go:89] found id: ""
	I0717 01:58:39.171037   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.171044   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:39.171051   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:39.171112   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:39.210146   71929 cri.go:89] found id: ""
	I0717 01:58:39.210176   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.210186   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:39.210193   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:39.210269   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:39.244307   71929 cri.go:89] found id: ""
	I0717 01:58:39.244332   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.244342   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:39.244349   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:39.244411   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:39.279649   71929 cri.go:89] found id: ""
	I0717 01:58:39.279675   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.279682   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:39.279688   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:39.279755   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:39.317699   71929 cri.go:89] found id: ""
	I0717 01:58:39.317726   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.317735   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:39.317742   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:39.317789   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:39.352319   71929 cri.go:89] found id: ""
	I0717 01:58:39.352351   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.352365   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:39.352377   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:39.352392   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:39.404153   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:39.404188   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:39.419796   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:39.419828   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:39.495463   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:39.495485   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:39.495499   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:39.576742   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:39.576795   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:42.132481   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:42.145588   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:42.145658   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:42.181231   71929 cri.go:89] found id: ""
	I0717 01:58:42.181257   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.181265   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:42.181270   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:42.181321   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:42.216876   71929 cri.go:89] found id: ""
	I0717 01:58:42.216905   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.216917   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:42.216923   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:42.216984   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:42.256918   71929 cri.go:89] found id: ""
	I0717 01:58:42.256948   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.256959   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:42.256967   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:42.257022   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:42.291930   71929 cri.go:89] found id: ""
	I0717 01:58:42.291957   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.291964   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:42.291975   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:42.292035   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:42.329927   71929 cri.go:89] found id: ""
	I0717 01:58:42.329954   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.329964   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:42.329970   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:42.330014   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:42.364041   71929 cri.go:89] found id: ""
	I0717 01:58:42.364072   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.364085   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:42.364093   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:42.364150   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:38.302060   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:40.302711   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:42.303560   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:41.437100   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:43.932845   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:40.780109   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:43.280118   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:42.400751   71929 cri.go:89] found id: ""
	I0717 01:58:42.400775   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.400784   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:42.400790   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:42.400840   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:42.438200   71929 cri.go:89] found id: ""
	I0717 01:58:42.438228   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.438240   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:42.438251   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:42.438265   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:42.455268   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:42.455303   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:42.537344   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:42.537368   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:42.537381   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:42.618487   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:42.618522   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:42.661273   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:42.661299   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:45.212631   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:45.226247   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:45.226330   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:45.263067   71929 cri.go:89] found id: ""
	I0717 01:58:45.263098   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.263110   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:45.263117   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:45.263177   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:45.299025   71929 cri.go:89] found id: ""
	I0717 01:58:45.299056   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.299067   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:45.299074   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:45.299137   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:45.346828   71929 cri.go:89] found id: ""
	I0717 01:58:45.346858   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.346868   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:45.346876   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:45.346938   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:45.390879   71929 cri.go:89] found id: ""
	I0717 01:58:45.390905   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.390913   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:45.390918   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:45.390966   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:45.426794   71929 cri.go:89] found id: ""
	I0717 01:58:45.426823   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.426834   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:45.426841   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:45.426902   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:45.463834   71929 cri.go:89] found id: ""
	I0717 01:58:45.463863   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.463873   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:45.463880   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:45.463942   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:45.500660   71929 cri.go:89] found id: ""
	I0717 01:58:45.500689   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.500701   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:45.500708   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:45.500766   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:45.537332   71929 cri.go:89] found id: ""
	I0717 01:58:45.537356   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.537364   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:45.537373   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:45.537388   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:45.551194   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:45.551222   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:45.623863   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:45.623892   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:45.623906   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:45.699740   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:45.699782   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:45.739580   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:45.739613   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:44.803138   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:47.302471   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:46.434311   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:48.933004   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:45.779778   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:48.279595   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:48.300789   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:48.315608   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:48.315667   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:48.353050   71929 cri.go:89] found id: ""
	I0717 01:58:48.353076   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.353084   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:48.353089   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:48.353133   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:48.394789   71929 cri.go:89] found id: ""
	I0717 01:58:48.394817   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.394829   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:48.394837   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:48.394900   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:48.433430   71929 cri.go:89] found id: ""
	I0717 01:58:48.433457   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.433468   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:48.433475   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:48.433530   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:48.467215   71929 cri.go:89] found id: ""
	I0717 01:58:48.467243   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.467254   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:48.467262   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:48.467318   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:48.501087   71929 cri.go:89] found id: ""
	I0717 01:58:48.501120   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.501131   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:48.501138   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:48.501204   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:48.538648   71929 cri.go:89] found id: ""
	I0717 01:58:48.538683   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.538696   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:48.538706   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:48.538762   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:48.573006   71929 cri.go:89] found id: ""
	I0717 01:58:48.573030   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.573040   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:48.573047   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:48.573106   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:48.608779   71929 cri.go:89] found id: ""
	I0717 01:58:48.608803   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.608813   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:48.608824   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:48.608837   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:48.659250   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:48.659290   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:48.673418   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:48.673449   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:48.748175   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:48.748196   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:48.748207   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:48.824238   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:48.824274   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:51.367155   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:51.382458   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:51.382527   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:51.424005   71929 cri.go:89] found id: ""
	I0717 01:58:51.424040   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.424051   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:51.424059   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:51.424117   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:51.463318   71929 cri.go:89] found id: ""
	I0717 01:58:51.463348   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.463357   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:51.463363   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:51.463414   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:51.502261   71929 cri.go:89] found id: ""
	I0717 01:58:51.502290   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.502301   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:51.502309   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:51.502362   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:51.536277   71929 cri.go:89] found id: ""
	I0717 01:58:51.536308   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.536319   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:51.536327   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:51.536392   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:51.580598   71929 cri.go:89] found id: ""
	I0717 01:58:51.580629   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.580640   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:51.580648   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:51.580726   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:51.618666   71929 cri.go:89] found id: ""
	I0717 01:58:51.618690   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.618697   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:51.618702   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:51.618747   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:51.654742   71929 cri.go:89] found id: ""
	I0717 01:58:51.654777   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.654790   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:51.654799   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:51.654863   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:51.698006   71929 cri.go:89] found id: ""
	I0717 01:58:51.698034   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.698043   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:51.698051   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:51.698062   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:51.754812   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:51.754852   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:51.771887   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:51.771919   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:51.859627   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:51.859657   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:51.859675   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:51.946633   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:51.946673   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:49.302540   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:51.803884   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:51.433981   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:53.933306   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:50.781428   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:53.279780   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:54.494188   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:54.509111   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:54.509190   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:54.546424   71929 cri.go:89] found id: ""
	I0717 01:58:54.546454   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.546464   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:54.546471   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:54.546532   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:54.586811   71929 cri.go:89] found id: ""
	I0717 01:58:54.586841   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.586853   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:54.586860   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:54.586918   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:54.627350   71929 cri.go:89] found id: ""
	I0717 01:58:54.627375   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.627383   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:54.627388   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:54.627438   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:54.665901   71929 cri.go:89] found id: ""
	I0717 01:58:54.665941   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.665954   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:54.665974   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:54.666041   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:54.702921   71929 cri.go:89] found id: ""
	I0717 01:58:54.702948   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.702958   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:54.702965   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:54.703027   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:54.737378   71929 cri.go:89] found id: ""
	I0717 01:58:54.737406   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.737414   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:54.737421   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:54.737469   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:54.771924   71929 cri.go:89] found id: ""
	I0717 01:58:54.771954   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.771964   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:54.771971   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:54.772055   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:54.812939   71929 cri.go:89] found id: ""
	I0717 01:58:54.812972   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.812983   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:54.812995   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:54.813010   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:54.862979   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:54.863013   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:54.877467   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:54.877504   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:54.953924   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:54.953950   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:54.953963   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:55.032019   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:55.032052   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:54.302727   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:56.311656   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:55.933968   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:58.432611   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:55.778263   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:57.781311   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:00.278937   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:57.573130   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:57.591689   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:57.591762   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:57.626444   71929 cri.go:89] found id: ""
	I0717 01:58:57.626469   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.626479   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:57.626486   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:57.626570   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:57.661280   71929 cri.go:89] found id: ""
	I0717 01:58:57.661305   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.661314   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:57.661321   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:57.661376   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:57.695678   71929 cri.go:89] found id: ""
	I0717 01:58:57.695703   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.695711   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:57.695717   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:57.695762   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:57.729705   71929 cri.go:89] found id: ""
	I0717 01:58:57.729734   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.729742   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:57.729748   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:57.729804   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:57.763338   71929 cri.go:89] found id: ""
	I0717 01:58:57.763365   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.763373   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:57.763387   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:57.763433   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:57.800576   71929 cri.go:89] found id: ""
	I0717 01:58:57.800600   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.800608   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:57.800615   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:57.800701   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:57.842401   71929 cri.go:89] found id: ""
	I0717 01:58:57.842428   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.842439   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:57.842446   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:57.842503   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:57.880355   71929 cri.go:89] found id: ""
	I0717 01:58:57.880379   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.880387   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:57.880395   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:57.880412   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:57.938215   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:57.938252   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:57.952835   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:57.952876   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:58.027203   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:58.027231   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:58.027246   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:58.108442   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:58.108483   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:00.648580   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:00.662596   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:00.662667   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:00.696315   71929 cri.go:89] found id: ""
	I0717 01:59:00.696342   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.696351   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:00.696356   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:00.696411   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:00.732117   71929 cri.go:89] found id: ""
	I0717 01:59:00.732147   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.732158   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:00.732164   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:00.732212   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:00.768747   71929 cri.go:89] found id: ""
	I0717 01:59:00.768779   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.768790   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:00.768797   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:00.768856   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:00.807557   71929 cri.go:89] found id: ""
	I0717 01:59:00.807585   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.807592   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:00.807598   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:00.807651   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:00.844127   71929 cri.go:89] found id: ""
	I0717 01:59:00.844152   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.844161   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:00.844166   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:00.844222   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:00.879565   71929 cri.go:89] found id: ""
	I0717 01:59:00.879590   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.879597   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:00.879613   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:00.879684   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:00.917352   71929 cri.go:89] found id: ""
	I0717 01:59:00.917379   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.917387   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:00.917393   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:00.917440   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:00.952603   71929 cri.go:89] found id: ""
	I0717 01:59:00.952630   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.952637   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:00.952647   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:00.952688   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:01.007203   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:01.007242   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:01.021476   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:01.021512   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:01.102283   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:01.102306   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:01.102320   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:01.175736   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:01.175771   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:58.803034   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:00.803718   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:00.932781   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:03.433188   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:02.281269   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:04.779257   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:03.717612   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:03.732446   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:03.732511   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:03.767485   71929 cri.go:89] found id: ""
	I0717 01:59:03.767519   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.767533   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:03.767542   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:03.767607   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:03.803961   71929 cri.go:89] found id: ""
	I0717 01:59:03.803989   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.804000   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:03.804007   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:03.804074   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:03.842734   71929 cri.go:89] found id: ""
	I0717 01:59:03.842768   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.842780   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:03.842788   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:03.842915   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:03.883571   71929 cri.go:89] found id: ""
	I0717 01:59:03.883598   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.883608   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:03.883616   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:03.883682   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:03.922037   71929 cri.go:89] found id: ""
	I0717 01:59:03.922065   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.922076   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:03.922084   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:03.922143   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:03.961135   71929 cri.go:89] found id: ""
	I0717 01:59:03.961165   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.961176   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:03.961183   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:03.961244   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:03.995542   71929 cri.go:89] found id: ""
	I0717 01:59:03.995570   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.995580   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:03.995589   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:03.995647   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:04.030142   71929 cri.go:89] found id: ""
	I0717 01:59:04.030170   71929 logs.go:276] 0 containers: []
	W0717 01:59:04.030178   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:04.030187   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:04.030198   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:04.110329   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:04.110366   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:04.152194   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:04.152224   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:04.204012   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:04.204048   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:04.218261   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:04.218291   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:04.290786   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:06.791166   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:06.806662   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:06.806722   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:06.841447   71929 cri.go:89] found id: ""
	I0717 01:59:06.841476   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.841486   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:06.841494   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:06.841554   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:06.879920   71929 cri.go:89] found id: ""
	I0717 01:59:06.879956   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.879971   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:06.879976   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:06.880033   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:06.914436   71929 cri.go:89] found id: ""
	I0717 01:59:06.914465   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.914476   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:06.914484   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:06.914566   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:06.952098   71929 cri.go:89] found id: ""
	I0717 01:59:06.952127   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.952135   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:06.952141   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:06.952187   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:06.988054   71929 cri.go:89] found id: ""
	I0717 01:59:06.988085   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.988096   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:06.988103   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:06.988168   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:07.026633   71929 cri.go:89] found id: ""
	I0717 01:59:07.026658   71929 logs.go:276] 0 containers: []
	W0717 01:59:07.026670   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:07.026676   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:07.026732   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:07.064433   71929 cri.go:89] found id: ""
	I0717 01:59:07.064454   71929 logs.go:276] 0 containers: []
	W0717 01:59:07.064463   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:07.064468   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:07.064545   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:07.108352   71929 cri.go:89] found id: ""
	I0717 01:59:07.108385   71929 logs.go:276] 0 containers: []
	W0717 01:59:07.108396   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:07.108410   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:07.108428   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:07.163554   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:07.163593   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:07.177221   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:07.177249   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:07.249712   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:07.249735   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:07.249748   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:07.333011   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:07.333044   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:03.303048   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:05.304001   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:07.314317   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:05.932370   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:07.933031   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:09.933728   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:06.780342   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:09.279683   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:09.873187   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:09.887579   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:09.887658   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:09.923675   71929 cri.go:89] found id: ""
	I0717 01:59:09.923706   71929 logs.go:276] 0 containers: []
	W0717 01:59:09.923716   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:09.923724   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:09.923789   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:09.961248   71929 cri.go:89] found id: ""
	I0717 01:59:09.961278   71929 logs.go:276] 0 containers: []
	W0717 01:59:09.961288   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:09.961296   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:09.961354   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:10.000069   71929 cri.go:89] found id: ""
	I0717 01:59:10.000094   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.000101   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:10.000107   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:10.000157   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:10.036784   71929 cri.go:89] found id: ""
	I0717 01:59:10.036808   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.036815   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:10.036820   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:10.036869   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:10.072746   71929 cri.go:89] found id: ""
	I0717 01:59:10.072778   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.072789   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:10.072796   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:10.072856   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:10.109520   71929 cri.go:89] found id: ""
	I0717 01:59:10.109544   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.109552   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:10.109557   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:10.109608   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:10.142521   71929 cri.go:89] found id: ""
	I0717 01:59:10.142565   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.142576   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:10.142584   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:10.142647   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:10.175772   71929 cri.go:89] found id: ""
	I0717 01:59:10.175800   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.175812   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:10.175823   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:10.175837   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:10.213534   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:10.213561   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:10.266449   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:10.266485   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:10.282204   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:10.282234   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:10.353974   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:10.354004   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:10.354017   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:09.802047   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:11.802200   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:12.433722   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:14.932285   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:11.780394   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:13.781691   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:12.936509   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:12.951547   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:12.951616   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:12.987833   71929 cri.go:89] found id: ""
	I0717 01:59:12.987860   71929 logs.go:276] 0 containers: []
	W0717 01:59:12.987868   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:12.987873   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:12.987922   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:13.026500   71929 cri.go:89] found id: ""
	I0717 01:59:13.026529   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.026539   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:13.026546   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:13.026625   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:13.061631   71929 cri.go:89] found id: ""
	I0717 01:59:13.061664   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.061674   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:13.061682   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:13.061745   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:13.099449   71929 cri.go:89] found id: ""
	I0717 01:59:13.099476   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.099487   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:13.099494   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:13.099565   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:13.137271   71929 cri.go:89] found id: ""
	I0717 01:59:13.137299   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.137309   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:13.137317   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:13.137384   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:13.174432   71929 cri.go:89] found id: ""
	I0717 01:59:13.174462   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.174472   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:13.174478   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:13.174539   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:13.212820   71929 cri.go:89] found id: ""
	I0717 01:59:13.212845   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.212855   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:13.212865   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:13.212930   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:13.245961   71929 cri.go:89] found id: ""
	I0717 01:59:13.245993   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.246004   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:13.246014   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:13.246028   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:13.284801   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:13.284828   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:13.338476   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:13.338511   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:13.352751   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:13.352777   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:13.434001   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:13.434035   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:13.434050   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:16.022525   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:16.036863   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:16.036941   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:16.074370   71929 cri.go:89] found id: ""
	I0717 01:59:16.074398   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.074409   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:16.074416   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:16.074476   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:16.112239   71929 cri.go:89] found id: ""
	I0717 01:59:16.112267   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.112276   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:16.112281   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:16.112329   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:16.147398   71929 cri.go:89] found id: ""
	I0717 01:59:16.147422   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.147429   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:16.147435   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:16.147490   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:16.182112   71929 cri.go:89] found id: ""
	I0717 01:59:16.182141   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.182149   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:16.182155   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:16.182203   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:16.219134   71929 cri.go:89] found id: ""
	I0717 01:59:16.219163   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.219174   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:16.219182   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:16.219238   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:16.255892   71929 cri.go:89] found id: ""
	I0717 01:59:16.255924   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.255934   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:16.255943   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:16.256003   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:16.291202   71929 cri.go:89] found id: ""
	I0717 01:59:16.291228   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.291238   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:16.291245   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:16.291304   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:16.330748   71929 cri.go:89] found id: ""
	I0717 01:59:16.330779   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.330790   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:16.330801   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:16.330815   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:16.344628   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:16.344668   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:16.415735   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:16.415761   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:16.415775   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:16.499411   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:16.499449   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:16.541244   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:16.541270   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:13.802477   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:16.311229   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:16.933493   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:18.934299   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:16.279421   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:18.778998   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:19.095060   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:19.107920   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:19.107976   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:19.143446   71929 cri.go:89] found id: ""
	I0717 01:59:19.143476   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.143485   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:19.143490   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:19.143550   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:19.179216   71929 cri.go:89] found id: ""
	I0717 01:59:19.179247   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.179259   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:19.179266   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:19.179317   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:19.212468   71929 cri.go:89] found id: ""
	I0717 01:59:19.212498   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.212508   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:19.212516   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:19.212574   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:19.245019   71929 cri.go:89] found id: ""
	I0717 01:59:19.245047   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.245058   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:19.245065   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:19.245123   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:19.278430   71929 cri.go:89] found id: ""
	I0717 01:59:19.278457   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.278467   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:19.278474   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:19.278530   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:19.317685   71929 cri.go:89] found id: ""
	I0717 01:59:19.317714   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.317722   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:19.317729   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:19.317783   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:19.352938   71929 cri.go:89] found id: ""
	I0717 01:59:19.352974   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.352986   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:19.353000   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:19.353052   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:19.387238   71929 cri.go:89] found id: ""
	I0717 01:59:19.387272   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.387283   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:19.387295   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:19.387314   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:19.440138   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:19.440171   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:19.456372   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:19.456402   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:19.527881   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:19.527906   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:19.527921   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:19.611903   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:19.611937   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:22.160422   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:22.172802   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:22.172862   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:22.209283   71929 cri.go:89] found id: ""
	I0717 01:59:22.209315   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.209327   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:22.209335   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:22.209396   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:22.243927   71929 cri.go:89] found id: ""
	I0717 01:59:22.243955   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.243965   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:22.243972   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:22.244022   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:22.276730   71929 cri.go:89] found id: ""
	I0717 01:59:22.276754   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.276761   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:22.276767   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:22.276814   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:22.319378   71929 cri.go:89] found id: ""
	I0717 01:59:22.319407   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.319418   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:22.319425   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:22.319482   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:22.358272   71929 cri.go:89] found id: ""
	I0717 01:59:22.358298   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.358307   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:22.358312   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:22.358362   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:22.395358   71929 cri.go:89] found id: ""
	I0717 01:59:22.395393   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.395405   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:22.395414   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:22.395477   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:18.802881   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:21.303532   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:21.433636   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:23.932345   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:21.279596   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:23.279700   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:25.280649   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:22.435158   71929 cri.go:89] found id: ""
	I0717 01:59:22.435184   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.435194   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:22.435201   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:22.435248   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:22.471553   71929 cri.go:89] found id: ""
	I0717 01:59:22.471588   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.471595   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:22.471604   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:22.471616   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:22.523133   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:22.523169   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:22.539212   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:22.539246   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:22.615707   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:22.615729   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:22.615744   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:22.696758   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:22.696795   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:25.238496   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:25.252882   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:25.252946   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:25.290173   71929 cri.go:89] found id: ""
	I0717 01:59:25.290197   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.290205   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:25.290210   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:25.290263   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:25.325926   71929 cri.go:89] found id: ""
	I0717 01:59:25.325968   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.325979   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:25.325985   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:25.326032   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:25.358310   71929 cri.go:89] found id: ""
	I0717 01:59:25.358362   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.358371   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:25.358377   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:25.358426   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:25.393575   71929 cri.go:89] found id: ""
	I0717 01:59:25.393605   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.393615   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:25.393622   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:25.393684   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:25.429357   71929 cri.go:89] found id: ""
	I0717 01:59:25.429448   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.429466   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:25.429474   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:25.429546   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:25.466992   71929 cri.go:89] found id: ""
	I0717 01:59:25.467020   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.467028   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:25.467034   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:25.467080   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:25.503545   71929 cri.go:89] found id: ""
	I0717 01:59:25.503575   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.503587   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:25.503594   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:25.503643   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:25.542957   71929 cri.go:89] found id: ""
	I0717 01:59:25.542983   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.542993   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:25.543003   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:25.543015   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:25.598813   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:25.598852   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:25.618060   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:25.618098   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:25.690079   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:25.690105   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:25.690119   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:25.765956   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:25.765994   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:23.803366   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:25.804525   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:25.932447   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:27.933276   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:29.933461   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:27.286160   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:29.781318   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:28.311715   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:28.325493   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:28.325554   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:28.365783   71929 cri.go:89] found id: ""
	I0717 01:59:28.365810   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.365821   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:28.365829   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:28.365885   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:28.401847   71929 cri.go:89] found id: ""
	I0717 01:59:28.401875   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.401883   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:28.401895   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:28.401954   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:28.442236   71929 cri.go:89] found id: ""
	I0717 01:59:28.442261   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.442272   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:28.442278   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:28.442340   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:28.476832   71929 cri.go:89] found id: ""
	I0717 01:59:28.476857   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.476866   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:28.476873   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:28.476928   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:28.512040   71929 cri.go:89] found id: ""
	I0717 01:59:28.512068   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.512075   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:28.512081   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:28.512136   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:28.547516   71929 cri.go:89] found id: ""
	I0717 01:59:28.547547   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.547558   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:28.547566   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:28.547625   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:28.580380   71929 cri.go:89] found id: ""
	I0717 01:59:28.580406   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.580417   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:28.580427   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:28.580485   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:28.616029   71929 cri.go:89] found id: ""
	I0717 01:59:28.616059   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.616069   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:28.616080   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:28.616095   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:28.670188   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:28.670230   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:28.687315   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:28.687355   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:28.763591   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:28.763612   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:28.763627   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:28.848925   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:28.848959   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:31.388294   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:31.404748   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:31.404814   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:31.446437   71929 cri.go:89] found id: ""
	I0717 01:59:31.446468   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.446478   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:31.446484   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:31.446531   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:31.487797   71929 cri.go:89] found id: ""
	I0717 01:59:31.487828   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.487840   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:31.487847   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:31.487895   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:31.525327   71929 cri.go:89] found id: ""
	I0717 01:59:31.525354   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.525368   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:31.525375   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:31.525436   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:31.564106   71929 cri.go:89] found id: ""
	I0717 01:59:31.564154   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.564166   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:31.564173   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:31.564234   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:31.603345   71929 cri.go:89] found id: ""
	I0717 01:59:31.603374   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.603385   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:31.603393   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:31.603456   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:31.641727   71929 cri.go:89] found id: ""
	I0717 01:59:31.641753   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.641769   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:31.641776   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:31.641837   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:31.680825   71929 cri.go:89] found id: ""
	I0717 01:59:31.680856   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.680866   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:31.680873   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:31.680930   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:31.714325   71929 cri.go:89] found id: ""
	I0717 01:59:31.714348   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.714355   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:31.714363   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:31.714374   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:31.765899   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:31.765934   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:31.781417   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:31.781447   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:31.857586   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:31.857607   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:31.857622   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:31.937171   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:31.937197   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:28.304014   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:30.802684   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:32.803604   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:31.933945   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:34.435259   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:31.785641   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:34.279814   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:34.478176   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:34.492153   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:34.492223   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:34.526959   71929 cri.go:89] found id: ""
	I0717 01:59:34.526984   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.526998   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:34.527006   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:34.527064   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:34.564485   71929 cri.go:89] found id: ""
	I0717 01:59:34.564534   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.564546   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:34.564591   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:34.564706   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:34.604611   71929 cri.go:89] found id: ""
	I0717 01:59:34.604637   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.604649   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:34.604657   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:34.604718   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:34.640851   71929 cri.go:89] found id: ""
	I0717 01:59:34.640882   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.640892   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:34.640897   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:34.640956   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:34.675828   71929 cri.go:89] found id: ""
	I0717 01:59:34.675856   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.675863   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:34.675869   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:34.675918   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:34.710468   71929 cri.go:89] found id: ""
	I0717 01:59:34.710496   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.710506   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:34.710514   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:34.710595   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:34.749218   71929 cri.go:89] found id: ""
	I0717 01:59:34.749249   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.749260   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:34.749267   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:34.749328   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:34.784934   71929 cri.go:89] found id: ""
	I0717 01:59:34.784969   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.784979   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:34.784990   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:34.785006   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:34.799836   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:34.799870   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:34.870218   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:34.870239   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:34.870254   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:34.948782   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:34.948817   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:34.992295   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:34.992324   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:34.803649   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:37.304530   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:36.933199   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:39.432504   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:36.280185   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:38.280499   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:37.545759   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:37.559648   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:37.559724   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:37.596642   71929 cri.go:89] found id: ""
	I0717 01:59:37.596696   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.596707   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:37.596715   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:37.596770   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:37.637251   71929 cri.go:89] found id: ""
	I0717 01:59:37.637283   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.637312   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:37.637318   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:37.637372   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:37.672811   71929 cri.go:89] found id: ""
	I0717 01:59:37.672839   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.672847   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:37.672852   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:37.672909   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:37.706864   71929 cri.go:89] found id: ""
	I0717 01:59:37.706904   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.706916   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:37.706923   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:37.706975   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:37.747539   71929 cri.go:89] found id: ""
	I0717 01:59:37.747567   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.747576   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:37.747581   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:37.747630   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:37.785229   71929 cri.go:89] found id: ""
	I0717 01:59:37.785251   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.785260   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:37.785268   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:37.785333   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:37.840428   71929 cri.go:89] found id: ""
	I0717 01:59:37.840460   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.840471   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:37.840477   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:37.840533   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:37.876888   71929 cri.go:89] found id: ""
	I0717 01:59:37.876916   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.876924   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:37.876932   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:37.876942   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:37.926161   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:37.926197   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:37.940857   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:37.940885   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:38.019210   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:38.019232   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:38.019245   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:38.112428   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:38.112471   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:40.657215   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:40.670824   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:40.670900   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:40.704008   71929 cri.go:89] found id: ""
	I0717 01:59:40.704030   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.704040   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:40.704048   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:40.704102   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:40.739544   71929 cri.go:89] found id: ""
	I0717 01:59:40.739576   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.739587   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:40.739595   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:40.739664   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:40.773132   71929 cri.go:89] found id: ""
	I0717 01:59:40.773159   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.773169   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:40.773177   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:40.773239   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:40.810162   71929 cri.go:89] found id: ""
	I0717 01:59:40.810183   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.810193   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:40.810200   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:40.810256   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:40.844797   71929 cri.go:89] found id: ""
	I0717 01:59:40.844829   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.844840   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:40.844847   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:40.844918   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:40.884444   71929 cri.go:89] found id: ""
	I0717 01:59:40.884468   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.884476   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:40.884482   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:40.884544   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:40.919413   71929 cri.go:89] found id: ""
	I0717 01:59:40.919437   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.919445   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:40.919451   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:40.919505   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:40.961870   71929 cri.go:89] found id: ""
	I0717 01:59:40.961894   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.961902   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:40.961910   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:40.961921   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:41.010600   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:41.010638   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:41.025557   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:41.025589   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:41.100100   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:41.100123   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:41.100135   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:41.185809   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:41.185850   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:39.802297   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:41.802803   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:41.432998   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:43.433412   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:40.779796   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:42.781981   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:43.279014   71522 pod_ready.go:81] duration metric: took 4m0.006085275s for pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace to be "Ready" ...
	E0717 01:59:43.279043   71522 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 01:59:43.279053   71522 pod_ready.go:38] duration metric: took 4m2.008175999s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:59:43.279073   71522 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:59:43.279105   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:43.279162   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:43.327674   71522 cri.go:89] found id: "3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:43.327725   71522 cri.go:89] found id: ""
	I0717 01:59:43.327734   71522 logs.go:276] 1 containers: [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82]
	I0717 01:59:43.327801   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.332247   71522 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:43.332303   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:43.371598   71522 cri.go:89] found id: "5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:43.371627   71522 cri.go:89] found id: ""
	I0717 01:59:43.371635   71522 logs.go:276] 1 containers: [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9]
	I0717 01:59:43.371683   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.377203   71522 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:43.377265   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:43.416351   71522 cri.go:89] found id: "92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:43.416374   71522 cri.go:89] found id: "4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:43.416380   71522 cri.go:89] found id: ""
	I0717 01:59:43.416389   71522 logs.go:276] 2 containers: [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013]
	I0717 01:59:43.416448   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.420909   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.425228   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:43.425278   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:43.472117   71522 cri.go:89] found id: "1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:43.472139   71522 cri.go:89] found id: ""
	I0717 01:59:43.472147   71522 logs.go:276] 1 containers: [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8]
	I0717 01:59:43.472194   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.476632   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:43.476698   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:43.517337   71522 cri.go:89] found id: "6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:43.517360   71522 cri.go:89] found id: ""
	I0717 01:59:43.517369   71522 logs.go:276] 1 containers: [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a]
	I0717 01:59:43.517430   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.522437   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:43.522519   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:43.564511   71522 cri.go:89] found id: "e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:43.564530   71522 cri.go:89] found id: ""
	I0717 01:59:43.564537   71522 logs.go:276] 1 containers: [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3]
	I0717 01:59:43.564595   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.570357   71522 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:43.570440   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:43.615389   71522 cri.go:89] found id: ""
	I0717 01:59:43.615418   71522 logs.go:276] 0 containers: []
	W0717 01:59:43.615427   71522 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:43.615433   71522 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:59:43.615543   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:59:43.652739   71522 cri.go:89] found id: "e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:43.652764   71522 cri.go:89] found id: "abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:43.652769   71522 cri.go:89] found id: ""
	I0717 01:59:43.652777   71522 logs.go:276] 2 containers: [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299]
	I0717 01:59:43.652835   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.657323   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.661682   71522 logs.go:123] Gathering logs for kube-controller-manager [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3] ...
	I0717 01:59:43.661702   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:43.714396   71522 logs.go:123] Gathering logs for container status ...
	I0717 01:59:43.714434   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:43.761072   71522 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:43.761110   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:43.825934   71522 logs.go:123] Gathering logs for coredns [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7] ...
	I0717 01:59:43.825963   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:43.871287   71522 logs.go:123] Gathering logs for coredns [4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013] ...
	I0717 01:59:43.871316   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:43.907488   71522 logs.go:123] Gathering logs for kube-scheduler [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8] ...
	I0717 01:59:43.907517   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:43.949876   71522 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:43.949903   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:59:44.093084   71522 logs.go:123] Gathering logs for kube-apiserver [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82] ...
	I0717 01:59:44.093116   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:44.153161   71522 logs.go:123] Gathering logs for kube-proxy [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a] ...
	I0717 01:59:44.153206   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:44.197219   71522 logs.go:123] Gathering logs for storage-provisioner [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77] ...
	I0717 01:59:44.197249   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:44.242441   71522 logs.go:123] Gathering logs for storage-provisioner [abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299] ...
	I0717 01:59:44.242478   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:44.288622   71522 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:44.288646   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:44.839680   71522 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:44.839712   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:44.854119   71522 logs.go:123] Gathering logs for etcd [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9] ...
	I0717 01:59:44.854145   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:43.725542   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:43.739304   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:43.739379   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:43.776754   71929 cri.go:89] found id: ""
	I0717 01:59:43.776783   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.776795   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:43.776802   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:43.776863   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:43.819729   71929 cri.go:89] found id: ""
	I0717 01:59:43.819756   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.819767   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:43.819774   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:43.819828   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:43.860283   71929 cri.go:89] found id: ""
	I0717 01:59:43.860311   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.860322   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:43.860329   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:43.860391   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:43.898684   71929 cri.go:89] found id: ""
	I0717 01:59:43.898712   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.898722   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:43.898729   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:43.898788   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:43.942996   71929 cri.go:89] found id: ""
	I0717 01:59:43.943019   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.943026   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:43.943031   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:43.943077   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:43.981799   71929 cri.go:89] found id: ""
	I0717 01:59:43.981828   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.981839   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:43.981846   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:43.981903   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:44.018222   71929 cri.go:89] found id: ""
	I0717 01:59:44.018252   71929 logs.go:276] 0 containers: []
	W0717 01:59:44.018262   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:44.018267   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:44.018326   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:44.056264   71929 cri.go:89] found id: ""
	I0717 01:59:44.056293   71929 logs.go:276] 0 containers: []
	W0717 01:59:44.056304   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:44.056315   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:44.056334   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:44.172061   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:44.172108   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:44.219597   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:44.219627   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:44.272299   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:44.272334   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:44.287811   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:44.287848   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:44.379183   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:46.879529   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:46.893142   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:46.893207   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:46.929073   71929 cri.go:89] found id: ""
	I0717 01:59:46.929101   71929 logs.go:276] 0 containers: []
	W0717 01:59:46.929113   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:46.929121   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:46.929173   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:46.963697   71929 cri.go:89] found id: ""
	I0717 01:59:46.963725   71929 logs.go:276] 0 containers: []
	W0717 01:59:46.963733   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:46.963739   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:46.963798   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:47.000697   71929 cri.go:89] found id: ""
	I0717 01:59:47.000730   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.000747   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:47.000752   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:47.000804   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:47.037270   71929 cri.go:89] found id: ""
	I0717 01:59:47.037304   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.037316   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:47.037323   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:47.037382   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:47.072210   71929 cri.go:89] found id: ""
	I0717 01:59:47.072238   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.072249   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:47.072256   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:47.072321   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:47.108404   71929 cri.go:89] found id: ""
	I0717 01:59:47.108432   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.108443   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:47.108451   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:47.108535   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:47.146122   71929 cri.go:89] found id: ""
	I0717 01:59:47.146151   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.146162   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:47.146169   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:47.146225   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:47.187418   71929 cri.go:89] found id: ""
	I0717 01:59:47.187446   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.187455   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:47.187466   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:47.187481   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:47.201023   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:47.201053   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:47.269851   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:47.269878   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:47.269894   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:47.356417   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:47.356456   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:43.803326   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:46.302939   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:45.433688   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:47.933271   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:49.934222   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:47.403005   71522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:47.420984   71522 api_server.go:72] duration metric: took 4m13.369710312s to wait for apiserver process to appear ...
	I0717 01:59:47.421011   71522 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:59:47.421065   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:47.421128   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:47.465800   71522 cri.go:89] found id: "3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:47.465830   71522 cri.go:89] found id: ""
	I0717 01:59:47.465838   71522 logs.go:276] 1 containers: [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82]
	I0717 01:59:47.465890   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.470561   71522 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:47.470628   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:47.513302   71522 cri.go:89] found id: "5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:47.513321   71522 cri.go:89] found id: ""
	I0717 01:59:47.513328   71522 logs.go:276] 1 containers: [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9]
	I0717 01:59:47.513373   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.517668   71522 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:47.517720   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:47.563466   71522 cri.go:89] found id: "92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:47.563491   71522 cri.go:89] found id: "4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:47.563495   71522 cri.go:89] found id: ""
	I0717 01:59:47.563502   71522 logs.go:276] 2 containers: [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013]
	I0717 01:59:47.563563   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.568058   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.572381   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:47.572432   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:47.618919   71522 cri.go:89] found id: "1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:47.618944   71522 cri.go:89] found id: ""
	I0717 01:59:47.618953   71522 logs.go:276] 1 containers: [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8]
	I0717 01:59:47.619014   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.623475   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:47.623525   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:47.662294   71522 cri.go:89] found id: "6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:47.662321   71522 cri.go:89] found id: ""
	I0717 01:59:47.662329   71522 logs.go:276] 1 containers: [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a]
	I0717 01:59:47.662384   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.666740   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:47.666806   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:47.708962   71522 cri.go:89] found id: "e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:47.708990   71522 cri.go:89] found id: ""
	I0717 01:59:47.708999   71522 logs.go:276] 1 containers: [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3]
	I0717 01:59:47.709058   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.713551   71522 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:47.713628   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:47.750766   71522 cri.go:89] found id: ""
	I0717 01:59:47.750797   71522 logs.go:276] 0 containers: []
	W0717 01:59:47.750807   71522 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:47.750814   71522 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:59:47.750878   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:59:47.786664   71522 cri.go:89] found id: "e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:47.786687   71522 cri.go:89] found id: "abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:47.786692   71522 cri.go:89] found id: ""
	I0717 01:59:47.786699   71522 logs.go:276] 2 containers: [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299]
	I0717 01:59:47.786761   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.791460   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.795553   71522 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:47.795576   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:48.298229   71522 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:48.298271   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:48.313542   71522 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:48.313573   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:59:48.429625   71522 logs.go:123] Gathering logs for coredns [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7] ...
	I0717 01:59:48.429663   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:48.475651   71522 logs.go:123] Gathering logs for kube-proxy [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a] ...
	I0717 01:59:48.475677   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:48.514075   71522 logs.go:123] Gathering logs for coredns [4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013] ...
	I0717 01:59:48.514101   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:48.550152   71522 logs.go:123] Gathering logs for kube-scheduler [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8] ...
	I0717 01:59:48.550182   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:48.592743   71522 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:48.592771   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:48.652433   71522 logs.go:123] Gathering logs for storage-provisioner [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77] ...
	I0717 01:59:48.652464   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:48.699763   71522 logs.go:123] Gathering logs for storage-provisioner [abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299] ...
	I0717 01:59:48.699796   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:48.737467   71522 logs.go:123] Gathering logs for kube-apiserver [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82] ...
	I0717 01:59:48.737504   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:48.788389   71522 logs.go:123] Gathering logs for etcd [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9] ...
	I0717 01:59:48.788425   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:48.842323   71522 logs.go:123] Gathering logs for kube-controller-manager [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3] ...
	I0717 01:59:48.842357   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:48.900716   71522 logs.go:123] Gathering logs for container status ...
	I0717 01:59:48.900746   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:47.397763   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:47.397791   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:49.954670   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:49.968840   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:49.968898   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:50.003598   71929 cri.go:89] found id: ""
	I0717 01:59:50.003635   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.003646   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:50.003654   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:50.003714   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:50.040494   71929 cri.go:89] found id: ""
	I0717 01:59:50.040546   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.040558   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:50.040564   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:50.040624   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:50.074921   71929 cri.go:89] found id: ""
	I0717 01:59:50.074950   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.074959   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:50.074965   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:50.075015   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:50.117002   71929 cri.go:89] found id: ""
	I0717 01:59:50.117030   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.117041   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:50.117049   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:50.117106   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:50.163026   71929 cri.go:89] found id: ""
	I0717 01:59:50.163052   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.163063   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:50.163071   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:50.163129   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:50.197709   71929 cri.go:89] found id: ""
	I0717 01:59:50.197738   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.197749   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:50.197757   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:50.197838   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:50.237776   71929 cri.go:89] found id: ""
	I0717 01:59:50.237808   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.237819   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:50.237827   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:50.237886   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:50.275147   71929 cri.go:89] found id: ""
	I0717 01:59:50.275179   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.275189   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:50.275201   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:50.275215   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:50.329025   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:50.329057   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:50.342745   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:50.342777   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:50.417792   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:50.417817   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:50.417829   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:50.495288   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:50.495322   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:48.306102   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:50.804255   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:52.433248   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:54.931595   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:51.447495   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:59:51.452186   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 200:
	ok
	I0717 01:59:51.453112   71522 api_server.go:141] control plane version: v1.30.2
	I0717 01:59:51.453137   71522 api_server.go:131] duration metric: took 4.032118004s to wait for apiserver health ...
	I0717 01:59:51.453146   71522 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:59:51.453170   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:51.453215   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:51.491272   71522 cri.go:89] found id: "3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:51.491297   71522 cri.go:89] found id: ""
	I0717 01:59:51.491305   71522 logs.go:276] 1 containers: [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82]
	I0717 01:59:51.491365   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.495747   71522 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:51.495795   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:51.538807   71522 cri.go:89] found id: "5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:51.538830   71522 cri.go:89] found id: ""
	I0717 01:59:51.538838   71522 logs.go:276] 1 containers: [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9]
	I0717 01:59:51.538891   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.543454   71522 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:51.543512   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:51.586258   71522 cri.go:89] found id: "92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:51.586292   71522 cri.go:89] found id: "4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:51.586296   71522 cri.go:89] found id: ""
	I0717 01:59:51.586306   71522 logs.go:276] 2 containers: [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013]
	I0717 01:59:51.586360   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.590446   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.594867   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:51.594936   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:51.636079   71522 cri.go:89] found id: "1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:51.636101   71522 cri.go:89] found id: ""
	I0717 01:59:51.636108   71522 logs.go:276] 1 containers: [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8]
	I0717 01:59:51.636159   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.640225   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:51.640283   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:51.676395   71522 cri.go:89] found id: "6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:51.676422   71522 cri.go:89] found id: ""
	I0717 01:59:51.676432   71522 logs.go:276] 1 containers: [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a]
	I0717 01:59:51.676496   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.680974   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:51.681043   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:51.720449   71522 cri.go:89] found id: "e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:51.720476   71522 cri.go:89] found id: ""
	I0717 01:59:51.720483   71522 logs.go:276] 1 containers: [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3]
	I0717 01:59:51.720527   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.724704   71522 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:51.724779   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:51.762892   71522 cri.go:89] found id: ""
	I0717 01:59:51.762923   71522 logs.go:276] 0 containers: []
	W0717 01:59:51.762932   71522 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:51.762939   71522 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:59:51.762986   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:59:51.803675   71522 cri.go:89] found id: "e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:51.803702   71522 cri.go:89] found id: "abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:51.803707   71522 cri.go:89] found id: ""
	I0717 01:59:51.803714   71522 logs.go:276] 2 containers: [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299]
	I0717 01:59:51.803807   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.808188   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.812046   71522 logs.go:123] Gathering logs for container status ...
	I0717 01:59:51.812065   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:51.855800   71522 logs.go:123] Gathering logs for kube-controller-manager [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3] ...
	I0717 01:59:51.855832   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:51.917804   71522 logs.go:123] Gathering logs for storage-provisioner [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77] ...
	I0717 01:59:51.917833   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:51.958797   71522 logs.go:123] Gathering logs for storage-provisioner [abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299] ...
	I0717 01:59:51.958827   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:51.997003   71522 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:51.997034   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:59:52.118345   71522 logs.go:123] Gathering logs for kube-apiserver [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82] ...
	I0717 01:59:52.118381   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:52.174308   71522 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:52.174344   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:52.578823   71522 logs.go:123] Gathering logs for coredns [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7] ...
	I0717 01:59:52.578857   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:52.619962   71522 logs.go:123] Gathering logs for coredns [4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013] ...
	I0717 01:59:52.619994   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:52.667564   71522 logs.go:123] Gathering logs for kube-proxy [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a] ...
	I0717 01:59:52.667593   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:52.714716   71522 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:52.714747   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:52.774123   71522 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:52.774171   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:52.788399   71522 logs.go:123] Gathering logs for etcd [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9] ...
	I0717 01:59:52.788432   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:52.839796   71522 logs.go:123] Gathering logs for kube-scheduler [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8] ...
	I0717 01:59:52.839828   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:55.388404   71522 system_pods.go:59] 9 kube-system pods found
	I0717 01:59:55.388441   71522 system_pods.go:61] "coredns-7db6d8ff4d-9w26c" [530f4d52-5fdc-47c4-8919-44430bf71e05] Running
	I0717 01:59:55.388448   71522 system_pods.go:61] "coredns-7db6d8ff4d-js7sn" [fe3951c5-d98d-4221-b71c-fc4f548b31d8] Running
	I0717 01:59:55.388453   71522 system_pods.go:61] "etcd-default-k8s-diff-port-738184" [a08737dd-a140-4c63-bf0f-9d3527d49de0] Running
	I0717 01:59:55.388458   71522 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-738184" [b24a4ca2-48f9-4603-b6e4-e3fb1ca58e40] Running
	I0717 01:59:55.388465   71522 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-738184" [dd62e27b-a3f1-4da6-b968-6be40743a8fd] Running
	I0717 01:59:55.388469   71522 system_pods.go:61] "kube-proxy-c4n94" [97eee4e8-4f36-412f-9064-57515ab6e932] Running
	I0717 01:59:55.388473   71522 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-738184" [eb87eec9-7533-4704-bd5a-7075d9d8c2f5] Running
	I0717 01:59:55.388484   71522 system_pods.go:61] "metrics-server-569cc877fc-gcjkt" [1859140e-a901-43c2-8c04-b4f8eb63e774] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:59:55.388491   71522 system_pods.go:61] "storage-provisioner" [b36904ec-ef3f-4aee-9276-fe1285e10876] Running
	I0717 01:59:55.388499   71522 system_pods.go:74] duration metric: took 3.93534618s to wait for pod list to return data ...
	I0717 01:59:55.388509   71522 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:59:55.390798   71522 default_sa.go:45] found service account: "default"
	I0717 01:59:55.390829   71522 default_sa.go:55] duration metric: took 2.313714ms for default service account to be created ...
	I0717 01:59:55.390840   71522 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:59:55.399028   71522 system_pods.go:86] 9 kube-system pods found
	I0717 01:59:55.399049   71522 system_pods.go:89] "coredns-7db6d8ff4d-9w26c" [530f4d52-5fdc-47c4-8919-44430bf71e05] Running
	I0717 01:59:55.399054   71522 system_pods.go:89] "coredns-7db6d8ff4d-js7sn" [fe3951c5-d98d-4221-b71c-fc4f548b31d8] Running
	I0717 01:59:55.399059   71522 system_pods.go:89] "etcd-default-k8s-diff-port-738184" [a08737dd-a140-4c63-bf0f-9d3527d49de0] Running
	I0717 01:59:55.399063   71522 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-738184" [b24a4ca2-48f9-4603-b6e4-e3fb1ca58e40] Running
	I0717 01:59:55.399068   71522 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-738184" [dd62e27b-a3f1-4da6-b968-6be40743a8fd] Running
	I0717 01:59:55.399072   71522 system_pods.go:89] "kube-proxy-c4n94" [97eee4e8-4f36-412f-9064-57515ab6e932] Running
	I0717 01:59:55.399076   71522 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-738184" [eb87eec9-7533-4704-bd5a-7075d9d8c2f5] Running
	I0717 01:59:55.399083   71522 system_pods.go:89] "metrics-server-569cc877fc-gcjkt" [1859140e-a901-43c2-8c04-b4f8eb63e774] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:59:55.399090   71522 system_pods.go:89] "storage-provisioner" [b36904ec-ef3f-4aee-9276-fe1285e10876] Running
	I0717 01:59:55.399099   71522 system_pods.go:126] duration metric: took 8.253468ms to wait for k8s-apps to be running ...
	I0717 01:59:55.399108   71522 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:59:55.399152   71522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:59:55.417081   71522 system_svc.go:56] duration metric: took 17.965716ms WaitForService to wait for kubelet
	I0717 01:59:55.417109   71522 kubeadm.go:582] duration metric: took 4m21.36584166s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:59:55.417130   71522 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:59:55.420078   71522 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:59:55.420099   71522 node_conditions.go:123] node cpu capacity is 2
	I0717 01:59:55.420109   71522 node_conditions.go:105] duration metric: took 2.974324ms to run NodePressure ...
	I0717 01:59:55.420119   71522 start.go:241] waiting for startup goroutines ...
	I0717 01:59:55.420126   71522 start.go:246] waiting for cluster config update ...
	I0717 01:59:55.420136   71522 start.go:255] writing updated cluster config ...
	I0717 01:59:55.420406   71522 ssh_runner.go:195] Run: rm -f paused
	I0717 01:59:55.470793   71522 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 01:59:55.472960   71522 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-738184" cluster and "default" namespace by default
	I0717 01:59:53.036151   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:53.049820   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:53.049879   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:53.087144   71929 cri.go:89] found id: ""
	I0717 01:59:53.087175   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.087189   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:53.087195   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:53.087253   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:53.123135   71929 cri.go:89] found id: ""
	I0717 01:59:53.123164   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.123175   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:53.123191   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:53.123254   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:53.157887   71929 cri.go:89] found id: ""
	I0717 01:59:53.157912   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.157922   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:53.157927   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:53.158004   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:53.201002   71929 cri.go:89] found id: ""
	I0717 01:59:53.201033   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.201045   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:53.201054   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:53.201115   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:53.236159   71929 cri.go:89] found id: ""
	I0717 01:59:53.236188   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.236198   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:53.236204   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:53.236258   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:53.277585   71929 cri.go:89] found id: ""
	I0717 01:59:53.277616   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.277627   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:53.277634   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:53.277694   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:53.322722   71929 cri.go:89] found id: ""
	I0717 01:59:53.322747   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.322758   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:53.322765   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:53.322824   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:53.364112   71929 cri.go:89] found id: ""
	I0717 01:59:53.364138   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.364149   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:53.364159   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:53.364172   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:53.418701   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:53.418739   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:53.435004   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:53.435030   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:53.511254   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:53.511274   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:53.511287   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:53.587967   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:53.588003   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:56.130773   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:56.144742   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:56.144811   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:56.180267   71929 cri.go:89] found id: ""
	I0717 01:59:56.180295   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.180306   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:56.180313   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:56.180373   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:56.217223   71929 cri.go:89] found id: ""
	I0717 01:59:56.217252   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.217263   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:56.217269   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:56.217334   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:56.251714   71929 cri.go:89] found id: ""
	I0717 01:59:56.251738   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.251745   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:56.251752   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:56.251805   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:56.292557   71929 cri.go:89] found id: ""
	I0717 01:59:56.292589   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.292597   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:56.292603   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:56.292653   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:56.332463   71929 cri.go:89] found id: ""
	I0717 01:59:56.332491   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.332501   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:56.332508   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:56.332562   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:56.372155   71929 cri.go:89] found id: ""
	I0717 01:59:56.372180   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.372189   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:56.372197   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:56.372255   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:56.415768   71929 cri.go:89] found id: ""
	I0717 01:59:56.415794   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.415806   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:56.415813   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:56.415871   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:56.456920   71929 cri.go:89] found id: ""
	I0717 01:59:56.456951   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.456959   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:56.456968   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:56.456978   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:56.508932   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:56.508965   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:56.522496   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:56.522531   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:56.596839   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:56.596857   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:56.596870   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:56.679237   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:56.679271   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:53.303565   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:55.803725   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:57.806129   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:56.933245   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:59.432536   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:59.220084   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:59.233108   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:59.233182   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:59.266796   71929 cri.go:89] found id: ""
	I0717 01:59:59.266827   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.266838   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:59.266845   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:59.266909   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:59.297992   71929 cri.go:89] found id: ""
	I0717 01:59:59.298017   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.298026   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:59.298032   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:59.298087   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:59.331953   71929 cri.go:89] found id: ""
	I0717 01:59:59.331982   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.331993   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:59.331999   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:59.332069   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:59.368912   71929 cri.go:89] found id: ""
	I0717 01:59:59.368939   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.368948   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:59.368954   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:59.369002   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:59.402886   71929 cri.go:89] found id: ""
	I0717 01:59:59.402911   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.402920   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:59.402926   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:59.402982   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:59.441227   71929 cri.go:89] found id: ""
	I0717 01:59:59.441249   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.441257   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:59.441263   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:59.441322   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:59.479154   71929 cri.go:89] found id: ""
	I0717 01:59:59.479191   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.479213   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:59.479222   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:59.479286   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:59.516259   71929 cri.go:89] found id: ""
	I0717 01:59:59.516299   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.516309   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:59.516319   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:59.516332   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:59.596352   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:59.596385   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:59.639712   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:59.639744   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:59.691399   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:59.691444   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:59.706618   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:59.706648   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:59.778875   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:00:02.279246   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:02.293212   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:02.293284   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:02.330759   71929 cri.go:89] found id: ""
	I0717 02:00:02.330786   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.330795   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 02:00:02.330800   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:02.330848   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:02.366257   71929 cri.go:89] found id: ""
	I0717 02:00:02.366287   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.366298   71929 logs.go:278] No container was found matching "etcd"
	I0717 02:00:02.366305   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:02.366368   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:00.303868   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:02.311063   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:01.432671   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:03.433059   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:02.404321   71929 cri.go:89] found id: ""
	I0717 02:00:02.404348   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.404358   71929 logs.go:278] No container was found matching "coredns"
	I0717 02:00:02.404364   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:02.404432   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:02.444297   71929 cri.go:89] found id: ""
	I0717 02:00:02.444326   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.444342   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 02:00:02.444349   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:02.444406   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:02.478433   71929 cri.go:89] found id: ""
	I0717 02:00:02.478466   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.478477   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 02:00:02.478483   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:02.478530   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:02.515519   71929 cri.go:89] found id: ""
	I0717 02:00:02.515551   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.515560   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 02:00:02.515566   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:02.515618   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:02.551006   71929 cri.go:89] found id: ""
	I0717 02:00:02.551030   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.551038   71929 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:02.551044   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 02:00:02.551110   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 02:00:02.588312   71929 cri.go:89] found id: ""
	I0717 02:00:02.588345   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.588356   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 02:00:02.588367   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:02.588381   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:02.641900   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:02.641932   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:02.656851   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:02.656896   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 02:00:02.728286   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:00:02.728315   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:02.728327   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:02.806807   71929 logs.go:123] Gathering logs for container status ...
	I0717 02:00:02.806847   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:05.355196   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:05.369148   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:05.369231   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:05.405012   71929 cri.go:89] found id: ""
	I0717 02:00:05.405045   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.405057   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 02:00:05.405068   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:05.405132   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:05.450524   71929 cri.go:89] found id: ""
	I0717 02:00:05.450564   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.450575   71929 logs.go:278] No container was found matching "etcd"
	I0717 02:00:05.450582   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:05.450637   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:05.487503   71929 cri.go:89] found id: ""
	I0717 02:00:05.487533   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.487544   71929 logs.go:278] No container was found matching "coredns"
	I0717 02:00:05.487553   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:05.487634   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:05.522607   71929 cri.go:89] found id: ""
	I0717 02:00:05.522635   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.522650   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 02:00:05.522656   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:05.522703   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:05.558091   71929 cri.go:89] found id: ""
	I0717 02:00:05.558120   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.558131   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 02:00:05.558138   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:05.558192   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:05.594540   71929 cri.go:89] found id: ""
	I0717 02:00:05.594587   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.594598   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 02:00:05.594605   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:05.594668   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:05.631783   71929 cri.go:89] found id: ""
	I0717 02:00:05.631807   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.631818   71929 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:05.631825   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 02:00:05.631886   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 02:00:05.667494   71929 cri.go:89] found id: ""
	I0717 02:00:05.667523   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.667532   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 02:00:05.667543   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:05.667559   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:05.681348   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:05.681373   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 02:00:05.747143   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:00:05.747165   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:05.747176   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:05.829639   71929 logs.go:123] Gathering logs for container status ...
	I0717 02:00:05.829674   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:05.881984   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:05.882013   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:04.803913   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:07.302068   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:05.434869   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:07.435174   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:09.931879   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:08.435873   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:08.449840   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:08.449901   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:08.489613   71929 cri.go:89] found id: ""
	I0717 02:00:08.489663   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.489675   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 02:00:08.489684   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:08.489751   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:08.526604   71929 cri.go:89] found id: ""
	I0717 02:00:08.526635   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.526645   71929 logs.go:278] No container was found matching "etcd"
	I0717 02:00:08.526660   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:08.526717   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:08.563202   71929 cri.go:89] found id: ""
	I0717 02:00:08.563227   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.563234   71929 logs.go:278] No container was found matching "coredns"
	I0717 02:00:08.563240   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:08.563299   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:08.598336   71929 cri.go:89] found id: ""
	I0717 02:00:08.598365   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.598376   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 02:00:08.598383   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:08.598441   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:08.632626   71929 cri.go:89] found id: ""
	I0717 02:00:08.632660   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.632671   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 02:00:08.632678   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:08.632739   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:08.667951   71929 cri.go:89] found id: ""
	I0717 02:00:08.667977   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.667993   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 02:00:08.668001   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:08.668059   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:08.702106   71929 cri.go:89] found id: ""
	I0717 02:00:08.702135   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.702146   71929 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:08.702153   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 02:00:08.702212   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 02:00:08.733469   71929 cri.go:89] found id: ""
	I0717 02:00:08.733491   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.733499   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 02:00:08.733508   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:08.733518   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:08.787930   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:08.787966   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:08.802761   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:08.802795   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 02:00:08.878115   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:00:08.878138   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:08.878149   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:08.962509   71929 logs.go:123] Gathering logs for container status ...
	I0717 02:00:08.962543   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:11.503151   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:11.518019   71929 kubeadm.go:597] duration metric: took 4m3.576613508s to restartPrimaryControlPlane
	W0717 02:00:11.518087   71929 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 02:00:11.518113   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 02:00:11.970514   71929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:00:11.986794   71929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 02:00:11.997382   71929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 02:00:12.006789   71929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 02:00:12.006816   71929 kubeadm.go:157] found existing configuration files:
	
	I0717 02:00:12.006867   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 02:00:12.015864   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 02:00:12.015921   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 02:00:12.025239   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 02:00:12.034315   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 02:00:12.034373   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 02:00:12.043533   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 02:00:12.052344   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 02:00:12.052393   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 02:00:12.061290   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 02:00:12.070311   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 02:00:12.070375   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 02:00:12.080404   71929 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 02:00:12.318084   71929 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 02:00:09.303502   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:11.803893   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:11.933539   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:14.433949   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:13.804007   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:16.303079   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:16.932416   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:18.932721   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:18.303306   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:20.306811   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:22.803374   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:21.433157   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:23.433283   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:24.805822   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:27.301985   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:25.931740   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:27.934346   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:29.302199   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:31.302607   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:30.433033   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:32.434743   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:34.933166   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:33.802140   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:35.803338   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:36.933672   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:39.432879   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:38.302050   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:40.803322   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:41.932491   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:44.436201   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:43.302028   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:45.801979   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:47.303644   71146 pod_ready.go:81] duration metric: took 4m0.007411484s for pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace to be "Ready" ...
	E0717 02:00:47.303668   71146 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 02:00:47.303678   71146 pod_ready.go:38] duration metric: took 4m7.053721739s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 02:00:47.303694   71146 api_server.go:52] waiting for apiserver process to appear ...
	I0717 02:00:47.303725   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:47.303791   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:47.365247   71146 cri.go:89] found id: "ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:47.365272   71146 cri.go:89] found id: ""
	I0717 02:00:47.365279   71146 logs.go:276] 1 containers: [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509]
	I0717 02:00:47.365339   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.370201   71146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:47.370268   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:47.416627   71146 cri.go:89] found id: "b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:47.416652   71146 cri.go:89] found id: ""
	I0717 02:00:47.416663   71146 logs.go:276] 1 containers: [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787]
	I0717 02:00:47.416731   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.421295   71146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:47.421454   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:47.463532   71146 cri.go:89] found id: "110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:47.463556   71146 cri.go:89] found id: ""
	I0717 02:00:47.463564   71146 logs.go:276] 1 containers: [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783]
	I0717 02:00:47.463626   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.468291   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:47.468414   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:47.504328   71146 cri.go:89] found id: "211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:47.504354   71146 cri.go:89] found id: ""
	I0717 02:00:47.504362   71146 logs.go:276] 1 containers: [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745]
	I0717 02:00:47.504445   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.508821   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:47.508880   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:47.550970   71146 cri.go:89] found id: "0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:47.550996   71146 cri.go:89] found id: ""
	I0717 02:00:47.551006   71146 logs.go:276] 1 containers: [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de]
	I0717 02:00:47.551069   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.555974   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:47.556045   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:47.609884   71146 cri.go:89] found id: "5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:47.609903   71146 cri.go:89] found id: ""
	I0717 02:00:47.609910   71146 logs.go:276] 1 containers: [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060]
	I0717 02:00:47.609968   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.615544   71146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:47.615603   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:47.653071   71146 cri.go:89] found id: ""
	I0717 02:00:47.653099   71146 logs.go:276] 0 containers: []
	W0717 02:00:47.653110   71146 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:47.653117   71146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 02:00:47.653163   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 02:00:47.690462   71146 cri.go:89] found id: "7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:47.690485   71146 cri.go:89] found id: "51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:47.690490   71146 cri.go:89] found id: ""
	I0717 02:00:47.690498   71146 logs.go:276] 2 containers: [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20]
	I0717 02:00:47.690545   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.695196   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.699099   71146 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:47.699117   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 02:00:47.816750   71146 logs.go:123] Gathering logs for kube-apiserver [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509] ...
	I0717 02:00:47.816782   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:46.932764   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:49.432402   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:47.869306   71146 logs.go:123] Gathering logs for coredns [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783] ...
	I0717 02:00:47.869341   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:47.906717   71146 logs.go:123] Gathering logs for kube-proxy [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de] ...
	I0717 02:00:47.906755   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:47.944125   71146 logs.go:123] Gathering logs for storage-provisioner [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465] ...
	I0717 02:00:47.944152   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:47.978632   71146 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:47.978664   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:48.482628   71146 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:48.482660   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:48.538252   71146 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:48.538300   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:48.553011   71146 logs.go:123] Gathering logs for kube-controller-manager [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060] ...
	I0717 02:00:48.553038   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:48.607632   71146 logs.go:123] Gathering logs for storage-provisioner [51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20] ...
	I0717 02:00:48.607666   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:48.646122   71146 logs.go:123] Gathering logs for container status ...
	I0717 02:00:48.646151   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:48.689948   71146 logs.go:123] Gathering logs for etcd [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787] ...
	I0717 02:00:48.689980   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:48.738285   71146 logs.go:123] Gathering logs for kube-scheduler [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745] ...
	I0717 02:00:48.738334   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:51.290996   71146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:51.308850   71146 api_server.go:72] duration metric: took 4m18.27461618s to wait for apiserver process to appear ...
	I0717 02:00:51.308873   71146 api_server.go:88] waiting for apiserver healthz status ...
	I0717 02:00:51.308907   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:51.308967   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:51.350827   71146 cri.go:89] found id: "ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:51.350857   71146 cri.go:89] found id: ""
	I0717 02:00:51.350866   71146 logs.go:276] 1 containers: [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509]
	I0717 02:00:51.350930   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.355308   71146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:51.355369   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:51.393804   71146 cri.go:89] found id: "b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:51.393831   71146 cri.go:89] found id: ""
	I0717 02:00:51.393840   71146 logs.go:276] 1 containers: [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787]
	I0717 02:00:51.393897   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.398144   71146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:51.398201   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:51.437974   71146 cri.go:89] found id: "110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:51.437991   71146 cri.go:89] found id: ""
	I0717 02:00:51.437998   71146 logs.go:276] 1 containers: [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783]
	I0717 02:00:51.438044   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.442318   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:51.442382   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:51.478462   71146 cri.go:89] found id: "211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:51.478481   71146 cri.go:89] found id: ""
	I0717 02:00:51.478489   71146 logs.go:276] 1 containers: [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745]
	I0717 02:00:51.478534   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.482624   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:51.482672   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:51.526089   71146 cri.go:89] found id: "0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:51.526114   71146 cri.go:89] found id: ""
	I0717 02:00:51.526123   71146 logs.go:276] 1 containers: [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de]
	I0717 02:00:51.526170   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.530855   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:51.530923   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:51.568875   71146 cri.go:89] found id: "5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:51.568899   71146 cri.go:89] found id: ""
	I0717 02:00:51.568908   71146 logs.go:276] 1 containers: [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060]
	I0717 02:00:51.568972   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.573300   71146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:51.573369   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:51.615775   71146 cri.go:89] found id: ""
	I0717 02:00:51.615800   71146 logs.go:276] 0 containers: []
	W0717 02:00:51.615809   71146 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:51.615815   71146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 02:00:51.615876   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 02:00:51.658100   71146 cri.go:89] found id: "7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:51.658124   71146 cri.go:89] found id: "51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:51.658130   71146 cri.go:89] found id: ""
	I0717 02:00:51.658138   71146 logs.go:276] 2 containers: [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20]
	I0717 02:00:51.658183   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.663030   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.667348   71146 logs.go:123] Gathering logs for kube-apiserver [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509] ...
	I0717 02:00:51.667372   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:51.715502   71146 logs.go:123] Gathering logs for etcd [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787] ...
	I0717 02:00:51.715534   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:51.763431   71146 logs.go:123] Gathering logs for kube-proxy [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de] ...
	I0717 02:00:51.763457   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:51.805523   71146 logs.go:123] Gathering logs for kube-controller-manager [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060] ...
	I0717 02:00:51.805553   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:51.859660   71146 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:51.859692   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 02:00:51.963831   71146 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:51.963858   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:51.978152   71146 logs.go:123] Gathering logs for coredns [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783] ...
	I0717 02:00:51.978179   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:52.023897   71146 logs.go:123] Gathering logs for kube-scheduler [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745] ...
	I0717 02:00:52.023926   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:52.062193   71146 logs.go:123] Gathering logs for storage-provisioner [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465] ...
	I0717 02:00:52.062218   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:52.098487   71146 logs.go:123] Gathering logs for storage-provisioner [51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20] ...
	I0717 02:00:52.098518   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:52.135733   71146 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:52.135758   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:52.562245   71146 logs.go:123] Gathering logs for container status ...
	I0717 02:00:52.562279   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:52.624258   71146 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:52.624288   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:51.434060   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:53.933730   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:55.176270   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 02:00:55.180760   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 200:
	ok
	I0717 02:00:55.181928   71146 api_server.go:141] control plane version: v1.30.2
	I0717 02:00:55.181947   71146 api_server.go:131] duration metric: took 3.873068874s to wait for apiserver health ...
	I0717 02:00:55.181955   71146 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 02:00:55.181975   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:55.182017   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:55.218028   71146 cri.go:89] found id: "ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:55.218059   71146 cri.go:89] found id: ""
	I0717 02:00:55.218068   71146 logs.go:276] 1 containers: [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509]
	I0717 02:00:55.218125   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.222841   71146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:55.222911   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:55.265613   71146 cri.go:89] found id: "b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:55.265638   71146 cri.go:89] found id: ""
	I0717 02:00:55.265647   71146 logs.go:276] 1 containers: [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787]
	I0717 02:00:55.265699   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.269866   71146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:55.269923   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:55.306363   71146 cri.go:89] found id: "110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:55.306390   71146 cri.go:89] found id: ""
	I0717 02:00:55.306400   71146 logs.go:276] 1 containers: [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783]
	I0717 02:00:55.306457   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.310843   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:55.310901   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:55.354417   71146 cri.go:89] found id: "211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:55.354439   71146 cri.go:89] found id: ""
	I0717 02:00:55.354449   71146 logs.go:276] 1 containers: [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745]
	I0717 02:00:55.354503   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.358988   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:55.359038   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:55.396457   71146 cri.go:89] found id: "0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:55.396480   71146 cri.go:89] found id: ""
	I0717 02:00:55.396488   71146 logs.go:276] 1 containers: [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de]
	I0717 02:00:55.396532   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.401185   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:55.401244   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:55.438249   71146 cri.go:89] found id: "5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:55.438276   71146 cri.go:89] found id: ""
	I0717 02:00:55.438286   71146 logs.go:276] 1 containers: [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060]
	I0717 02:00:55.438344   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.442967   71146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:55.443048   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:55.484173   71146 cri.go:89] found id: ""
	I0717 02:00:55.484197   71146 logs.go:276] 0 containers: []
	W0717 02:00:55.484205   71146 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:55.484210   71146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 02:00:55.484288   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 02:00:55.525757   71146 cri.go:89] found id: "7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:55.525780   71146 cri.go:89] found id: "51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:55.525784   71146 cri.go:89] found id: ""
	I0717 02:00:55.525790   71146 logs.go:276] 2 containers: [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20]
	I0717 02:00:55.525842   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.530253   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.534253   71146 logs.go:123] Gathering logs for etcd [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787] ...
	I0717 02:00:55.534275   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:55.578993   71146 logs.go:123] Gathering logs for kube-proxy [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de] ...
	I0717 02:00:55.579018   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:55.622746   71146 logs.go:123] Gathering logs for storage-provisioner [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465] ...
	I0717 02:00:55.622771   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:55.660900   71146 logs.go:123] Gathering logs for storage-provisioner [51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20] ...
	I0717 02:00:55.660931   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:55.709803   71146 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:55.709833   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:56.092339   71146 logs.go:123] Gathering logs for kube-scheduler [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745] ...
	I0717 02:00:56.092390   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:56.130951   71146 logs.go:123] Gathering logs for kube-controller-manager [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060] ...
	I0717 02:00:56.130976   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:56.186113   71146 logs.go:123] Gathering logs for container status ...
	I0717 02:00:56.186152   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:56.229794   71146 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:56.229839   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:56.285798   71146 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:56.285845   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:56.300391   71146 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:56.300421   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 02:00:56.425621   71146 logs.go:123] Gathering logs for kube-apiserver [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509] ...
	I0717 02:00:56.425653   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:56.478853   71146 logs.go:123] Gathering logs for coredns [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783] ...
	I0717 02:00:56.478882   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:59.026000   71146 system_pods.go:59] 8 kube-system pods found
	I0717 02:00:59.026028   71146 system_pods.go:61] "coredns-7db6d8ff4d-wcw97" [0dd50538-f54d-43f1-bd8a-b9d3131c13f7] Running
	I0717 02:00:59.026033   71146 system_pods.go:61] "etcd-embed-certs-940222" [80a0a87b-4b27-4940-b86b-6fda4a9c5168] Running
	I0717 02:00:59.026036   71146 system_pods.go:61] "kube-apiserver-embed-certs-940222" [566417b4-efef-46b2-8826-e15b8559e35f] Running
	I0717 02:00:59.026039   71146 system_pods.go:61] "kube-controller-manager-embed-certs-940222" [0ec7f574-5ec3-401c-85a9-b9a9f6b7b979] Running
	I0717 02:00:59.026042   71146 system_pods.go:61] "kube-proxy-l58xk" [feae4e89-4900-4399-bd06-7d179280667d] Running
	I0717 02:00:59.026045   71146 system_pods.go:61] "kube-scheduler-embed-certs-940222" [d0e73061-6d3b-41fc-b2ed-e8c45e204d6a] Running
	I0717 02:00:59.026051   71146 system_pods.go:61] "metrics-server-569cc877fc-rhp7b" [07ffb1fa-240e-4c40-9ce4-93a1b51e179b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 02:00:59.026054   71146 system_pods.go:61] "storage-provisioner" [35aab5a5-6e1b-4572-aabe-a73fb1632252] Running
	I0717 02:00:59.026062   71146 system_pods.go:74] duration metric: took 3.844102201s to wait for pod list to return data ...
	I0717 02:00:59.026069   71146 default_sa.go:34] waiting for default service account to be created ...
	I0717 02:00:59.028810   71146 default_sa.go:45] found service account: "default"
	I0717 02:00:59.028831   71146 default_sa.go:55] duration metric: took 2.756364ms for default service account to be created ...
	I0717 02:00:59.028838   71146 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 02:00:59.036427   71146 system_pods.go:86] 8 kube-system pods found
	I0717 02:00:59.036457   71146 system_pods.go:89] "coredns-7db6d8ff4d-wcw97" [0dd50538-f54d-43f1-bd8a-b9d3131c13f7] Running
	I0717 02:00:59.036466   71146 system_pods.go:89] "etcd-embed-certs-940222" [80a0a87b-4b27-4940-b86b-6fda4a9c5168] Running
	I0717 02:00:59.036474   71146 system_pods.go:89] "kube-apiserver-embed-certs-940222" [566417b4-efef-46b2-8826-e15b8559e35f] Running
	I0717 02:00:59.036482   71146 system_pods.go:89] "kube-controller-manager-embed-certs-940222" [0ec7f574-5ec3-401c-85a9-b9a9f6b7b979] Running
	I0717 02:00:59.036489   71146 system_pods.go:89] "kube-proxy-l58xk" [feae4e89-4900-4399-bd06-7d179280667d] Running
	I0717 02:00:59.036499   71146 system_pods.go:89] "kube-scheduler-embed-certs-940222" [d0e73061-6d3b-41fc-b2ed-e8c45e204d6a] Running
	I0717 02:00:59.036509   71146 system_pods.go:89] "metrics-server-569cc877fc-rhp7b" [07ffb1fa-240e-4c40-9ce4-93a1b51e179b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 02:00:59.036519   71146 system_pods.go:89] "storage-provisioner" [35aab5a5-6e1b-4572-aabe-a73fb1632252] Running
	I0717 02:00:59.036532   71146 system_pods.go:126] duration metric: took 7.688074ms to wait for k8s-apps to be running ...
	I0717 02:00:59.036542   71146 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 02:00:59.036594   71146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:00:59.052023   71146 system_svc.go:56] duration metric: took 15.474441ms WaitForService to wait for kubelet
	I0717 02:00:59.052049   71146 kubeadm.go:582] duration metric: took 4m26.017816269s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 02:00:59.052073   71146 node_conditions.go:102] verifying NodePressure condition ...
	I0717 02:00:59.054763   71146 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 02:00:59.054784   71146 node_conditions.go:123] node cpu capacity is 2
	I0717 02:00:59.054795   71146 node_conditions.go:105] duration metric: took 2.714349ms to run NodePressure ...
	I0717 02:00:59.054805   71146 start.go:241] waiting for startup goroutines ...
	I0717 02:00:59.054811   71146 start.go:246] waiting for cluster config update ...
	I0717 02:00:59.054824   71146 start.go:255] writing updated cluster config ...
	I0717 02:00:59.055069   71146 ssh_runner.go:195] Run: rm -f paused
	I0717 02:00:59.101243   71146 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 02:00:59.103341   71146 out.go:177] * Done! kubectl is now configured to use "embed-certs-940222" cluster and "default" namespace by default
	I0717 02:00:56.432853   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:58.433589   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:00.932978   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:02.933289   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:05.433003   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:07.433470   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:09.433795   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:11.933112   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:14.433274   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:16.932102   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:18.932904   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:20.933023   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:23.433153   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:24.926132   71603 pod_ready.go:81] duration metric: took 4m0.000155151s for pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace to be "Ready" ...
	E0717 02:01:24.926165   71603 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 02:01:24.926185   71603 pod_ready.go:38] duration metric: took 4m39.916322674s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 02:01:24.926214   71603 kubeadm.go:597] duration metric: took 5m27.432375382s to restartPrimaryControlPlane
	W0717 02:01:24.926303   71603 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 02:01:24.926339   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 02:01:51.790820   71603 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.86445583s)
	I0717 02:01:51.790901   71603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:01:51.812968   71603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 02:01:51.835689   71603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 02:01:51.848832   71603 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 02:01:51.848859   71603 kubeadm.go:157] found existing configuration files:
	
	I0717 02:01:51.848911   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 02:01:51.876554   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 02:01:51.876620   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 02:01:51.891580   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 02:01:51.901279   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 02:01:51.901328   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 02:01:51.910994   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 02:01:51.920959   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 02:01:51.921020   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 02:01:51.931039   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 02:01:51.940496   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 02:01:51.940549   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 02:01:51.950455   71603 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 02:01:51.999712   71603 kubeadm.go:310] W0717 02:01:51.966911    3034 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 02:01:52.000573   71603 kubeadm.go:310] W0717 02:01:51.967749    3034 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 02:01:52.132406   71603 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 02:02:01.065590   71603 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0717 02:02:01.065670   71603 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 02:02:01.065761   71603 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 02:02:01.065909   71603 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 02:02:01.066049   71603 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0717 02:02:01.066124   71603 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 02:02:01.067867   71603 out.go:204]   - Generating certificates and keys ...
	I0717 02:02:01.067966   71603 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 02:02:01.068043   71603 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 02:02:01.068139   71603 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 02:02:01.068210   71603 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 02:02:01.068310   71603 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 02:02:01.068391   71603 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 02:02:01.068471   71603 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 02:02:01.068523   71603 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 02:02:01.068585   71603 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 02:02:01.068650   71603 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 02:02:01.068683   71603 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 02:02:01.068752   71603 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 02:02:01.068822   71603 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 02:02:01.068906   71603 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 02:02:01.068970   71603 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 02:02:01.069057   71603 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 02:02:01.069157   71603 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 02:02:01.069271   71603 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 02:02:01.069369   71603 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 02:02:01.070772   71603 out.go:204]   - Booting up control plane ...
	I0717 02:02:01.070883   71603 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 02:02:01.070981   71603 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 02:02:01.071088   71603 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 02:02:01.071206   71603 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 02:02:01.071311   71603 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 02:02:01.071365   71603 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 02:02:01.071497   71603 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 02:02:01.071557   71603 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 02:02:01.071608   71603 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.044041ms
	I0717 02:02:01.071663   71603 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 02:02:01.071725   71603 kubeadm.go:310] [api-check] The API server is healthy after 5.501034024s
	I0717 02:02:01.071823   71603 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 02:02:01.071926   71603 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 02:02:01.071975   71603 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 02:02:01.072168   71603 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-391501 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 02:02:01.072238   71603 kubeadm.go:310] [bootstrap-token] Using token: jhnlja.0tmcz1ce1lkie6op
	I0717 02:02:01.073965   71603 out.go:204]   - Configuring RBAC rules ...
	I0717 02:02:01.074091   71603 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 02:02:01.074223   71603 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 02:02:01.074390   71603 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 02:02:01.074597   71603 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 02:02:01.074766   71603 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 02:02:01.074887   71603 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 02:02:01.075058   71603 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 02:02:01.075126   71603 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 02:02:01.075195   71603 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 02:02:01.075204   71603 kubeadm.go:310] 
	I0717 02:02:01.075255   71603 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 02:02:01.075262   71603 kubeadm.go:310] 
	I0717 02:02:01.075372   71603 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 02:02:01.075386   71603 kubeadm.go:310] 
	I0717 02:02:01.075419   71603 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 02:02:01.075498   71603 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 02:02:01.075582   71603 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 02:02:01.075604   71603 kubeadm.go:310] 
	I0717 02:02:01.075687   71603 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 02:02:01.075697   71603 kubeadm.go:310] 
	I0717 02:02:01.075759   71603 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 02:02:01.075771   71603 kubeadm.go:310] 
	I0717 02:02:01.075834   71603 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 02:02:01.075936   71603 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 02:02:01.076034   71603 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 02:02:01.076043   71603 kubeadm.go:310] 
	I0717 02:02:01.076142   71603 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 02:02:01.076248   71603 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 02:02:01.076256   71603 kubeadm.go:310] 
	I0717 02:02:01.076379   71603 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jhnlja.0tmcz1ce1lkie6op \
	I0717 02:02:01.076541   71603 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c106fa53fe39228066a4d6d0a3d1523262a277fcc4b4de2aef480ed92843f134 \
	I0717 02:02:01.076582   71603 kubeadm.go:310] 	--control-plane 
	I0717 02:02:01.076600   71603 kubeadm.go:310] 
	I0717 02:02:01.076708   71603 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 02:02:01.076719   71603 kubeadm.go:310] 
	I0717 02:02:01.076819   71603 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jhnlja.0tmcz1ce1lkie6op \
	I0717 02:02:01.076955   71603 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c106fa53fe39228066a4d6d0a3d1523262a277fcc4b4de2aef480ed92843f134 
	I0717 02:02:01.076972   71603 cni.go:84] Creating CNI manager for ""
	I0717 02:02:01.076981   71603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 02:02:01.078801   71603 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 02:02:01.080151   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 02:02:01.093210   71603 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
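	The 496 bytes copied to /etc/cni/net.d/1-k8s.conflist are not echoed in the log. A hypothetical way to inspect the generated bridge CNI config on the guest (not something this run executed) would be:

		sudo cat /etc/cni/net.d/1-k8s.conflist
		# Expected shape (assumption, not captured output): a CNI plugin chain with a
		# "bridge" plugin using host-local IPAM, followed by a "portmap" plugin.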
	I0717 02:02:01.116656   71603 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 02:02:01.116712   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:01.116756   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-391501 minikube.k8s.io/updated_at=2024_07_17T02_02_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185 minikube.k8s.io/name=no-preload-391501 minikube.k8s.io/primary=true
	I0717 02:02:01.314407   71603 ops.go:34] apiserver oom_adj: -16
	I0717 02:02:01.314467   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:01.814693   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:02.315439   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:02.814676   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:03.314734   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:03.814702   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:04.315450   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:04.815112   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:05.315144   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:05.814712   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:05.921356   71603 kubeadm.go:1113] duration metric: took 4.80469441s to wait for elevateKubeSystemPrivileges
	I0717 02:02:05.921398   71603 kubeadm.go:394] duration metric: took 6m8.48278775s to StartCluster
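	The loop above polls for the default service account with the embedded kubectl until the control plane serves it, then records the elevateKubeSystemPrivileges and StartCluster durations. A hypothetical manual re-check of what this phase configured (the minikube-rbac binding and node labels applied by the commands logged earlier) would reuse the same binary and kubeconfig:

		sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default
		sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac
		sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node no-preload-391501 --show-labels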
	I0717 02:02:05.921420   71603 settings.go:142] acquiring lock: {Name:mk284e98550e98148329d8d8958a44beebf5635c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 02:02:05.921508   71603 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 02:02:05.923844   71603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 02:02:05.924156   71603 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 02:02:05.924254   71603 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 02:02:05.924328   71603 addons.go:69] Setting storage-provisioner=true in profile "no-preload-391501"
	I0717 02:02:05.924357   71603 addons.go:234] Setting addon storage-provisioner=true in "no-preload-391501"
	I0717 02:02:05.924355   71603 addons.go:69] Setting default-storageclass=true in profile "no-preload-391501"
	I0717 02:02:05.924364   71603 addons.go:69] Setting metrics-server=true in profile "no-preload-391501"
	I0717 02:02:05.924391   71603 config.go:182] Loaded profile config "no-preload-391501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 02:02:05.924398   71603 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-391501"
	I0717 02:02:05.924404   71603 addons.go:234] Setting addon metrics-server=true in "no-preload-391501"
	W0717 02:02:05.924414   71603 addons.go:243] addon metrics-server should already be in state true
	W0717 02:02:05.924368   71603 addons.go:243] addon storage-provisioner should already be in state true
	I0717 02:02:05.924447   71603 host.go:66] Checking if "no-preload-391501" exists ...
	I0717 02:02:05.924460   71603 host.go:66] Checking if "no-preload-391501" exists ...
	I0717 02:02:05.924801   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.924827   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.924834   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.924850   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.924874   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.924912   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.926034   71603 out.go:177] * Verifying Kubernetes components...
	I0717 02:02:05.927316   71603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 02:02:05.941502   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43181
	I0717 02:02:05.941716   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37257
	I0717 02:02:05.941969   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.942299   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.942492   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.942509   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.942873   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.942902   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.942933   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.943175   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 02:02:05.943250   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.943555   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43291
	I0717 02:02:05.943829   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.943862   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.943996   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.944648   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.944672   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.945037   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.945577   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.945613   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.947058   71603 addons.go:234] Setting addon default-storageclass=true in "no-preload-391501"
	W0717 02:02:05.947076   71603 addons.go:243] addon default-storageclass should already be in state true
	I0717 02:02:05.947103   71603 host.go:66] Checking if "no-preload-391501" exists ...
	I0717 02:02:05.947419   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.947447   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.960183   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44589
	I0717 02:02:05.960662   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.961220   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.961249   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.961532   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.961777   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 02:02:05.962531   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40785
	I0717 02:02:05.963063   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.964115   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 02:02:05.964120   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.964146   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.965195   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.965777   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42381
	I0717 02:02:05.965802   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.965845   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.966114   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.966615   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.966635   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.966706   71603 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 02:02:05.967037   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.967228   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 02:02:05.968069   71603 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 02:02:05.968101   71603 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 02:02:05.968121   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 02:02:05.969421   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 02:02:05.971055   71603 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 02:02:05.972019   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.972494   71603 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 02:02:05.972515   71603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 02:02:05.972533   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 02:02:05.972622   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 02:02:05.972646   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.973122   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 02:02:05.973289   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 02:02:05.973415   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 02:02:05.973638   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 02:02:05.975702   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.976091   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 02:02:05.976110   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.976376   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 02:02:05.976553   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 02:02:05.976717   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 02:02:05.976866   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 02:02:05.983061   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44967
	I0717 02:02:05.983397   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.983851   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.983867   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.984150   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.984319   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 02:02:05.985757   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 02:02:05.985973   71603 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 02:02:05.985985   71603 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 02:02:05.986000   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 02:02:05.989238   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.989627   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 02:02:05.989647   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.989890   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 02:02:05.990056   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 02:02:05.990212   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 02:02:05.990412   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 02:02:06.238449   71603 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 02:02:06.272217   71603 node_ready.go:35] waiting up to 6m0s for node "no-preload-391501" to be "Ready" ...
	I0717 02:02:06.281012   71603 node_ready.go:49] node "no-preload-391501" has status "Ready":"True"
	I0717 02:02:06.281031   71603 node_ready.go:38] duration metric: took 8.787329ms for node "no-preload-391501" to be "Ready" ...
	I0717 02:02:06.281040   71603 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 02:02:06.297250   71603 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:06.386971   71603 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 02:02:06.386995   71603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 02:02:06.439822   71603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 02:02:06.460362   71603 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 02:02:06.460391   71603 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 02:02:06.468640   71603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 02:02:06.551454   71603 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 02:02:06.551482   71603 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 02:02:06.727518   71603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 02:02:07.338701   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.338778   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.338874   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.338900   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.339119   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.339217   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.339230   71603 main.go:141] libmachine: (no-preload-391501) DBG | Closing plugin on server side
	I0717 02:02:07.339273   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.339291   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.339301   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.339314   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.339240   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.339386   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.339575   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.339592   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.339648   71603 main.go:141] libmachine: (no-preload-391501) DBG | Closing plugin on server side
	I0717 02:02:07.339711   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.339736   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.357948   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.357966   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.358197   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.358212   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.694612   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.694690   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.695028   71603 main.go:141] libmachine: (no-preload-391501) DBG | Closing plugin on server side
	I0717 02:02:07.695109   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.695122   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.695148   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.695160   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.695404   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.695421   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.695432   71603 addons.go:475] Verifying addon metrics-server=true in "no-preload-391501"
	I0717 02:02:07.698298   71603 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
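	metrics-server is enabled here with the test's placeholder image (fake.domain/registry.k8s.io/echoserver:1.4, logged above), so its pod is expected to stay unready. A hedged way to confirm the image and pod state, assuming the kubeconfig context matches the profile name and the addon keeps its usual deployment name and label:

		kubectl --context no-preload-391501 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
		kubectl --context no-preload-391501 -n kube-system get pods -l k8s-app=metrics-server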
	I0717 02:02:08.622411   71929 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 02:02:08.622531   71929 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 02:02:08.624111   71929 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 02:02:08.624168   71929 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 02:02:08.624265   71929 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 02:02:08.624391   71929 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 02:02:08.624526   71929 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 02:02:08.624604   71929 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 02:02:08.626394   71929 out.go:204]   - Generating certificates and keys ...
	I0717 02:02:08.626478   71929 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 02:02:08.626574   71929 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 02:02:08.626657   71929 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 02:02:08.626735   71929 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 02:02:08.626830   71929 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 02:02:08.626909   71929 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 02:02:08.627001   71929 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 02:02:08.627095   71929 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 02:02:08.627203   71929 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 02:02:08.627325   71929 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 02:02:08.627392   71929 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 02:02:08.627469   71929 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 02:02:08.627573   71929 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 02:02:08.627663   71929 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 02:02:08.627753   71929 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 02:02:08.627836   71929 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 02:02:08.627997   71929 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 02:02:08.628107   71929 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 02:02:08.628179   71929 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 02:02:08.628272   71929 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 02:02:08.630262   71929 out.go:204]   - Booting up control plane ...
	I0717 02:02:08.630372   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 02:02:08.630477   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 02:02:08.630594   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 02:02:08.630729   71929 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 02:02:08.630960   71929 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 02:02:08.631020   71929 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 02:02:08.631099   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.631293   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.631394   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.631648   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.631748   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.631925   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.632050   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.632253   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.632327   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.632528   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.632546   71929 kubeadm.go:310] 
	I0717 02:02:08.632611   71929 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 02:02:08.632671   71929 kubeadm.go:310] 		timed out waiting for the condition
	I0717 02:02:08.632689   71929 kubeadm.go:310] 
	I0717 02:02:08.632729   71929 kubeadm.go:310] 	This error is likely caused by:
	I0717 02:02:08.632772   71929 kubeadm.go:310] 		- The kubelet is not running
	I0717 02:02:08.632902   71929 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 02:02:08.632914   71929 kubeadm.go:310] 
	I0717 02:02:08.633001   71929 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 02:02:08.633030   71929 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 02:02:08.633075   71929 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 02:02:08.633092   71929 kubeadm.go:310] 
	I0717 02:02:08.633204   71929 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 02:02:08.633281   71929 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 02:02:08.633306   71929 kubeadm.go:310] 
	I0717 02:02:08.633450   71929 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 02:02:08.633535   71929 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 02:02:08.633597   71929 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 02:02:08.633668   71929 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 02:02:08.633697   71929 kubeadm.go:310] 
	W0717 02:02:08.633780   71929 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
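	Before the retry below, the kubelet failure could be investigated on the guest with the commands the kubeadm output itself suggests; as a hypothetical manual session (not run by the harness):

		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet --no-pager | tail -n 50
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause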
	
	I0717 02:02:08.633821   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 02:02:09.101394   71929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:02:09.119918   71929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 02:02:09.130974   71929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 02:02:09.131002   71929 kubeadm.go:157] found existing configuration files:
	
	I0717 02:02:09.131046   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 02:02:09.142720   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 02:02:09.142790   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 02:02:09.154990   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 02:02:09.166317   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 02:02:09.166379   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 02:02:09.176756   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 02:02:09.186639   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 02:02:09.186697   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 02:02:09.196778   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 02:02:09.206420   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 02:02:09.206469   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 02:02:09.216325   71929 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 02:02:09.293311   71929 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 02:02:09.293457   71929 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 02:02:09.442386   71929 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 02:02:09.442594   71929 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 02:02:09.442736   71929 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 02:02:09.618387   71929 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 02:02:07.699645   71603 addons.go:510] duration metric: took 1.775390854s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 02:02:08.305410   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace has status "Ready":"False"
	I0717 02:02:09.620394   71929 out.go:204]   - Generating certificates and keys ...
	I0717 02:02:09.620496   71929 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 02:02:09.620593   71929 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 02:02:09.620691   71929 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 02:02:09.620791   71929 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 02:02:09.620909   71929 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 02:02:09.621004   71929 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 02:02:09.621117   71929 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 02:02:09.621364   71929 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 02:02:09.621778   71929 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 02:02:09.622072   71929 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 02:02:09.622135   71929 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 02:02:09.622225   71929 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 02:02:09.990964   71929 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 02:02:10.434990   71929 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 02:02:10.579785   71929 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 02:02:10.723319   71929 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 02:02:10.746923   71929 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 02:02:10.748370   71929 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 02:02:10.748460   71929 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 02:02:10.888855   71929 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 02:02:10.890727   71929 out.go:204]   - Booting up control plane ...
	I0717 02:02:10.890860   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 02:02:10.893530   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 02:02:10.894934   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 02:02:10.896825   71929 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 02:02:10.899127   71929 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 02:02:10.806868   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace has status "Ready":"False"
	I0717 02:02:12.804727   71603 pod_ready.go:92] pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:12.804754   71603 pod_ready.go:81] duration metric: took 6.507471417s for pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:12.804763   71603 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-tn5jv" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:12.812383   71603 pod_ready.go:92] pod "coredns-5cfdc65f69-tn5jv" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:12.812408   71603 pod_ready.go:81] duration metric: took 7.638012ms for pod "coredns-5cfdc65f69-tn5jv" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:12.812420   71603 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.320241   71603 pod_ready.go:92] pod "etcd-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:13.320263   71603 pod_ready.go:81] duration metric: took 507.836128ms for pod "etcd-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.320285   71603 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.326308   71603 pod_ready.go:92] pod "kube-apiserver-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:13.326332   71603 pod_ready.go:81] duration metric: took 6.041207ms for pod "kube-apiserver-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.326341   71603 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.331310   71603 pod_ready.go:92] pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:13.331338   71603 pod_ready.go:81] duration metric: took 4.988207ms for pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.331360   71603 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gl7th" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.602634   71603 pod_ready.go:92] pod "kube-proxy-gl7th" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:13.602677   71603 pod_ready.go:81] duration metric: took 271.310877ms for pod "kube-proxy-gl7th" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.602687   71603 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:14.002256   71603 pod_ready.go:92] pod "kube-scheduler-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:14.002282   71603 pod_ready.go:81] duration metric: took 399.588324ms for pod "kube-scheduler-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:14.002290   71603 pod_ready.go:38] duration metric: took 7.721240931s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 02:02:14.002306   71603 api_server.go:52] waiting for apiserver process to appear ...
	I0717 02:02:14.002355   71603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:02:14.016981   71603 api_server.go:72] duration metric: took 8.092789001s to wait for apiserver process to appear ...
	I0717 02:02:14.017007   71603 api_server.go:88] waiting for apiserver healthz status ...
	I0717 02:02:14.017026   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 02:02:14.022008   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 200:
	ok
	I0717 02:02:14.022992   71603 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 02:02:14.023010   71603 api_server.go:131] duration metric: took 5.997297ms to wait for apiserver health ...
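	The healthz probe above hits the API server's secure port directly. Outside the harness the same check could be approximated from the host with curl, skipping certificate verification (an assumption about how it would be run by hand, not part of this log):

		curl -k https://192.168.61.174:8443/healthz
		# expected body: ok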
	I0717 02:02:14.023016   71603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 02:02:14.204777   71603 system_pods.go:59] 9 kube-system pods found
	I0717 02:02:14.204806   71603 system_pods.go:61] "coredns-5cfdc65f69-5lstd" [71b74210-7395-4a48-8e1b-b49fb2faea43] Running
	I0717 02:02:14.204811   71603 system_pods.go:61] "coredns-5cfdc65f69-tn5jv" [482276d3-bfe2-4538-9dfe-a2a87a02182c] Running
	I0717 02:02:14.204816   71603 system_pods.go:61] "etcd-no-preload-391501" [c13d6752-3152-45e7-b2b9-a5275a4b42c5] Running
	I0717 02:02:14.204819   71603 system_pods.go:61] "kube-apiserver-no-preload-391501" [ba1d9920-dcaa-48d2-887b-f476d874d9ea] Running
	I0717 02:02:14.204823   71603 system_pods.go:61] "kube-controller-manager-no-preload-391501" [5e1e6aec-31b9-4b7c-a59b-f39a73b2e9a3] Running
	I0717 02:02:14.204826   71603 system_pods.go:61] "kube-proxy-gl7th" [320d9fae-f5b8-47bd-afc0-88e07e23157a] Running
	I0717 02:02:14.204829   71603 system_pods.go:61] "kube-scheduler-no-preload-391501" [a091b866-df88-4b9b-8893-bc6022704680] Running
	I0717 02:02:14.204836   71603 system_pods.go:61] "metrics-server-78fcd8795b-tnrht" [af70d47e-8e45-4e5d-bceb-e01a6c1851ff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 02:02:14.204839   71603 system_pods.go:61] "storage-provisioner" [742baa9b-d48e-4be9-8c33-64d42e1ff169] Running
	I0717 02:02:14.204847   71603 system_pods.go:74] duration metric: took 181.825073ms to wait for pod list to return data ...
	I0717 02:02:14.204854   71603 default_sa.go:34] waiting for default service account to be created ...
	I0717 02:02:14.402964   71603 default_sa.go:45] found service account: "default"
	I0717 02:02:14.402992   71603 default_sa.go:55] duration metric: took 198.131224ms for default service account to be created ...
	I0717 02:02:14.403005   71603 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 02:02:14.606371   71603 system_pods.go:86] 9 kube-system pods found
	I0717 02:02:14.606408   71603 system_pods.go:89] "coredns-5cfdc65f69-5lstd" [71b74210-7395-4a48-8e1b-b49fb2faea43] Running
	I0717 02:02:14.606418   71603 system_pods.go:89] "coredns-5cfdc65f69-tn5jv" [482276d3-bfe2-4538-9dfe-a2a87a02182c] Running
	I0717 02:02:14.606424   71603 system_pods.go:89] "etcd-no-preload-391501" [c13d6752-3152-45e7-b2b9-a5275a4b42c5] Running
	I0717 02:02:14.606430   71603 system_pods.go:89] "kube-apiserver-no-preload-391501" [ba1d9920-dcaa-48d2-887b-f476d874d9ea] Running
	I0717 02:02:14.606438   71603 system_pods.go:89] "kube-controller-manager-no-preload-391501" [5e1e6aec-31b9-4b7c-a59b-f39a73b2e9a3] Running
	I0717 02:02:14.606444   71603 system_pods.go:89] "kube-proxy-gl7th" [320d9fae-f5b8-47bd-afc0-88e07e23157a] Running
	I0717 02:02:14.606450   71603 system_pods.go:89] "kube-scheduler-no-preload-391501" [a091b866-df88-4b9b-8893-bc6022704680] Running
	I0717 02:02:14.606461   71603 system_pods.go:89] "metrics-server-78fcd8795b-tnrht" [af70d47e-8e45-4e5d-bceb-e01a6c1851ff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 02:02:14.606474   71603 system_pods.go:89] "storage-provisioner" [742baa9b-d48e-4be9-8c33-64d42e1ff169] Running
	I0717 02:02:14.606486   71603 system_pods.go:126] duration metric: took 203.473728ms to wait for k8s-apps to be running ...
	I0717 02:02:14.606497   71603 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 02:02:14.606568   71603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:02:14.622178   71603 system_svc.go:56] duration metric: took 15.671962ms WaitForService to wait for kubelet
	I0717 02:02:14.622211   71603 kubeadm.go:582] duration metric: took 8.698021688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 02:02:14.622234   71603 node_conditions.go:102] verifying NodePressure condition ...
	I0717 02:02:14.802282   71603 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 02:02:14.802309   71603 node_conditions.go:123] node cpu capacity is 2
	I0717 02:02:14.802319   71603 node_conditions.go:105] duration metric: took 180.080727ms to run NodePressure ...
	I0717 02:02:14.802330   71603 start.go:241] waiting for startup goroutines ...
	I0717 02:02:14.802337   71603 start.go:246] waiting for cluster config update ...
	I0717 02:02:14.802345   71603 start.go:255] writing updated cluster config ...
	I0717 02:02:14.802613   71603 ssh_runner.go:195] Run: rm -f paused
	I0717 02:02:14.848725   71603 start.go:600] kubectl: 1.30.2, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0717 02:02:14.850965   71603 out.go:177] * Done! kubectl is now configured to use "no-preload-391501" cluster and "default" namespace by default
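	With the profile started, the host kubeconfig now points at the new cluster; a hypothetical quick sanity check (assuming the context name matches the profile, as minikube normally sets it) would be:

		kubectl config current-context   # expected: no-preload-391501
		kubectl get nodes -o wide        # no-preload-391501 should report Ready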
	I0717 02:02:50.900829   71929 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 02:02:50.901350   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:50.901626   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:55.902558   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:55.902805   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:03:05.903753   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:03:05.904033   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:03:25.905383   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:03:25.905597   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:04:05.906576   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:04:05.906960   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:04:05.906992   71929 kubeadm.go:310] 
	I0717 02:04:05.907049   71929 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 02:04:05.907133   71929 kubeadm.go:310] 		timed out waiting for the condition
	I0717 02:04:05.907182   71929 kubeadm.go:310] 
	I0717 02:04:05.907252   71929 kubeadm.go:310] 	This error is likely caused by:
	I0717 02:04:05.907339   71929 kubeadm.go:310] 		- The kubelet is not running
	I0717 02:04:05.907516   71929 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 02:04:05.907529   71929 kubeadm.go:310] 
	I0717 02:04:05.907661   71929 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 02:04:05.907699   71929 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 02:04:05.907743   71929 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 02:04:05.907751   71929 kubeadm.go:310] 
	I0717 02:04:05.907907   71929 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 02:04:05.908043   71929 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 02:04:05.908053   71929 kubeadm.go:310] 
	I0717 02:04:05.908221   71929 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 02:04:05.908435   71929 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 02:04:05.908619   71929 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 02:04:05.908738   71929 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 02:04:05.908788   71929 kubeadm.go:310] 
	I0717 02:04:05.909079   71929 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 02:04:05.909286   71929 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 02:04:05.909452   71929 kubeadm.go:394] duration metric: took 7m58.01930975s to StartCluster
	I0717 02:04:05.909455   71929 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 02:04:05.909494   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:04:05.909552   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:04:05.952911   71929 cri.go:89] found id: ""
	I0717 02:04:05.952937   71929 logs.go:276] 0 containers: []
	W0717 02:04:05.952949   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 02:04:05.952957   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:04:05.953026   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:04:05.988490   71929 cri.go:89] found id: ""
	I0717 02:04:05.988518   71929 logs.go:276] 0 containers: []
	W0717 02:04:05.988529   71929 logs.go:278] No container was found matching "etcd"
	I0717 02:04:05.988537   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:04:05.988593   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:04:06.025228   71929 cri.go:89] found id: ""
	I0717 02:04:06.025259   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.025269   71929 logs.go:278] No container was found matching "coredns"
	I0717 02:04:06.025277   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:04:06.025342   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:04:06.060563   71929 cri.go:89] found id: ""
	I0717 02:04:06.060589   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.060599   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 02:04:06.060604   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:04:06.060660   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:04:06.095051   71929 cri.go:89] found id: ""
	I0717 02:04:06.095079   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.095091   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 02:04:06.095099   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:04:06.095150   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:04:06.131892   71929 cri.go:89] found id: ""
	I0717 02:04:06.131914   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.131921   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 02:04:06.131927   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:04:06.131973   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:04:06.168893   71929 cri.go:89] found id: ""
	I0717 02:04:06.168919   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.168930   71929 logs.go:278] No container was found matching "kindnet"
	I0717 02:04:06.168937   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 02:04:06.168995   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 02:04:06.206635   71929 cri.go:89] found id: ""
	I0717 02:04:06.206658   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.206668   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 02:04:06.206679   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:04:06.206693   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 02:04:06.308601   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:04:06.308624   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:04:06.308637   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:04:06.422081   71929 logs.go:123] Gathering logs for container status ...
	I0717 02:04:06.422116   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:04:06.467466   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 02:04:06.467496   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:04:06.521420   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 02:04:06.521457   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
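The gathering sequence above shows minikube probing, over SSH, for each expected control-plane component with `sudo crictl ps -a --quiet --name=<component>` and finding no containers at all. A minimal, hypothetical Go sketch of an equivalent local check is shown below; it is an illustration only, not minikube's actual cri.go code, and the component names are simply those that appear in the log above:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager",
		}
		for _, name := range components {
			// With --quiet, crictl prints one container ID per line (or nothing).
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name", name).Output()
			if err != nil {
				fmt.Printf("crictl failed for %q: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				// Mirrors the "No container was found matching ..." warnings above.
				fmt.Printf("no container found matching %q\n", name)
			} else {
				fmt.Printf("%q containers: %v\n", name, ids)
			}
		}
	}

An empty result for every component, as in this run, means the container runtime never started any control-plane containers, which is consistent with the kubelet failure reported next.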
	W0717 02:04:06.535167   71929 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 02:04:06.535211   71929 out.go:239] * 
	W0717 02:04:06.535263   71929 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 02:04:06.535292   71929 out.go:239] * 
	W0717 02:04:06.536098   71929 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 02:04:06.539314   71929 out.go:177] 
	W0717 02:04:06.540504   71929 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 02:04:06.540557   71929 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 02:04:06.540579   71929 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 02:04:06.541888   71929 out.go:177] 
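The repeated "[kubelet-check]" lines in the output above correspond to kubeadm polling the kubelet's local health endpoint (http://localhost:10248/healthz) until it answers or the wait-control-plane phase times out. A minimal Go sketch of an equivalent probe follows, for illustration only; the retry interval and overall deadline here are assumptions taken from the "This can take up to 4m0s" message, not kubeadm's exact implementation:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		deadline := time.Now().Add(4 * time.Minute) // assumed overall wait, per the log message
		for time.Now().Before(deadline) {
			resp, err := client.Get("http://localhost:10248/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("kubelet is healthy")
					return
				}
				fmt.Printf("kubelet healthz returned status %d\n", resp.StatusCode)
			} else {
				// Matches the failure seen above: connection refused while the kubelet is down.
				fmt.Printf("kubelet healthz probe failed: %v\n", err)
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for the kubelet to become healthy")
	}

In this run the probe never succeeds, so the suggested next steps are the ones kubeadm itself prints: `systemctl status kubelet`, `journalctl -xeu kubelet`, and inspecting containers with crictl.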
	
	
	==> CRI-O <==
	Jul 17 02:11:16 no-preload-391501 crio[717]: time="2024-07-17 02:11:16.828924471Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182276828879528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4bf89c41-d052-4810-a607-5ec6981dde7c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:11:16 no-preload-391501 crio[717]: time="2024-07-17 02:11:16.829931530Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bcbef697-3f18-4dd1-ab0a-125d58b3d8d2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:11:16 no-preload-391501 crio[717]: time="2024-07-17 02:11:16.829985029Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bcbef697-3f18-4dd1-ab0a-125d58b3d8d2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:11:16 no-preload-391501 crio[717]: time="2024-07-17 02:11:16.830213908Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f18a686c2ded98388af30bee85e8a5f3b6e2446fa37496a4ceea8949072836d,PodSandboxId:3fc741fec0fdb0783cb37a8a0ff71e22d28bd7663b3ea094a37f2e22ce431883,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721181727823644618,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 742baa9b-d48e-4be9-8c33-64d42e1ff169,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86ee513a721cdef30ecd3f51ebea2df9235862fb847cb891640cefb4ac6edecf,PodSandboxId:53336dc0c8a6326312f116c4f3ce9c3647c56a2eb68f3df197215a01f1c6276a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181726704726585,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-5lstd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b74210-7395-4a48-8e1b-b49fb2faea43,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f520e58db1d489bffd43f419e8e6e031d0057ee6a826d4ae6b04dda73b06cf37,PodSandboxId:ee2dbce30242fea4ef282127806402d3f905498ff314804f112782a237637258,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181726590376686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-tn5jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48
2276d3-bfe2-4538-9dfe-a2a87a02182c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dc3b9c490ff36d5b586cf0c5325e00bab05e22fb8939f2a8e55014fe5d917af,PodSandboxId:8c940c103c9a8fc9a970c9dda621791325ab427c7960e38b256fcde03691b213,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721181725861846880,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gl7th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 320d9fae-f5b8-47bd-afc0-88e07e23157a,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24516073158b7e84c967e58397bde021ef567c50d65b73a8726840a54140aa7,PodSandboxId:df3f228f44f4011e0be5e39f35c27aed6c9bfd18f41972a7c2fa95e43150cd8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721181714781824969,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 344acf7ebdaa0c036f41562765095ccb,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7528a2702168862e7a35193d04849f0162f3a3584a0093be8e9062c8f4cfd736,PodSandboxId:f2f2dc234d47b3355cbddf229124d5891d347274c93c501ba7e9e11cb42d2a51,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721181714741409744,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e64be7ee0f24a232f3758d919f454e09,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc815ffb334b863e88dba396a6e7448f3629bcb9fcfaa8f4be8928dc60d1ac0,PodSandboxId:0be2cbf7dc74177df671548dde0e9aae7d85f208977985af47c9902547f1fec9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721181714672241645,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b898b4cfb281d3535c0088e58000445,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618d36b0a982d3b72b5fae6920d7ecb87cf6f321d739d7ca78c42dd4a4807c8c,PodSandboxId:d12d41c04023ba058550ea77bfb064eb0526233f4480de672922d23db0209356,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721181714628410717,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3c9ab6639b2fe2032b78777103debd4,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59e9bc3378bf74596204bfe5c7bd232684b64be74f90b5dbae477205a2b4dea,PodSandboxId:6d62d4047e73fe1c1afd33d64a404d919c4c1d824c8fb46a7ab879ce3186483c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721181381945218515,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b898b4cfb281d3535c0088e58000445,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bcbef697-3f18-4dd1-ab0a-125d58b3d8d2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:11:16 no-preload-391501 crio[717]: time="2024-07-17 02:11:16.866789066Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d5dedb75-140e-4ace-9722-dc4abbe32840 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:11:16 no-preload-391501 crio[717]: time="2024-07-17 02:11:16.866862498Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d5dedb75-140e-4ace-9722-dc4abbe32840 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:11:16 no-preload-391501 crio[717]: time="2024-07-17 02:11:16.868321037Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=57f2a39f-51b7-4a27-a282-ae5d7e891178 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:11:16 no-preload-391501 crio[717]: time="2024-07-17 02:11:16.868786820Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182276868759823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=57f2a39f-51b7-4a27-a282-ae5d7e891178 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:11:16 no-preload-391501 crio[717]: time="2024-07-17 02:11:16.869356417Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=abdccf80-d7b4-4e19-9a41-ea668659e21e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:11:16 no-preload-391501 crio[717]: time="2024-07-17 02:11:16.869416891Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=abdccf80-d7b4-4e19-9a41-ea668659e21e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:11:16 no-preload-391501 crio[717]: time="2024-07-17 02:11:16.869690671Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f18a686c2ded98388af30bee85e8a5f3b6e2446fa37496a4ceea8949072836d,PodSandboxId:3fc741fec0fdb0783cb37a8a0ff71e22d28bd7663b3ea094a37f2e22ce431883,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721181727823644618,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 742baa9b-d48e-4be9-8c33-64d42e1ff169,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86ee513a721cdef30ecd3f51ebea2df9235862fb847cb891640cefb4ac6edecf,PodSandboxId:53336dc0c8a6326312f116c4f3ce9c3647c56a2eb68f3df197215a01f1c6276a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181726704726585,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-5lstd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b74210-7395-4a48-8e1b-b49fb2faea43,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f520e58db1d489bffd43f419e8e6e031d0057ee6a826d4ae6b04dda73b06cf37,PodSandboxId:ee2dbce30242fea4ef282127806402d3f905498ff314804f112782a237637258,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181726590376686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-tn5jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48
2276d3-bfe2-4538-9dfe-a2a87a02182c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dc3b9c490ff36d5b586cf0c5325e00bab05e22fb8939f2a8e55014fe5d917af,PodSandboxId:8c940c103c9a8fc9a970c9dda621791325ab427c7960e38b256fcde03691b213,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721181725861846880,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gl7th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 320d9fae-f5b8-47bd-afc0-88e07e23157a,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24516073158b7e84c967e58397bde021ef567c50d65b73a8726840a54140aa7,PodSandboxId:df3f228f44f4011e0be5e39f35c27aed6c9bfd18f41972a7c2fa95e43150cd8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721181714781824969,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 344acf7ebdaa0c036f41562765095ccb,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7528a2702168862e7a35193d04849f0162f3a3584a0093be8e9062c8f4cfd736,PodSandboxId:f2f2dc234d47b3355cbddf229124d5891d347274c93c501ba7e9e11cb42d2a51,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721181714741409744,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e64be7ee0f24a232f3758d919f454e09,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc815ffb334b863e88dba396a6e7448f3629bcb9fcfaa8f4be8928dc60d1ac0,PodSandboxId:0be2cbf7dc74177df671548dde0e9aae7d85f208977985af47c9902547f1fec9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721181714672241645,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b898b4cfb281d3535c0088e58000445,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618d36b0a982d3b72b5fae6920d7ecb87cf6f321d739d7ca78c42dd4a4807c8c,PodSandboxId:d12d41c04023ba058550ea77bfb064eb0526233f4480de672922d23db0209356,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721181714628410717,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3c9ab6639b2fe2032b78777103debd4,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59e9bc3378bf74596204bfe5c7bd232684b64be74f90b5dbae477205a2b4dea,PodSandboxId:6d62d4047e73fe1c1afd33d64a404d919c4c1d824c8fb46a7ab879ce3186483c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721181381945218515,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b898b4cfb281d3535c0088e58000445,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=abdccf80-d7b4-4e19-9a41-ea668659e21e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:11:16 no-preload-391501 crio[717]: time="2024-07-17 02:11:16.924332752Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=63d912ec-ca84-4418-9c19-7a2306340887 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:11:16 no-preload-391501 crio[717]: time="2024-07-17 02:11:16.924440041Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=63d912ec-ca84-4418-9c19-7a2306340887 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:11:16 no-preload-391501 crio[717]: time="2024-07-17 02:11:16.926313187Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6b9c1672-b393-4c4e-a315-0ea54613c5ea name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:11:16 no-preload-391501 crio[717]: time="2024-07-17 02:11:16.926825441Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182276926802343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b9c1672-b393-4c4e-a315-0ea54613c5ea name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:11:16 no-preload-391501 crio[717]: time="2024-07-17 02:11:16.927306697Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=db70c4c8-123f-4034-9d7e-564f04258363 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:11:16 no-preload-391501 crio[717]: time="2024-07-17 02:11:16.927367114Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=db70c4c8-123f-4034-9d7e-564f04258363 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:11:16 no-preload-391501 crio[717]: time="2024-07-17 02:11:16.928161414Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f18a686c2ded98388af30bee85e8a5f3b6e2446fa37496a4ceea8949072836d,PodSandboxId:3fc741fec0fdb0783cb37a8a0ff71e22d28bd7663b3ea094a37f2e22ce431883,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721181727823644618,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 742baa9b-d48e-4be9-8c33-64d42e1ff169,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86ee513a721cdef30ecd3f51ebea2df9235862fb847cb891640cefb4ac6edecf,PodSandboxId:53336dc0c8a6326312f116c4f3ce9c3647c56a2eb68f3df197215a01f1c6276a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181726704726585,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-5lstd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b74210-7395-4a48-8e1b-b49fb2faea43,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f520e58db1d489bffd43f419e8e6e031d0057ee6a826d4ae6b04dda73b06cf37,PodSandboxId:ee2dbce30242fea4ef282127806402d3f905498ff314804f112782a237637258,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181726590376686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-tn5jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48
2276d3-bfe2-4538-9dfe-a2a87a02182c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dc3b9c490ff36d5b586cf0c5325e00bab05e22fb8939f2a8e55014fe5d917af,PodSandboxId:8c940c103c9a8fc9a970c9dda621791325ab427c7960e38b256fcde03691b213,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721181725861846880,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gl7th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 320d9fae-f5b8-47bd-afc0-88e07e23157a,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24516073158b7e84c967e58397bde021ef567c50d65b73a8726840a54140aa7,PodSandboxId:df3f228f44f4011e0be5e39f35c27aed6c9bfd18f41972a7c2fa95e43150cd8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721181714781824969,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 344acf7ebdaa0c036f41562765095ccb,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7528a2702168862e7a35193d04849f0162f3a3584a0093be8e9062c8f4cfd736,PodSandboxId:f2f2dc234d47b3355cbddf229124d5891d347274c93c501ba7e9e11cb42d2a51,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721181714741409744,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e64be7ee0f24a232f3758d919f454e09,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc815ffb334b863e88dba396a6e7448f3629bcb9fcfaa8f4be8928dc60d1ac0,PodSandboxId:0be2cbf7dc74177df671548dde0e9aae7d85f208977985af47c9902547f1fec9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721181714672241645,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b898b4cfb281d3535c0088e58000445,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618d36b0a982d3b72b5fae6920d7ecb87cf6f321d739d7ca78c42dd4a4807c8c,PodSandboxId:d12d41c04023ba058550ea77bfb064eb0526233f4480de672922d23db0209356,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721181714628410717,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3c9ab6639b2fe2032b78777103debd4,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59e9bc3378bf74596204bfe5c7bd232684b64be74f90b5dbae477205a2b4dea,PodSandboxId:6d62d4047e73fe1c1afd33d64a404d919c4c1d824c8fb46a7ab879ce3186483c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721181381945218515,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b898b4cfb281d3535c0088e58000445,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=db70c4c8-123f-4034-9d7e-564f04258363 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:11:16 no-preload-391501 crio[717]: time="2024-07-17 02:11:16.967937821Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7d3141c7-7cfa-49b7-b0db-fa0fae0e37f0 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:11:16 no-preload-391501 crio[717]: time="2024-07-17 02:11:16.968007447Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7d3141c7-7cfa-49b7-b0db-fa0fae0e37f0 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:11:16 no-preload-391501 crio[717]: time="2024-07-17 02:11:16.969167045Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=64bb8f6b-4a42-484b-b51b-972e8c05c6d7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:11:16 no-preload-391501 crio[717]: time="2024-07-17 02:11:16.969508809Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182276969487184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64bb8f6b-4a42-484b-b51b-972e8c05c6d7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:11:16 no-preload-391501 crio[717]: time="2024-07-17 02:11:16.970174860Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8a250d6a-c470-4f35-a486-3570873b1a6a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:11:16 no-preload-391501 crio[717]: time="2024-07-17 02:11:16.970223877Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8a250d6a-c470-4f35-a486-3570873b1a6a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:11:16 no-preload-391501 crio[717]: time="2024-07-17 02:11:16.970430908Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f18a686c2ded98388af30bee85e8a5f3b6e2446fa37496a4ceea8949072836d,PodSandboxId:3fc741fec0fdb0783cb37a8a0ff71e22d28bd7663b3ea094a37f2e22ce431883,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721181727823644618,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 742baa9b-d48e-4be9-8c33-64d42e1ff169,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86ee513a721cdef30ecd3f51ebea2df9235862fb847cb891640cefb4ac6edecf,PodSandboxId:53336dc0c8a6326312f116c4f3ce9c3647c56a2eb68f3df197215a01f1c6276a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181726704726585,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-5lstd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b74210-7395-4a48-8e1b-b49fb2faea43,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f520e58db1d489bffd43f419e8e6e031d0057ee6a826d4ae6b04dda73b06cf37,PodSandboxId:ee2dbce30242fea4ef282127806402d3f905498ff314804f112782a237637258,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181726590376686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-tn5jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48
2276d3-bfe2-4538-9dfe-a2a87a02182c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dc3b9c490ff36d5b586cf0c5325e00bab05e22fb8939f2a8e55014fe5d917af,PodSandboxId:8c940c103c9a8fc9a970c9dda621791325ab427c7960e38b256fcde03691b213,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721181725861846880,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gl7th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 320d9fae-f5b8-47bd-afc0-88e07e23157a,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24516073158b7e84c967e58397bde021ef567c50d65b73a8726840a54140aa7,PodSandboxId:df3f228f44f4011e0be5e39f35c27aed6c9bfd18f41972a7c2fa95e43150cd8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721181714781824969,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 344acf7ebdaa0c036f41562765095ccb,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7528a2702168862e7a35193d04849f0162f3a3584a0093be8e9062c8f4cfd736,PodSandboxId:f2f2dc234d47b3355cbddf229124d5891d347274c93c501ba7e9e11cb42d2a51,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721181714741409744,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e64be7ee0f24a232f3758d919f454e09,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc815ffb334b863e88dba396a6e7448f3629bcb9fcfaa8f4be8928dc60d1ac0,PodSandboxId:0be2cbf7dc74177df671548dde0e9aae7d85f208977985af47c9902547f1fec9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721181714672241645,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b898b4cfb281d3535c0088e58000445,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618d36b0a982d3b72b5fae6920d7ecb87cf6f321d739d7ca78c42dd4a4807c8c,PodSandboxId:d12d41c04023ba058550ea77bfb064eb0526233f4480de672922d23db0209356,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721181714628410717,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3c9ab6639b2fe2032b78777103debd4,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59e9bc3378bf74596204bfe5c7bd232684b64be74f90b5dbae477205a2b4dea,PodSandboxId:6d62d4047e73fe1c1afd33d64a404d919c4c1d824c8fb46a7ab879ce3186483c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721181381945218515,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b898b4cfb281d3535c0088e58000445,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8a250d6a-c470-4f35-a486-3570873b1a6a name=/runtime.v1.RuntimeService/ListContainers
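
	The wrapped blobs above are crio's debug-level traces of CRI ListContainers requests. If a comparable snapshot is needed while the node is still up, the same RPC can be driven by hand with crictl against the socket named in the node annotations (unix:///var/run/crio/crio.sock); a minimal sketch, reusing the profile name from these logs:

	    minikube ssh -p no-preload-391501 -- \
	      sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a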
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0f18a686c2ded       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   3fc741fec0fdb       storage-provisioner
	86ee513a721cd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   53336dc0c8a63       coredns-5cfdc65f69-5lstd
	f520e58db1d48       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   ee2dbce30242f       coredns-5cfdc65f69-tn5jv
	5dc3b9c490ff3       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   9 minutes ago       Running             kube-proxy                0                   8c940c103c9a8       kube-proxy-gl7th
	d24516073158b       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   9 minutes ago       Running             kube-scheduler            2                   df3f228f44f40       kube-scheduler-no-preload-391501
	7528a27021688       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   9 minutes ago       Running             etcd                      2                   f2f2dc234d47b       etcd-no-preload-391501
	4bc815ffb334b       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   9 minutes ago       Running             kube-apiserver            3                   0be2cbf7dc741       kube-apiserver-no-preload-391501
	618d36b0a982d       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   9 minutes ago       Running             kube-controller-manager   3                   d12d41c04023b       kube-controller-manager-no-preload-391501
	d59e9bc3378bf       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   14 minutes ago      Exited              kube-apiserver            2                   6d62d4047e73f       kube-apiserver-no-preload-391501
	
	
	==> coredns [86ee513a721cdef30ecd3f51ebea2df9235862fb847cb891640cefb4ac6edecf] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f520e58db1d489bffd43f419e8e6e031d0057ee6a826d4ae6b04dda73b06cf37] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
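
	Both CoreDNS replicas report the same configuration SHA512, so they loaded the same Corefile. In a kubeadm-style cluster that Corefile lives in the standard coredns ConfigMap, which can be inspected directly; a sketch, assuming the kubectl context created by minikube carries the profile name:

	    kubectl --context no-preload-391501 -n kube-system get configmap coredns -o yaml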
	
	
	==> describe nodes <==
	Name:               no-preload-391501
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-391501
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=no-preload-391501
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T02_02_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 02:01:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-391501
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 02:11:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 02:07:16 +0000   Wed, 17 Jul 2024 02:01:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 02:07:16 +0000   Wed, 17 Jul 2024 02:01:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 02:07:16 +0000   Wed, 17 Jul 2024 02:01:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 02:07:16 +0000   Wed, 17 Jul 2024 02:01:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.174
	  Hostname:    no-preload-391501
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 09e77312cf804798aea80962cc815545
	  System UUID:                09e77312-cf80-4798-aea8-0962cc815545
	  Boot ID:                    78d77276-8a10-44e3-ab68-d9595b634af9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-5lstd                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 coredns-5cfdc65f69-tn5jv                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 etcd-no-preload-391501                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m17s
	  kube-system                 kube-apiserver-no-preload-391501             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-no-preload-391501    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-gl7th                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-scheduler-no-preload-391501             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-78fcd8795b-tnrht              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m10s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m10s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m23s (x8 over 9m23s)  kubelet          Node no-preload-391501 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m23s (x8 over 9m23s)  kubelet          Node no-preload-391501 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m23s (x7 over 9m23s)  kubelet          Node no-preload-391501 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m17s                  kubelet          Node no-preload-391501 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s                  kubelet          Node no-preload-391501 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s                  kubelet          Node no-preload-391501 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m13s                  node-controller  Node no-preload-391501 event: Registered Node no-preload-391501 in Controller
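
	The node description above is a point-in-time snapshot. While the profile is still running it can be refreshed with kubectl; a sketch, assuming the kubectl context created by minikube carries the profile name:

	    kubectl --context no-preload-391501 describe node no-preload-391501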
	
	
	==> dmesg <==
	[  +0.040220] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.662883] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.271615] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.583388] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.419636] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.054949] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060604] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.186776] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.159208] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.311702] systemd-fstab-generator[702]: Ignoring "noauto" option for root device
	[ +15.427560] systemd-fstab-generator[1170]: Ignoring "noauto" option for root device
	[  +0.065053] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.585070] systemd-fstab-generator[1288]: Ignoring "noauto" option for root device
	[Jul17 01:56] kauditd_printk_skb: 90 callbacks suppressed
	[ +26.359295] kauditd_printk_skb: 85 callbacks suppressed
	[Jul17 02:01] kauditd_printk_skb: 1 callbacks suppressed
	[ +12.831185] systemd-fstab-generator[3061]: Ignoring "noauto" option for root device
	[  +0.064067] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.484799] systemd-fstab-generator[3391]: Ignoring "noauto" option for root device
	[  +0.097050] kauditd_printk_skb: 55 callbacks suppressed
	[Jul17 02:02] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.265336] systemd-fstab-generator[3593]: Ignoring "noauto" option for root device
	[  +6.592664] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [7528a2702168862e7a35193d04849f0162f3a3584a0093be8e9062c8f4cfd736] <==
	{"level":"info","ts":"2024-07-17T02:01:55.110973Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T02:01:55.114121Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T02:01:55.114158Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T02:01:55.11539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2d81e878ac6904a4 switched to configuration voters=(3279157608688714916)"}
	{"level":"info","ts":"2024-07-17T02:01:55.11671Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"98a332d8ef0073ef","local-member-id":"2d81e878ac6904a4","added-peer-id":"2d81e878ac6904a4","added-peer-peer-urls":["https://192.168.61.174:2380"]}
	{"level":"info","ts":"2024-07-17T02:01:56.050642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2d81e878ac6904a4 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-17T02:01:56.050696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2d81e878ac6904a4 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-17T02:01:56.050713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2d81e878ac6904a4 received MsgPreVoteResp from 2d81e878ac6904a4 at term 1"}
	{"level":"info","ts":"2024-07-17T02:01:56.050726Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2d81e878ac6904a4 became candidate at term 2"}
	{"level":"info","ts":"2024-07-17T02:01:56.050732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2d81e878ac6904a4 received MsgVoteResp from 2d81e878ac6904a4 at term 2"}
	{"level":"info","ts":"2024-07-17T02:01:56.050743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2d81e878ac6904a4 became leader at term 2"}
	{"level":"info","ts":"2024-07-17T02:01:56.05075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2d81e878ac6904a4 elected leader 2d81e878ac6904a4 at term 2"}
	{"level":"info","ts":"2024-07-17T02:01:56.053088Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"2d81e878ac6904a4","local-member-attributes":"{Name:no-preload-391501 ClientURLs:[https://192.168.61.174:2379]}","request-path":"/0/members/2d81e878ac6904a4/attributes","cluster-id":"98a332d8ef0073ef","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T02:01:56.053367Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T02:01:56.053654Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T02:01:56.054047Z","caller":"etcdserver/server.go:2628","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T02:01:56.055945Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-17T02:01:56.056585Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T02:01:56.056621Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T02:01:56.057159Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-17T02:01:56.057691Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.174:2379"}
	{"level":"info","ts":"2024-07-17T02:01:56.057894Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T02:01:56.058513Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"98a332d8ef0073ef","local-member-id":"2d81e878ac6904a4","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T02:01:56.058704Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T02:01:56.058767Z","caller":"etcdserver/server.go:2652","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 02:11:17 up 15 min,  0 users,  load average: 0.47, 0.23, 0.15
	Linux no-preload-391501 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4bc815ffb334b863e88dba396a6e7448f3629bcb9fcfaa8f4be8928dc60d1ac0] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0717 02:06:58.606467       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 02:06:58.606626       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0717 02:06:58.607604       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 02:06:58.608771       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:07:58.608085       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 02:07:58.608190       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0717 02:07:58.609347       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:07:58.609465       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 02:07:58.609501       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0717 02:07:58.610643       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:09:58.609510       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 02:09:58.609750       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0717 02:09:58.611279       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:09:58.611620       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 02:09:58.611734       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0717 02:09:58.613015       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
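
	The repeated 503s above are the apiserver's OpenAPI aggregation controller failing to reach the backend behind the v1beta1.metrics.k8s.io APIService; the kubelet log further down shows the metrics-server pod stuck in ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4, which explains the unavailability. The aggregated API and its workload can be checked directly; a sketch, assuming the usual minikube context name (the Deployment name metrics-server is inferred from the ReplicaSet seen in the controller-manager log):

	    kubectl --context no-preload-391501 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context no-preload-391501 -n kube-system describe deployment metrics-server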
	
	
	==> kube-apiserver [d59e9bc3378bf74596204bfe5c7bd232684b64be74f90b5dbae477205a2b4dea] <==
	W0717 02:01:48.807834       1 logging.go:55] [core] [Channel #39 SubChannel #40]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:48.816361       1 logging.go:55] [core] [Channel #63 SubChannel #64]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:48.852056       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:48.899658       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:48.904981       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:48.927091       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:48.953825       1 logging.go:55] [core] [Channel #60 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:49.083948       1 logging.go:55] [core] [Channel #57 SubChannel #58]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:49.101102       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:49.147039       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:49.150378       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:49.159982       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:49.393949       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:49.404492       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:49.446090       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:49.537252       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:49.584864       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:49.621137       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:50.047346       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:50.098914       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:50.112837       1 logging.go:55] [core] [Channel #24 SubChannel #25]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:50.162222       1 logging.go:55] [core] [Channel #36 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:50.228851       1 logging.go:55] [core] [Channel #21 SubChannel #22]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:50.334419       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:50.592200       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [618d36b0a982d3b72b5fae6920d7ecb87cf6f321d739d7ca78c42dd4a4807c8c] <==
	E0717 02:06:05.435643       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 02:06:05.476935       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:06:35.442104       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 02:06:35.484206       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:07:05.451210       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 02:07:05.496155       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 02:07:16.775604       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-391501"
	E0717 02:07:35.458170       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 02:07:35.504852       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 02:08:02.390794       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="300.844µs"
	E0717 02:08:05.464476       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 02:08:05.517916       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 02:08:15.388246       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="114.184µs"
	E0717 02:08:35.471936       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 02:08:35.525788       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:09:05.480806       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 02:09:05.533796       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:09:35.489496       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 02:09:35.547619       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:10:05.496398       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 02:10:05.556783       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:10:35.502683       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 02:10:35.565192       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:11:05.512165       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 02:11:05.573253       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [5dc3b9c490ff36d5b586cf0c5325e00bab05e22fb8939f2a8e55014fe5d917af] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0717 02:02:06.508259       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0717 02:02:06.534276       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.61.174"]
	E0717 02:02:06.534379       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0717 02:02:06.626308       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0717 02:02:06.626382       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 02:02:06.626427       1 server_linux.go:170] "Using iptables Proxier"
	I0717 02:02:06.631457       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0717 02:02:06.631960       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0717 02:02:06.631999       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 02:02:06.639252       1 config.go:197] "Starting service config controller"
	I0717 02:02:06.639301       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 02:02:06.639344       1 config.go:104] "Starting endpoint slice config controller"
	I0717 02:02:06.639353       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 02:02:06.646610       1 config.go:326] "Starting node config controller"
	I0717 02:02:06.648378       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 02:02:06.739657       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 02:02:06.739699       1 shared_informer.go:320] Caches are synced for service config
	I0717 02:02:06.758641       1 shared_informer.go:320] Caches are synced for node config
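
	The truncated errors at the top of this section are kube-proxy's nftables cleanup being rejected by the kernel ("Operation not supported"); the following lines show it detecting no IPv6 iptables support, settling on the IPv4 iptables proxier, and syncing all caches normally. The same log can also be pulled through the API server using the pod name shown above; a sketch, assuming the usual minikube context name:

	    kubectl --context no-preload-391501 -n kube-system logs kube-proxy-gl7th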
	
	
	==> kube-scheduler [d24516073158b7e84c967e58397bde021ef567c50d65b73a8726840a54140aa7] <==
	E0717 02:01:57.637279       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0717 02:01:57.632394       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0717 02:01:57.639717       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 02:01:57.639815       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0717 02:01:58.463224       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 02:01:58.463274       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0717 02:01:58.471101       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 02:01:58.471299       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0717 02:01:58.475216       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 02:01:58.475392       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0717 02:01:58.484419       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 02:01:58.484501       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0717 02:01:58.607244       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 02:01:58.607376       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0717 02:01:58.667304       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 02:01:58.667355       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0717 02:01:58.757455       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 02:01:58.757508       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0717 02:01:58.803860       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 02:01:58.804064       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0717 02:01:58.845739       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 02:01:58.845796       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0717 02:01:58.877090       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 02:01:58.877144       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0717 02:02:01.114723       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 02:09:00 no-preload-391501 kubelet[3397]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:09:00 no-preload-391501 kubelet[3397]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:09:00 no-preload-391501 kubelet[3397]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:09:00 no-preload-391501 kubelet[3397]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:09:05 no-preload-391501 kubelet[3397]: E0717 02:09:05.371819    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-tnrht" podUID="af70d47e-8e45-4e5d-bceb-e01a6c1851ff"
	Jul 17 02:09:17 no-preload-391501 kubelet[3397]: E0717 02:09:17.372292    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-tnrht" podUID="af70d47e-8e45-4e5d-bceb-e01a6c1851ff"
	Jul 17 02:09:32 no-preload-391501 kubelet[3397]: E0717 02:09:32.373916    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-tnrht" podUID="af70d47e-8e45-4e5d-bceb-e01a6c1851ff"
	Jul 17 02:09:45 no-preload-391501 kubelet[3397]: E0717 02:09:45.372273    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-tnrht" podUID="af70d47e-8e45-4e5d-bceb-e01a6c1851ff"
	Jul 17 02:10:00 no-preload-391501 kubelet[3397]: E0717 02:10:00.378849    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-tnrht" podUID="af70d47e-8e45-4e5d-bceb-e01a6c1851ff"
	Jul 17 02:10:00 no-preload-391501 kubelet[3397]: E0717 02:10:00.408810    3397 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:10:00 no-preload-391501 kubelet[3397]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:10:00 no-preload-391501 kubelet[3397]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:10:00 no-preload-391501 kubelet[3397]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:10:00 no-preload-391501 kubelet[3397]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:10:12 no-preload-391501 kubelet[3397]: E0717 02:10:12.372129    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-tnrht" podUID="af70d47e-8e45-4e5d-bceb-e01a6c1851ff"
	Jul 17 02:10:24 no-preload-391501 kubelet[3397]: E0717 02:10:24.372251    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-tnrht" podUID="af70d47e-8e45-4e5d-bceb-e01a6c1851ff"
	Jul 17 02:10:35 no-preload-391501 kubelet[3397]: E0717 02:10:35.371773    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-tnrht" podUID="af70d47e-8e45-4e5d-bceb-e01a6c1851ff"
	Jul 17 02:10:50 no-preload-391501 kubelet[3397]: E0717 02:10:50.372691    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-tnrht" podUID="af70d47e-8e45-4e5d-bceb-e01a6c1851ff"
	Jul 17 02:11:00 no-preload-391501 kubelet[3397]: E0717 02:11:00.399376    3397 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:11:00 no-preload-391501 kubelet[3397]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:11:00 no-preload-391501 kubelet[3397]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:11:00 no-preload-391501 kubelet[3397]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:11:00 no-preload-391501 kubelet[3397]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:11:01 no-preload-391501 kubelet[3397]: E0717 02:11:01.371725    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-tnrht" podUID="af70d47e-8e45-4e5d-bceb-e01a6c1851ff"
	Jul 17 02:11:15 no-preload-391501 kubelet[3397]: E0717 02:11:15.372441    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-tnrht" podUID="af70d47e-8e45-4e5d-bceb-e01a6c1851ff"
	
	
	==> storage-provisioner [0f18a686c2ded98388af30bee85e8a5f3b6e2446fa37496a4ceea8949072836d] <==
	I0717 02:02:07.928946       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 02:02:07.938070       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 02:02:07.938123       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 02:02:07.958702       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 02:02:07.958867       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-391501_824b21b9-6595-4a43-8430-09b988a3df19!
	I0717 02:02:07.969325       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c12a90b4-fb97-4132-86c3-46a7bab25a56", APIVersion:"v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-391501_824b21b9-6595-4a43-8430-09b988a3df19 became leader
	I0717 02:02:08.059354       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-391501_824b21b9-6595-4a43-8430-09b988a3df19!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-391501 -n no-preload-391501
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-391501 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-tnrht
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-391501 describe pod metrics-server-78fcd8795b-tnrht
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-391501 describe pod metrics-server-78fcd8795b-tnrht: exit status 1 (98.897405ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-tnrht" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-391501 describe pod metrics-server-78fcd8795b-tnrht: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.27s)
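Note: the post-mortem above found a single non-running pod, metrics-server-78fcd8795b-tnrht, which the kubelet log shows stuck in ImagePullBackOff pulling fake.domain/registry.k8s.io/echoserver:1.4; the follow-up describe then exited 1 with NotFound, plausibly because it was run without a namespace flag while the kubelet log places the pod in kube-system (the pod may also have been replaced in the meantime). The short Go sketch below reproduces those two post-mortem steps by shelling out to kubectl, the same way helpers_test.go does. It is a minimal illustration only, not part of the test suite; the file name is hypothetical, and it assumes kubectl is on PATH and that the no-preload-391501 kubeconfig context still exists.

    // describe_nonrunning.go — hypothetical stand-alone helper; mirrors the two
    // post-mortem kubectl calls seen in the report (helpers_test.go:261 and :277).
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        ctx := "no-preload-391501" // kubeconfig context, taken from the log above

        // Step 1: list pods whose phase is not Running, across all namespaces.
        out, err := exec.Command("kubectl", "--context", ctx,
            "get", "po", "-A",
            "-o=jsonpath={.items[*].metadata.name}",
            "--field-selector=status.phase!=Running").Output()
        if err != nil {
            fmt.Println("listing non-running pods failed:", err)
            return
        }
        pods := strings.Fields(string(out))
        fmt.Println("non-running pods:", pods)

        // Step 2: describe each of them. As in the report, no -n/--namespace flag
        // is passed, so a pod that lives in kube-system will come back NotFound
        // here unless the kubeconfig's default namespace happens to match.
        for _, pod := range pods {
            desc, derr := exec.Command("kubectl", "--context", ctx,
                "describe", "pod", pod).CombinedOutput()
            fmt.Printf("describe %s: err=%v\n%s\n", pod, derr, desc)
        }
    }

Run with go run against a live profile, this prints the same two pieces of information that helpers_test.go logs at lines 272 and 277 above, which makes the namespace behaviour of the describe step easy to check directly.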

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
E0717 02:04:22.091201   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/auto-894370/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
E0717 02:04:23.432335   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kindnet-894370/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
E0717 02:04:39.021569   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
E0717 02:05:03.264543   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/custom-flannel-894370/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
E0717 02:05:14.901811   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/enable-default-cni-894370/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
E0717 02:05:17.179465   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
E0717 02:06:02.067105   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
E0717 02:06:03.459460   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/flannel-894370/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
E0717 02:06:26.308560   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/custom-flannel-894370/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
E0717 02:06:37.944639   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/enable-default-cni-894370/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
E0717 02:06:58.312855   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/bridge-894370/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
E0717 02:07:26.503834   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/flannel-894370/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
E0717 02:07:58.379743   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
E0717 02:07:59.045745   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/auto-894370/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
E0717 02:08:00.389180   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kindnet-894370/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
E0717 02:08:20.231875   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
E0717 02:08:21.356995   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/bridge-894370/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
E0717 02:09:39.020968   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
E0717 02:10:14.901233   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/enable-default-cni-894370/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
E0717 02:10:17.180207   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
E0717 02:11:03.459055   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/flannel-894370/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
E0717 02:11:58.313131   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/bridge-894370/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
E0717 02:12:58.380052   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
E0717 02:12:59.044870   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/auto-894370/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
E0717 02:13:00.388519   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kindnet-894370/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-901761 -n old-k8s-version-901761
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-901761 -n old-k8s-version-901761: exit status 2 (229.243582ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-901761" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-901761 -n old-k8s-version-901761
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-901761 -n old-k8s-version-901761: exit status 2 (225.217575ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-901761 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-901761 logs -n 25: (1.567030408s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-894370 sudo cat                              | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo                                  | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo                                  | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo                                  | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo find                             | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo crio                             | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-894370                                       | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	| delete  | -p                                                     | disable-driver-mounts-255698 | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | disable-driver-mounts-255698                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:48 UTC |
	|         | default-k8s-diff-port-738184                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-940222            | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-940222                                  | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-738184  | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC | 17 Jul 24 01:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC |                     |
	|         | default-k8s-diff-port-738184                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-391501             | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC | 17 Jul 24 01:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-391501                                   | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-940222                 | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-901761        | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-940222                                  | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC | 17 Jul 24 02:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-738184       | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-391501                  | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:59 UTC |
	|         | default-k8s-diff-port-738184                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p no-preload-391501 --memory=2200                     | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 02:02 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-901761                              | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-901761             | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-901761                              | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 01:51:47
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 01:51:47.395737   71929 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:51:47.396000   71929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:51:47.396010   71929 out.go:304] Setting ErrFile to fd 2...
	I0717 01:51:47.396016   71929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:51:47.396184   71929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 01:51:47.396684   71929 out.go:298] Setting JSON to false
	I0717 01:51:47.397549   71929 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5649,"bootTime":1721175458,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:51:47.397606   71929 start.go:139] virtualization: kvm guest
	I0717 01:51:47.399758   71929 out.go:177] * [old-k8s-version-901761] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:51:47.400960   71929 notify.go:220] Checking for updates...
	I0717 01:51:47.400966   71929 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 01:51:47.402266   71929 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:51:47.403356   71929 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:51:47.404532   71929 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 01:51:47.405524   71929 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:51:47.406572   71929 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:51:47.407935   71929 config.go:182] Loaded profile config "old-k8s-version-901761": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 01:51:47.408358   71929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:51:47.408427   71929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:51:47.422931   71929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46821
	I0717 01:51:47.423315   71929 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:51:47.423809   71929 main.go:141] libmachine: Using API Version  1
	I0717 01:51:47.423831   71929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:51:47.424123   71929 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:51:47.424259   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:51:47.426227   71929 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0717 01:51:47.427500   71929 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:51:47.427770   71929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:51:47.427801   71929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:51:47.442080   71929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36301
	I0717 01:51:47.442438   71929 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:51:47.442901   71929 main.go:141] libmachine: Using API Version  1
	I0717 01:51:47.442924   71929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:51:47.443208   71929 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:51:47.443382   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:51:47.476327   71929 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 01:51:47.477607   71929 start.go:297] selected driver: kvm2
	I0717 01:51:47.477620   71929 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:51:47.477762   71929 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:51:47.478432   71929 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:51:47.478541   71929 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19264-3908/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:51:47.493611   71929 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:51:47.493967   71929 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:51:47.494039   71929 cni.go:84] Creating CNI manager for ""
	I0717 01:51:47.494056   71929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:51:47.494147   71929 start.go:340] cluster config:
	{Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:51:47.494271   71929 iso.go:125] acquiring lock: {Name:mk1e382ea32906eee794c9b90164432d2562b456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:51:47.496056   71929 out.go:177] * Starting "old-k8s-version-901761" primary control-plane node in "old-k8s-version-901761" cluster
	I0717 01:51:45.178864   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:51:47.497229   71929 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 01:51:47.497266   71929 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 01:51:47.497279   71929 cache.go:56] Caching tarball of preloaded images
	I0717 01:51:47.497368   71929 preload.go:172] Found /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 01:51:47.497379   71929 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0717 01:51:47.497484   71929 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/config.json ...
	I0717 01:51:47.497671   71929 start.go:360] acquireMachinesLock for old-k8s-version-901761: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:51:51.258826   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:51:54.330879   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:00.410811   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:03.482811   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:09.562828   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:12.634828   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:18.714910   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:21.786892   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:27.866863   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:30.938805   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:37.022827   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:40.090853   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:46.170839   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:49.242854   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:55.322824   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:58.394792   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:04.474811   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:07.546855   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:13.626861   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:16.698832   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:22.778828   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:25.850864   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:31.930814   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:35.002842   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:41.082839   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:44.154796   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:50.234823   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:53.306914   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:59.386835   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:02.458751   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:08.538853   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:11.610833   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:17.690816   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:20.762793   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:26.842837   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:29.914866   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:35.994838   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:39.066806   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:45.146846   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:48.218841   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:54.298823   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:57.370838   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:55:00.375050   71522 start.go:364] duration metric: took 3m54.700923144s to acquireMachinesLock for "default-k8s-diff-port-738184"
	I0717 01:55:00.375103   71522 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:55:00.375110   71522 fix.go:54] fixHost starting: 
	I0717 01:55:00.375500   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:00.375532   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:00.390583   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39651
	I0717 01:55:00.390957   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:00.391392   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:00.391412   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:00.391704   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:00.391927   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:00.392069   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:00.393467   71522 fix.go:112] recreateIfNeeded on default-k8s-diff-port-738184: state=Stopped err=<nil>
	I0717 01:55:00.393508   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	W0717 01:55:00.393658   71522 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:55:00.395826   71522 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-738184" ...
	I0717 01:55:00.397256   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Start
	I0717 01:55:00.397401   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Ensuring networks are active...
	I0717 01:55:00.398079   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Ensuring network default is active
	I0717 01:55:00.398390   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Ensuring network mk-default-k8s-diff-port-738184 is active
	I0717 01:55:00.398710   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Getting domain xml...
	I0717 01:55:00.399275   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Creating domain...
	I0717 01:55:00.372573   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:55:00.372621   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:55:00.372933   71146 buildroot.go:166] provisioning hostname "embed-certs-940222"
	I0717 01:55:00.372957   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:55:00.373131   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:55:00.374934   71146 machine.go:97] duration metric: took 4m37.428393808s to provisionDockerMachine
	I0717 01:55:00.374969   71146 fix.go:56] duration metric: took 4m37.449104762s for fixHost
	I0717 01:55:00.374974   71146 start.go:83] releasing machines lock for "embed-certs-940222", held for 4m37.449121677s
	W0717 01:55:00.374996   71146 start.go:714] error starting host: provision: host is not running
	W0717 01:55:00.375080   71146 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 01:55:00.375088   71146 start.go:729] Will try again in 5 seconds ...
	I0717 01:55:01.590292   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting to get IP...
	I0717 01:55:01.591187   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:01.591589   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:01.591657   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:01.591578   72583 retry.go:31] will retry after 266.165899ms: waiting for machine to come up
	I0717 01:55:01.859307   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:01.859724   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:01.859751   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:01.859695   72583 retry.go:31] will retry after 282.941451ms: waiting for machine to come up
	I0717 01:55:02.144389   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:02.144756   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:02.144787   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:02.144701   72583 retry.go:31] will retry after 327.203414ms: waiting for machine to come up
	I0717 01:55:02.473217   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:02.473681   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:02.473705   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:02.473606   72583 retry.go:31] will retry after 553.917043ms: waiting for machine to come up
	I0717 01:55:03.029379   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:03.029762   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:03.029783   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:03.029738   72583 retry.go:31] will retry after 617.312209ms: waiting for machine to come up
	I0717 01:55:03.648372   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:03.648701   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:03.648733   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:03.648670   72583 retry.go:31] will retry after 641.28503ms: waiting for machine to come up
	I0717 01:55:04.291493   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:04.291986   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:04.292019   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:04.291870   72583 retry.go:31] will retry after 1.133455116s: waiting for machine to come up
	I0717 01:55:05.426672   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:05.426943   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:05.426972   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:05.426892   72583 retry.go:31] will retry after 1.00384113s: waiting for machine to come up
	I0717 01:55:05.376907   71146 start.go:360] acquireMachinesLock for embed-certs-940222: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:55:06.432146   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:06.432502   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:06.432525   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:06.432477   72583 retry.go:31] will retry after 1.472142907s: waiting for machine to come up
	I0717 01:55:07.906974   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:07.907407   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:07.907437   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:07.907336   72583 retry.go:31] will retry after 1.775986179s: waiting for machine to come up
	I0717 01:55:09.685396   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:09.685792   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:09.685822   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:09.685756   72583 retry.go:31] will retry after 2.663700716s: waiting for machine to come up
	I0717 01:55:12.351616   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:12.351985   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:12.352017   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:12.351921   72583 retry.go:31] will retry after 2.409004894s: waiting for machine to come up
	I0717 01:55:14.763493   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:14.763859   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:14.763876   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:14.763828   72583 retry.go:31] will retry after 3.049843419s: waiting for machine to come up
	I0717 01:55:19.031713   71603 start.go:364] duration metric: took 4m8.751453112s to acquireMachinesLock for "no-preload-391501"
	I0717 01:55:19.031779   71603 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:55:19.031787   71603 fix.go:54] fixHost starting: 
	I0717 01:55:19.032306   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:19.032352   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:19.049376   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41829
	I0717 01:55:19.049877   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:19.050387   71603 main.go:141] libmachine: Using API Version  1
	I0717 01:55:19.050409   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:19.050752   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:19.050935   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:19.051104   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 01:55:19.052805   71603 fix.go:112] recreateIfNeeded on no-preload-391501: state=Stopped err=<nil>
	I0717 01:55:19.052832   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	W0717 01:55:19.052989   71603 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:55:19.056667   71603 out.go:177] * Restarting existing kvm2 VM for "no-preload-391501" ...
	I0717 01:55:19.058078   71603 main.go:141] libmachine: (no-preload-391501) Calling .Start
	I0717 01:55:19.058314   71603 main.go:141] libmachine: (no-preload-391501) Ensuring networks are active...
	I0717 01:55:19.059126   71603 main.go:141] libmachine: (no-preload-391501) Ensuring network default is active
	I0717 01:55:19.059466   71603 main.go:141] libmachine: (no-preload-391501) Ensuring network mk-no-preload-391501 is active
	I0717 01:55:19.059958   71603 main.go:141] libmachine: (no-preload-391501) Getting domain xml...
	I0717 01:55:19.060746   71603 main.go:141] libmachine: (no-preload-391501) Creating domain...
	I0717 01:55:17.816307   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.816746   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Found IP for machine: 192.168.39.170
	I0717 01:55:17.816765   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Reserving static IP address...
	I0717 01:55:17.816776   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has current primary IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.817337   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Reserved static IP address: 192.168.39.170
	I0717 01:55:17.817366   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for SSH to be available...
	I0717 01:55:17.817389   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-738184", mac: "52:54:00:e6:fe:fe", ip: "192.168.39.170"} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:17.817420   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | skip adding static IP to network mk-default-k8s-diff-port-738184 - found existing host DHCP lease matching {name: "default-k8s-diff-port-738184", mac: "52:54:00:e6:fe:fe", ip: "192.168.39.170"}
	I0717 01:55:17.817443   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Getting to WaitForSSH function...
	I0717 01:55:17.819693   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.820022   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:17.820056   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.820171   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Using SSH client type: external
	I0717 01:55:17.820203   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa (-rw-------)
	I0717 01:55:17.820245   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.170 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:55:17.820259   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | About to run SSH command:
	I0717 01:55:17.820280   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | exit 0
	I0717 01:55:17.942987   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | SSH cmd err, output: <nil>: 
	I0717 01:55:17.943370   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetConfigRaw
	I0717 01:55:17.943945   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetIP
	I0717 01:55:17.946638   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.946993   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:17.947021   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.947268   71522 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/config.json ...
	I0717 01:55:17.947479   71522 machine.go:94] provisionDockerMachine start ...
	I0717 01:55:17.947497   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:17.947732   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:17.950032   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.950367   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:17.950397   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.950489   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:17.950664   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:17.950827   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:17.950959   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:17.951108   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:17.951300   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:17.951311   71522 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:55:18.051147   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:55:18.051180   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetMachineName
	I0717 01:55:18.051421   71522 buildroot.go:166] provisioning hostname "default-k8s-diff-port-738184"
	I0717 01:55:18.051456   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetMachineName
	I0717 01:55:18.051655   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.054480   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.055024   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.055053   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.055262   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.055473   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.055643   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.055783   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.055928   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:18.056077   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:18.056089   71522 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-738184 && echo "default-k8s-diff-port-738184" | sudo tee /etc/hostname
	I0717 01:55:18.170268   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-738184
	
	I0717 01:55:18.170299   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.173037   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.173337   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.173369   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.173485   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.173673   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.173851   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.173957   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.174110   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:18.174322   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:18.174349   71522 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-738184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-738184/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-738184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:55:18.279963   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:55:18.279997   71522 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:55:18.280030   71522 buildroot.go:174] setting up certificates
	I0717 01:55:18.280042   71522 provision.go:84] configureAuth start
	I0717 01:55:18.280054   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetMachineName
	I0717 01:55:18.280393   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetIP
	I0717 01:55:18.282887   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.283201   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.283231   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.283370   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.285399   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.285662   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.285691   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.285795   71522 provision.go:143] copyHostCerts
	I0717 01:55:18.285865   71522 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:55:18.285884   71522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:55:18.285971   71522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:55:18.286084   71522 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:55:18.286095   71522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:55:18.286129   71522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:55:18.286205   71522 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:55:18.286214   71522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:55:18.286247   71522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:55:18.286313   71522 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-738184 san=[127.0.0.1 192.168.39.170 default-k8s-diff-port-738184 localhost minikube]
	I0717 01:55:18.386547   71522 provision.go:177] copyRemoteCerts
	I0717 01:55:18.386627   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:55:18.386658   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.388930   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.389292   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.389322   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.389465   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.389662   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.389804   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.389944   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:18.469031   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:55:18.493607   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0717 01:55:18.517024   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 01:55:18.539757   71522 provision.go:87] duration metric: took 259.702663ms to configureAuth
	I0717 01:55:18.539793   71522 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:55:18.540064   71522 config.go:182] Loaded profile config "default-k8s-diff-port-738184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:55:18.540178   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.542831   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.543174   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.543196   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.543388   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.543599   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.543843   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.544011   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.544172   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:18.544343   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:18.544362   71522 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:55:18.804633   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:55:18.804690   71522 machine.go:97] duration metric: took 857.197634ms to provisionDockerMachine
	I0717 01:55:18.804706   71522 start.go:293] postStartSetup for "default-k8s-diff-port-738184" (driver="kvm2")
	I0717 01:55:18.804720   71522 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:55:18.804743   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:18.805049   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:55:18.805073   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.807835   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.808127   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.808147   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.808319   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.808497   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.808670   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.808823   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:18.889297   71522 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:55:18.893587   71522 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:55:18.893615   71522 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:55:18.893694   71522 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:55:18.893779   71522 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:55:18.893886   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:55:18.903319   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:18.927700   71522 start.go:296] duration metric: took 122.979492ms for postStartSetup
	I0717 01:55:18.927748   71522 fix.go:56] duration metric: took 18.552636525s for fixHost
	I0717 01:55:18.927775   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.930483   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.930768   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.930791   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.931004   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.931192   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.931361   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.931511   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.931677   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:18.931873   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:18.931887   71522 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:55:19.031515   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181319.004563133
	
	I0717 01:55:19.031541   71522 fix.go:216] guest clock: 1721181319.004563133
	I0717 01:55:19.031552   71522 fix.go:229] Guest: 2024-07-17 01:55:19.004563133 +0000 UTC Remote: 2024-07-17 01:55:18.927754613 +0000 UTC m=+253.390645105 (delta=76.80852ms)
	I0717 01:55:19.031611   71522 fix.go:200] guest clock delta is within tolerance: 76.80852ms
	I0717 01:55:19.031623   71522 start.go:83] releasing machines lock for "default-k8s-diff-port-738184", held for 18.656540342s
	I0717 01:55:19.031661   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:19.031940   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetIP
	I0717 01:55:19.034537   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.034881   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:19.034911   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.035036   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:19.035557   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:19.035750   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:19.035822   71522 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:55:19.035875   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:19.036000   71522 ssh_runner.go:195] Run: cat /version.json
	I0717 01:55:19.036027   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:19.038509   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.038860   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:19.038892   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.038935   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.038982   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:19.039156   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:19.039328   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:19.039361   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:19.039389   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.039488   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:19.039537   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:19.039702   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:19.039835   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:19.040047   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:19.140208   71522 ssh_runner.go:195] Run: systemctl --version
	I0717 01:55:19.146454   71522 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:55:19.293584   71522 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:55:19.300750   71522 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:55:19.300817   71522 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:55:19.321596   71522 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:55:19.321621   71522 start.go:495] detecting cgroup driver to use...
	I0717 01:55:19.321684   71522 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:55:19.337664   71522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:55:19.351856   71522 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:55:19.351922   71522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:55:19.366355   71522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:55:19.380735   71522 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:55:19.495916   71522 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:55:19.646426   71522 docker.go:233] disabling docker service ...
	I0717 01:55:19.646501   71522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:55:19.665764   71522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:55:19.683893   71522 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:55:19.814704   71522 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:55:19.958389   71522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:55:19.973223   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:55:19.992869   71522 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 01:55:19.992937   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.003696   71522 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:55:20.003762   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.014415   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.025303   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.036715   71522 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:55:20.047872   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.059666   71522 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.079479   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
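Editor's note: the four sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, and an unprivileged-port sysctl). As a rough illustration only, and not minikube's actual code path (which runs these over SSH), here is a minimal Go sketch applying equivalent substitutions to config text in memory; the sample input is hypothetical.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

func main() {
	// Hypothetical starting content of /etc/crio/crio.conf.d/02-crio.conf.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.8"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

	// pause_image -> registry.k8s.io/pause:3.9 (mirrors the first sed above).
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)

	// cgroup_manager -> cgroupfs.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = strings.Replace(conf,
		`cgroup_manager = "cgroupfs"`,
		"cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"", 1)

	// Ensure a default_sysctls block exists and allows unprivileged low ports.
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}

	fmt.Print(conf)
}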
	I0717 01:55:20.092424   71522 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:55:20.103225   71522 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:55:20.103284   71522 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:55:20.120620   71522 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:55:20.136439   71522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:20.284796   71522 ssh_runner.go:195] Run: sudo systemctl restart crio
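Editor's note: the sysctl probe above fails with status 255 because br_netfilter is not loaded yet, so the tooling falls back to modprobe and then enables IP forwarding before restarting CRI-O. A minimal local sketch of that check-then-load pattern, assuming a Linux host with root; this is not the actual ssh_runner code.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"

	if _, err := os.Stat(key); err != nil {
		// The sysctl file is absent until the br_netfilter module is loaded.
		fmt.Println("bridge-nf-call-iptables missing, loading br_netfilter:", err)
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Println("modprobe failed:", err, string(out))
			return
		}
	}

	// Enable forwarding, mirroring `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Println("could not enable ip_forward (need root?):", err)
	}
}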
	I0717 01:55:20.427605   71522 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:55:20.427698   71522 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:55:20.433477   71522 start.go:563] Will wait 60s for crictl version
	I0717 01:55:20.433537   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:55:20.437399   71522 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:55:20.479192   71522 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:55:20.479289   71522 ssh_runner.go:195] Run: crio --version
	I0717 01:55:20.507655   71522 ssh_runner.go:195] Run: crio --version
	I0717 01:55:20.537084   71522 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 01:55:20.538435   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetIP
	I0717 01:55:20.541200   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:20.541493   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:20.541531   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:20.541772   71522 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 01:55:20.546261   71522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
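Editor's note: the bash one-liner above drops any stale host.minikube.internal entry from /etc/hosts and appends the current mapping via a temporary copy. A rough Go equivalent of that filter-and-append step, operating on a hypothetical hosts file path rather than the guest's /etc/hosts.

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost removes existing lines ending in "\t<name>" and appends "ip\tname".
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry, like `grep -v $'\thost.minikube.internal$'`
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Hypothetical local copy of /etc/hosts used for illustration.
	if err := upsertHost("hosts.test", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println("update failed:", err)
	}
}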
	I0717 01:55:20.559802   71522 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-738184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.2 ClusterName:default-k8s-diff-port-738184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:55:20.559946   71522 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:55:20.560001   71522 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:55:20.381503   71603 main.go:141] libmachine: (no-preload-391501) Waiting to get IP...
	I0717 01:55:20.382632   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:20.383105   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:20.383210   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:20.383077   72724 retry.go:31] will retry after 193.198351ms: waiting for machine to come up
	I0717 01:55:20.577611   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:20.578117   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:20.578145   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:20.578067   72724 retry.go:31] will retry after 254.406992ms: waiting for machine to come up
	I0717 01:55:20.834633   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:20.835088   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:20.835116   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:20.835057   72724 retry.go:31] will retry after 459.446617ms: waiting for machine to come up
	I0717 01:55:21.295939   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:21.296384   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:21.296409   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:21.296343   72724 retry.go:31] will retry after 515.654185ms: waiting for machine to come up
	I0717 01:55:21.813613   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:21.814140   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:21.814178   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:21.814104   72724 retry.go:31] will retry after 652.322198ms: waiting for machine to come up
	I0717 01:55:22.468223   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:22.468858   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:22.468897   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:22.468774   72724 retry.go:31] will retry after 767.220835ms: waiting for machine to come up
	I0717 01:55:23.237341   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:23.237685   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:23.237716   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:23.237633   72724 retry.go:31] will retry after 1.083873631s: waiting for machine to come up
	I0717 01:55:24.323463   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:24.323983   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:24.324011   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:24.323934   72724 retry.go:31] will retry after 1.255667305s: waiting for machine to come up
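Editor's note: the no-preload-391501 VM is still waiting for a DHCP lease, so retry.go keeps polling with steadily growing delays (193ms, 254ms, 459ms, ...). A generic sketch of that retry-with-growing-backoff pattern; the durations and the poll function below are made up for illustration.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retry calls fn until it succeeds or the deadline passes, sleeping a little
// longer (jitter omitted here) after each failed attempt.
func retry(deadline time.Duration, fn func() error) error {
	start := time.Now()
	wait := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
		}
		fmt.Printf("attempt %d failed (%v), will retry after %s\n", attempt, err, wait)
		time.Sleep(wait)
		wait = wait * 3 / 2 // grow the delay, roughly like the log above
	}
}

func main() {
	calls := 0
	err := retry(5*time.Second, func() error {
		calls++
		if calls < 4 {
			return errors.New("unable to find current IP address")
		}
		return nil
	})
	fmt.Println("result:", err)
}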
	I0717 01:55:20.597329   71522 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 01:55:20.597409   71522 ssh_runner.go:195] Run: which lz4
	I0717 01:55:20.602100   71522 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 01:55:20.606863   71522 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:55:20.606900   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 01:55:22.053002   71522 crio.go:462] duration metric: took 1.450939378s to copy over tarball
	I0717 01:55:22.053071   71522 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:55:24.356349   71522 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.303245698s)
	I0717 01:55:24.356378   71522 crio.go:469] duration metric: took 2.303353381s to extract the tarball
	I0717 01:55:24.356385   71522 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 01:55:24.402866   71522 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:55:24.446681   71522 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:55:24.446709   71522 cache_images.go:84] Images are preloaded, skipping loading
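Editor's note: whether the preload tarball needs to be copied and extracted is decided by listing images with `sudo crictl images --output json` and checking for the expected kube images (compare the "couldn't find preloaded image" line before extraction with "all images are preloaded" after). A minimal sketch of that check, assuming crictl's JSON has the shape {"images":[{"repoTags":[...]}]} (verify against your crictl version) and using an inline sample instead of running crictl.

package main

import (
	"encoding/json"
	"fmt"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether raw (crictl images JSON) lists the wanted tag.
func hasImage(raw []byte, want string) (bool, error) {
	var out crictlImages
	if err := json.Unmarshal(raw, &out); err != nil {
		return false, err
	}
	for _, img := range out.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	// Inline sample standing in for `sudo crictl images --output json`.
	sample := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.30.2"]}]}`)
	ok, err := hasImage(sample, "registry.k8s.io/kube-apiserver:v1.30.2")
	fmt.Println(ok, err)
}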
	I0717 01:55:24.446720   71522 kubeadm.go:934] updating node { 192.168.39.170 8444 v1.30.2 crio true true} ...
	I0717 01:55:24.446844   71522 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-738184 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-738184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
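Editor's note: the kubelet drop-in above is rendered from the node's name, IP and Kubernetes version. As a toy illustration only (minikube's real template lives in its source tree and carries more fields), a text/template sketch that produces a similar ExecStart line from the values shown in this log.

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Name}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Values taken from the log above; the template itself is illustrative.
	_ = t.Execute(os.Stdout, struct{ Version, Name, IP string }{
		Version: "v1.30.2",
		Name:    "default-k8s-diff-port-738184",
		IP:      "192.168.39.170",
	})
}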
	I0717 01:55:24.446931   71522 ssh_runner.go:195] Run: crio config
	I0717 01:55:24.499717   71522 cni.go:84] Creating CNI manager for ""
	I0717 01:55:24.499744   71522 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:55:24.499759   71522 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:55:24.499787   71522 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.170 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-738184 NodeName:default-k8s-diff-port-738184 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:55:24.499965   71522 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.170
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-738184"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:55:24.500039   71522 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 01:55:24.510488   71522 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:55:24.510568   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:55:24.520830   71522 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0717 01:55:24.538018   71522 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:55:24.556287   71522 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
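Editor's note: the kubeadm.yaml written above carries several config objects separated by "---"; the KubeletConfiguration fragment is the one that pins cgroupDriver to cgroupfs to match the CRI-O setting applied earlier. A small sketch that reads those fields back out with gopkg.in/yaml.v3 (an assumption: any YAML library with multi-document decoding would do), against a truncated stand-in for the file.

package main

import (
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	// Truncated stand-in for /var/tmp/minikube/kubeadm.yaml shown above.
	doc := `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.30.2
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
`
	dec := yaml.NewDecoder(strings.NewReader(doc))
	for {
		var m map[string]interface{}
		err := dec.Decode(&m)
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		if m["kind"] == "KubeletConfiguration" {
			fmt.Println("cgroupDriver:", m["cgroupDriver"])
			fmt.Println("clusterDomain:", m["clusterDomain"])
		}
	}
}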
	I0717 01:55:24.574973   71522 ssh_runner.go:195] Run: grep 192.168.39.170	control-plane.minikube.internal$ /etc/hosts
	I0717 01:55:24.579058   71522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.170	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:55:24.591752   71522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:24.712285   71522 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:55:24.729387   71522 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184 for IP: 192.168.39.170
	I0717 01:55:24.729411   71522 certs.go:194] generating shared ca certs ...
	I0717 01:55:24.729432   71522 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:55:24.729596   71522 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:55:24.729650   71522 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:55:24.729662   71522 certs.go:256] generating profile certs ...
	I0717 01:55:24.729776   71522 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/client.key
	I0717 01:55:24.729847   71522 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/apiserver.key.44902a6f
	I0717 01:55:24.729907   71522 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/proxy-client.key
	I0717 01:55:24.730044   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:55:24.730086   71522 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:55:24.730099   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:55:24.730135   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:55:24.730183   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:55:24.730222   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:55:24.730277   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:24.731142   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:55:24.762240   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:55:24.788746   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:55:24.825379   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:55:24.853821   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 01:55:24.887105   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:55:24.910834   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:55:24.934566   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 01:55:24.959709   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:55:24.983722   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:55:25.007312   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:55:25.031576   71522 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:55:25.049348   71522 ssh_runner.go:195] Run: openssl version
	I0717 01:55:25.055410   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:55:25.066104   71522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:55:25.070616   71522 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:55:25.070675   71522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:55:25.076604   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:55:25.087284   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:55:25.098383   71522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:55:25.103262   71522 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:55:25.103331   71522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:55:25.109170   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:55:25.119940   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:55:25.130829   71522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:25.135659   71522 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:25.135734   71522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:25.141583   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:55:25.152770   71522 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:55:25.157395   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:55:25.163543   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:55:25.169580   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:55:25.175754   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:55:25.181771   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:55:25.187935   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
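Editor's note: each `openssl x509 -checkend 86400` call above asks whether a control-plane certificate expires within the next 24 hours; exit status 0 means it is still valid for that window. An equivalent check in Go against a PEM file; the path in main is taken from the log but the helper is illustrative.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires within d, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, "err:", err)
}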
	I0717 01:55:25.193614   71522 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-738184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.2 ClusterName:default-k8s-diff-port-738184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:55:25.193727   71522 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:55:25.193770   71522 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:55:25.230871   71522 cri.go:89] found id: ""
	I0717 01:55:25.230954   71522 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:55:25.241336   71522 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:55:25.241357   71522 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:55:25.241410   71522 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:55:25.251637   71522 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:55:25.253030   71522 kubeconfig.go:125] found "default-k8s-diff-port-738184" server: "https://192.168.39.170:8444"
	I0717 01:55:25.255926   71522 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:55:25.265878   71522 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.170
	I0717 01:55:25.265915   71522 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:55:25.265927   71522 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:55:25.265982   71522 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:55:25.305929   71522 cri.go:89] found id: ""
	I0717 01:55:25.306015   71522 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:55:25.322581   71522 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:55:25.332334   71522 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:55:25.332356   71522 kubeadm.go:157] found existing configuration files:
	
	I0717 01:55:25.332407   71522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 01:55:25.342132   71522 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:55:25.342193   71522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:55:25.351628   71522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 01:55:25.360765   71522 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:55:25.360833   71522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:55:25.370167   71522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 01:55:25.379057   71522 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:55:25.379124   71522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:55:25.389470   71522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 01:55:25.399142   71522 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:55:25.399210   71522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:55:25.409452   71522 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:55:25.421509   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:25.545698   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:25.580838   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:25.581295   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:25.581322   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:25.581247   72724 retry.go:31] will retry after 1.354947672s: waiting for machine to come up
	I0717 01:55:26.937260   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:26.937746   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:26.937774   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:26.937696   72724 retry.go:31] will retry after 1.818074273s: waiting for machine to come up
	I0717 01:55:28.758015   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:28.758489   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:28.758517   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:28.758449   72724 retry.go:31] will retry after 2.782465023s: waiting for machine to come up
	I0717 01:55:26.599380   71522 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.053644988s)
	I0717 01:55:26.599416   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:26.807765   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:26.878767   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:26.965940   71522 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:55:26.966023   71522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:55:27.466587   71522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:55:27.966138   71522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:55:27.983649   71522 api_server.go:72] duration metric: took 1.017709312s to wait for apiserver process to appear ...
	I0717 01:55:27.983678   71522 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:55:27.983701   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:27.984214   71522 api_server.go:269] stopped: https://192.168.39.170:8444/healthz: Get "https://192.168.39.170:8444/healthz": dial tcp 192.168.39.170:8444: connect: connection refused
	I0717 01:55:28.483780   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:30.862416   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:55:30.862464   71522 api_server.go:103] status: https://192.168.39.170:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:55:30.862479   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:30.869667   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:55:30.869718   71522 api_server.go:103] status: https://192.168.39.170:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:55:30.983899   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:30.988670   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:55:30.988704   71522 api_server.go:103] status: https://192.168.39.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:55:31.484233   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:31.488939   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:55:31.488978   71522 api_server.go:103] status: https://192.168.39.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:55:31.984611   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:31.988738   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 200:
	ok
	I0717 01:55:31.996182   71522 api_server.go:141] control plane version: v1.30.2
	I0717 01:55:31.996207   71522 api_server.go:131] duration metric: took 4.012523131s to wait for apiserver health ...
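Editor's note: the healthz wait above tolerates "connection refused" while the apiserver starts, then 403 (anonymous access before RBAC bootstrap), then 500 while post-start hooks finish, and only stops on a 200 response whose body is "ok". A stripped-down sketch of that polling loop; the InsecureSkipVerify client is an assumption made here for talking to the apiserver's self-signed certificate without a CA bundle, and is not how minikube authenticates.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   3 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// e.g. "connect: connection refused" while the apiserver starts.
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK && string(body) == "ok" {
			return nil
		}
		// 403/500 bodies like those above: not healthy yet, keep polling.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.39.170:8444/healthz", 2*time.Minute))
}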
	I0717 01:55:31.996216   71522 cni.go:84] Creating CNI manager for ""
	I0717 01:55:31.996222   71522 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:55:31.998122   71522 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:55:31.999536   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:55:32.010501   71522 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 01:55:32.030227   71522 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:55:32.039923   71522 system_pods.go:59] 9 kube-system pods found
	I0717 01:55:32.039954   71522 system_pods.go:61] "coredns-7db6d8ff4d-9w26c" [530f4d52-5fdc-47c4-8919-44430bf71e05] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:55:32.039988   71522 system_pods.go:61] "coredns-7db6d8ff4d-js7sn" [fe3951c5-d98d-4221-b71c-fc4f548b31d8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:55:32.039998   71522 system_pods.go:61] "etcd-default-k8s-diff-port-738184" [a08737dd-a140-4c63-bf0f-9d3527d49de0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 01:55:32.040003   71522 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-738184" [b24a4ca2-48f9-4603-b6e4-e3fb1ca58e40] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 01:55:32.040013   71522 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-738184" [dd62e27b-a3f1-4da6-b968-6be40743a8fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 01:55:32.040020   71522 system_pods.go:61] "kube-proxy-c4n94" [97eee4e8-4f36-412f-9064-57515ab6e932] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 01:55:32.040033   71522 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-738184" [eb87eec9-7533-4704-bd5a-7075d9d8c2f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 01:55:32.040041   71522 system_pods.go:61] "metrics-server-569cc877fc-gcjkt" [1859140e-a901-43c2-8c04-b4f8eb63e774] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:55:32.040046   71522 system_pods.go:61] "storage-provisioner" [b36904ec-ef3f-4aee-9276-fe1285e10876] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 01:55:32.040053   71522 system_pods.go:74] duration metric: took 9.802793ms to wait for pod list to return data ...
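Editor's note: the pod inventory above comes from listing the kube-system namespace through the freshly restarted apiserver. A hedged client-go sketch of the same listing; the kubeconfig path below is hypothetical and the error handling is simplified, so this is not minikube's system_pods.go.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the test run uses its own profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("  %s\t%s\n", p.Name, p.Status.Phase)
	}
}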
	I0717 01:55:32.040060   71522 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:55:32.043233   71522 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:55:32.043259   71522 node_conditions.go:123] node cpu capacity is 2
	I0717 01:55:32.043270   71522 node_conditions.go:105] duration metric: took 3.202451ms to run NodePressure ...
	I0717 01:55:32.043285   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:32.350948   71522 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 01:55:32.356119   71522 kubeadm.go:739] kubelet initialised
	I0717 01:55:32.356143   71522 kubeadm.go:740] duration metric: took 5.164025ms waiting for restarted kubelet to initialise ...
	I0717 01:55:32.356153   71522 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:55:32.361501   71522 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.366747   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.366770   71522 pod_ready.go:81] duration metric: took 5.246954ms for pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.366778   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.366785   71522 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.371049   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.371066   71522 pod_ready.go:81] duration metric: took 4.275157ms for pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.371073   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.371078   71522 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.375338   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.375361   71522 pod_ready.go:81] duration metric: took 4.27092ms for pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.375369   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.375379   71522 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.434545   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.434583   71522 pod_ready.go:81] duration metric: took 59.196717ms for pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.434593   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.434601   71522 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.836139   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.836178   71522 pod_ready.go:81] duration metric: took 401.568097ms for pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.836194   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.836212   71522 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c4n94" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:33.234032   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "kube-proxy-c4n94" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:33.234060   71522 pod_ready.go:81] duration metric: took 397.83937ms for pod "kube-proxy-c4n94" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:33.234071   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "kube-proxy-c4n94" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:33.234076   71522 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:33.633953   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:33.633981   71522 pod_ready.go:81] duration metric: took 399.893316ms for pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:33.633992   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:33.633998   71522 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:34.034511   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:34.034560   71522 pod_ready.go:81] duration metric: took 400.544281ms for pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:34.034574   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:34.034583   71522 pod_ready.go:38] duration metric: took 1.678420144s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:55:34.034599   71522 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 01:55:34.049235   71522 ops.go:34] apiserver oom_adj: -16
	I0717 01:55:34.049261   71522 kubeadm.go:597] duration metric: took 8.807897214s to restartPrimaryControlPlane
	I0717 01:55:34.049272   71522 kubeadm.go:394] duration metric: took 8.855664434s to StartCluster
	I0717 01:55:34.049292   71522 settings.go:142] acquiring lock: {Name:mk284e98550e98148329d8d8958a44beebf5635c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:55:34.049374   71522 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:55:34.050992   71522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:55:34.051239   71522 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.170 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:55:34.051307   71522 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 01:55:34.051409   71522 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-738184"
	I0717 01:55:34.051454   71522 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-738184"
	W0717 01:55:34.051465   71522 addons.go:243] addon storage-provisioner should already be in state true
	I0717 01:55:34.051497   71522 host.go:66] Checking if "default-k8s-diff-port-738184" exists ...
	I0717 01:55:34.051511   71522 config.go:182] Loaded profile config "default-k8s-diff-port-738184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:55:34.051498   71522 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-738184"
	I0717 01:55:34.051502   71522 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-738184"
	I0717 01:55:34.051564   71522 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-738184"
	I0717 01:55:34.051587   71522 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-738184"
	W0717 01:55:34.051612   71522 addons.go:243] addon metrics-server should already be in state true
	I0717 01:55:34.051686   71522 host.go:66] Checking if "default-k8s-diff-port-738184" exists ...
	I0717 01:55:34.051803   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.051845   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.052097   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.052151   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.052331   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.052383   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.054788   71522 out.go:177] * Verifying Kubernetes components...
	I0717 01:55:34.056293   71522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:34.067345   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39845
	I0717 01:55:34.067345   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44423
	I0717 01:55:34.067821   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.067911   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.068370   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.068390   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.068515   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.068526   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43231
	I0717 01:55:34.068535   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.068709   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.068991   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.068997   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.069278   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.069320   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.069529   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.069560   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.069611   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.069629   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.069977   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.070184   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:34.074013   71522 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-738184"
	W0717 01:55:34.074036   71522 addons.go:243] addon default-storageclass should already be in state true
	I0717 01:55:34.074062   71522 host.go:66] Checking if "default-k8s-diff-port-738184" exists ...
	I0717 01:55:34.074422   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.074463   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.085256   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43497
	I0717 01:55:34.085694   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35251
	I0717 01:55:34.085716   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.086207   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.086378   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.086402   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.086785   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.086945   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:34.086947   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.086999   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.087327   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.087624   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:34.088695   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:34.089320   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:34.090932   71522 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 01:55:34.090932   71522 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:31.543587   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:31.544073   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:31.544102   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:31.544012   72724 retry.go:31] will retry after 2.898539616s: waiting for machine to come up
	I0717 01:55:34.444315   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:34.444828   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:34.444870   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:34.444790   72724 retry.go:31] will retry after 4.252719028s: waiting for machine to come up
	I0717 01:55:34.092892   71522 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:55:34.092910   71522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 01:55:34.092926   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:34.092985   71522 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 01:55:34.092993   71522 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 01:55:34.093003   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:34.095340   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35765
	I0717 01:55:34.095840   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.096397   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.096434   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.096567   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.096819   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.096979   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.097029   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:34.097058   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.097498   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.097536   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.097881   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:34.097897   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:34.097899   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:34.097923   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.098075   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:34.098105   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:34.098286   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:34.098320   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:34.098449   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:34.098461   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:34.113190   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43997
	I0717 01:55:34.113544   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.114033   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.114059   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.114375   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.114575   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:34.116332   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:34.116544   71522 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 01:55:34.116563   71522 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 01:55:34.116583   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:34.119693   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.119992   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:34.120017   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.120457   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:34.120722   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:34.120965   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:34.121652   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:34.247964   71522 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:55:34.266521   71522 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-738184" to be "Ready" ...
	I0717 01:55:34.370296   71522 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 01:55:34.370318   71522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 01:55:34.380102   71522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:55:34.394620   71522 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 01:55:34.394639   71522 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 01:55:34.409328   71522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 01:55:34.416653   71522 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:55:34.416684   71522 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 01:55:34.445296   71522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:55:35.605781   71522 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.196419762s)
	I0717 01:55:35.605843   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.605858   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.605854   71522 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.160520147s)
	I0717 01:55:35.605778   71522 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.225640358s)
	I0717 01:55:35.605929   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.605944   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.605988   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.606007   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.606293   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.606300   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.606309   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.606315   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.606319   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.606329   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.606333   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.606349   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.606357   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.606367   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.606371   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.606398   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.606410   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.606424   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.606640   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.607811   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.607852   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.607866   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.607874   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.607892   71522 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-738184"
	I0717 01:55:35.607815   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.607878   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.607829   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.607959   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.607842   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.613691   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.613717   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.614019   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.614025   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.614081   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.615871   71522 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0717 01:55:38.700025   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.700533   71603 main.go:141] libmachine: (no-preload-391501) Found IP for machine: 192.168.61.174
	I0717 01:55:38.700555   71603 main.go:141] libmachine: (no-preload-391501) Reserving static IP address...
	I0717 01:55:38.700572   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has current primary IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.701013   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "no-preload-391501", mac: "52:54:00:e6:6b:1b", ip: "192.168.61.174"} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.701033   71603 main.go:141] libmachine: (no-preload-391501) Reserved static IP address: 192.168.61.174
	I0717 01:55:38.701049   71603 main.go:141] libmachine: (no-preload-391501) DBG | skip adding static IP to network mk-no-preload-391501 - found existing host DHCP lease matching {name: "no-preload-391501", mac: "52:54:00:e6:6b:1b", ip: "192.168.61.174"}
	I0717 01:55:38.701064   71603 main.go:141] libmachine: (no-preload-391501) DBG | Getting to WaitForSSH function...
	I0717 01:55:38.701080   71603 main.go:141] libmachine: (no-preload-391501) Waiting for SSH to be available...
	I0717 01:55:38.703218   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.703577   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.703605   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.703755   71603 main.go:141] libmachine: (no-preload-391501) DBG | Using SSH client type: external
	I0717 01:55:38.703773   71603 main.go:141] libmachine: (no-preload-391501) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa (-rw-------)
	I0717 01:55:38.703791   71603 main.go:141] libmachine: (no-preload-391501) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:55:38.703809   71603 main.go:141] libmachine: (no-preload-391501) DBG | About to run SSH command:
	I0717 01:55:38.703817   71603 main.go:141] libmachine: (no-preload-391501) DBG | exit 0
	I0717 01:55:38.827046   71603 main.go:141] libmachine: (no-preload-391501) DBG | SSH cmd err, output: <nil>: 
	I0717 01:55:38.827413   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetConfigRaw
	I0717 01:55:38.828102   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetIP
	I0717 01:55:38.831229   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.831782   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.831814   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.832140   71603 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/config.json ...
	I0717 01:55:38.832347   71603 machine.go:94] provisionDockerMachine start ...
	I0717 01:55:38.832367   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:38.832574   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:38.835302   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.835710   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.835735   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.835954   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:38.836173   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:38.836345   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:38.836521   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:38.836691   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:38.836928   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:38.836947   71603 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:55:38.943173   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:55:38.943213   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetMachineName
	I0717 01:55:38.943491   71603 buildroot.go:166] provisioning hostname "no-preload-391501"
	I0717 01:55:38.943513   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetMachineName
	I0717 01:55:38.943725   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:38.946396   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.946872   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.946900   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.946980   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:38.947164   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:38.947339   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:38.947518   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:38.947695   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:38.947849   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:38.947869   71603 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-391501 && echo "no-preload-391501" | sudo tee /etc/hostname
	I0717 01:55:39.070382   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-391501
	
	I0717 01:55:39.070429   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.073539   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.073904   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.073941   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.074203   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:39.074426   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.074624   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.074880   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:39.075132   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:39.075348   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:39.075373   71603 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-391501' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-391501/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-391501' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:55:39.195604   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:55:39.195634   71603 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:55:39.195649   71603 buildroot.go:174] setting up certificates
	I0717 01:55:39.195656   71603 provision.go:84] configureAuth start
	I0717 01:55:39.195665   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetMachineName
	I0717 01:55:39.195952   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetIP
	I0717 01:55:39.198409   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.198792   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.198822   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.198996   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.201509   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.201870   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.201901   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.202078   71603 provision.go:143] copyHostCerts
	I0717 01:55:39.202153   71603 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:55:39.202166   71603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:55:39.202221   71603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:55:39.202313   71603 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:55:39.202320   71603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:55:39.202339   71603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:55:39.202387   71603 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:55:39.202394   71603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:55:39.202410   71603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:55:39.202456   71603 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.no-preload-391501 san=[127.0.0.1 192.168.61.174 localhost minikube no-preload-391501]
	I0717 01:55:39.550166   71603 provision.go:177] copyRemoteCerts
	I0717 01:55:39.550224   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:55:39.550249   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.552616   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.552990   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.553020   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.553135   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:39.553298   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.553460   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:39.553559   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 01:55:39.638467   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:55:39.664166   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 01:55:39.689416   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 01:55:39.714130   71603 provision.go:87] duration metric: took 518.463378ms to configureAuth
	I0717 01:55:39.714159   71603 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:55:39.714362   71603 config.go:182] Loaded profile config "no-preload-391501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:55:39.714440   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.717269   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.717694   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.717722   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.717880   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:39.718080   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.718240   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.718393   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:39.718621   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:39.718793   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:39.718809   71603 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:55:39.982066   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:55:39.982095   71603 machine.go:97] duration metric: took 1.149734372s to provisionDockerMachine
	I0717 01:55:39.982110   71603 start.go:293] postStartSetup for "no-preload-391501" (driver="kvm2")
	I0717 01:55:39.982127   71603 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:55:39.982147   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:39.982429   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:55:39.982445   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.984935   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.985232   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.985269   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.985372   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:39.985553   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.985793   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:39.986010   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 01:55:40.074439   71603 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:55:40.079515   71603 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:55:40.079541   71603 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:55:40.079617   71603 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:55:40.079708   71603 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:55:40.079831   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:55:40.090783   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:40.121212   71603 start.go:296] duration metric: took 139.087761ms for postStartSetup
	I0717 01:55:40.121257   71603 fix.go:56] duration metric: took 21.089468917s for fixHost
	I0717 01:55:40.121281   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:40.124208   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.124517   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:40.124545   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.124753   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:40.124940   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:40.125119   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:40.125269   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:40.125430   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:40.125626   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:40.125638   71603 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:55:40.239538   71929 start.go:364] duration metric: took 3m52.741834986s to acquireMachinesLock for "old-k8s-version-901761"
	I0717 01:55:40.239610   71929 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:55:40.239618   71929 fix.go:54] fixHost starting: 
	I0717 01:55:40.240021   71929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:40.240054   71929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:40.257464   71929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39993
	I0717 01:55:40.257866   71929 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:40.258287   71929 main.go:141] libmachine: Using API Version  1
	I0717 01:55:40.258311   71929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:40.258672   71929 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:40.258871   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:40.259041   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetState
	I0717 01:55:40.260529   71929 fix.go:112] recreateIfNeeded on old-k8s-version-901761: state=Stopped err=<nil>
	I0717 01:55:40.260568   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	W0717 01:55:40.260721   71929 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:55:40.262590   71929 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-901761" ...
	I0717 01:55:35.617123   71522 addons.go:510] duration metric: took 1.565817066s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0717 01:55:36.270109   71522 node_ready.go:53] node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:38.270489   71522 node_ready.go:53] node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:40.270966   71522 node_ready.go:53] node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:40.239384   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181340.205508074
	
	I0717 01:55:40.239409   71603 fix.go:216] guest clock: 1721181340.205508074
	I0717 01:55:40.239419   71603 fix.go:229] Guest: 2024-07-17 01:55:40.205508074 +0000 UTC Remote: 2024-07-17 01:55:40.121261572 +0000 UTC m=+269.976034747 (delta=84.246502ms)
	I0717 01:55:40.239445   71603 fix.go:200] guest clock delta is within tolerance: 84.246502ms
	I0717 01:55:40.239453   71603 start.go:83] releasing machines lock for "no-preload-391501", held for 21.207695176s
	I0717 01:55:40.239486   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:40.239768   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetIP
	I0717 01:55:40.242534   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.242923   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:40.242956   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.243159   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:40.243649   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:40.243826   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:40.243924   71603 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:55:40.243975   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:40.244045   71603 ssh_runner.go:195] Run: cat /version.json
	I0717 01:55:40.244071   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:40.246599   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.246958   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:40.246984   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.247089   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:40.247153   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.247254   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:40.247401   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:40.247486   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:40.247510   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.247579   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 01:55:40.247669   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:40.247861   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:40.248031   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:40.248169   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 01:55:40.328497   71603 ssh_runner.go:195] Run: systemctl --version
	I0717 01:55:40.350092   71603 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:55:40.497644   71603 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:55:40.504094   71603 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:55:40.504164   71603 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:55:40.526752   71603 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:55:40.526777   71603 start.go:495] detecting cgroup driver to use...
	I0717 01:55:40.526842   71603 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:55:40.543537   71603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:55:40.557551   71603 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:55:40.557606   71603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:55:40.571755   71603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:55:40.585548   71603 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:55:40.702991   71603 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:55:40.849192   71603 docker.go:233] disabling docker service ...
	I0717 01:55:40.849276   71603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:55:40.864697   71603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:55:40.877940   71603 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:55:41.043588   71603 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:55:41.175359   71603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:55:41.191170   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:55:41.212440   71603 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 01:55:41.212508   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.224335   71603 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:55:41.224411   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.235721   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.247575   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.260018   71603 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:55:41.271526   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.285999   71603 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.307653   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.319272   71603 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:55:41.330544   71603 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:55:41.330637   71603 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:55:41.346698   71603 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:55:41.361983   71603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:41.490052   71603 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:55:41.639509   71603 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:55:41.639626   71603 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:55:41.646714   71603 start.go:563] Will wait 60s for crictl version
	I0717 01:55:41.646793   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:41.650900   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:55:41.688112   71603 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:55:41.688188   71603 ssh_runner.go:195] Run: crio --version
	I0717 01:55:41.717335   71603 ssh_runner.go:195] Run: crio --version
	I0717 01:55:41.750767   71603 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
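The sed edits logged just above (01:55:41.21–41.31) point CRI-O at the pause:3.10 image, switch it to the cgroupfs cgroup manager, and open unprivileged low ports. As a minimal sketch only (the real drop-in on the guest carries additional settings, so this reconstructs just the keys touched above, not a replacement procedure), the same state could be written in one step:

# Illustrative reconstruction of the keys changed in /etc/crio/crio.conf.d/02-crio.conf
sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
EOF
sudo systemctl restart crio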
	I0717 01:55:40.263857   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .Start
	I0717 01:55:40.264019   71929 main.go:141] libmachine: (old-k8s-version-901761) Ensuring networks are active...
	I0717 01:55:40.264709   71929 main.go:141] libmachine: (old-k8s-version-901761) Ensuring network default is active
	I0717 01:55:40.265165   71929 main.go:141] libmachine: (old-k8s-version-901761) Ensuring network mk-old-k8s-version-901761 is active
	I0717 01:55:40.265581   71929 main.go:141] libmachine: (old-k8s-version-901761) Getting domain xml...
	I0717 01:55:40.266340   71929 main.go:141] libmachine: (old-k8s-version-901761) Creating domain...
	I0717 01:55:41.562582   71929 main.go:141] libmachine: (old-k8s-version-901761) Waiting to get IP...
	I0717 01:55:41.563329   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:41.563802   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:41.563890   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:41.563781   72905 retry.go:31] will retry after 216.264296ms: waiting for machine to come up
	I0717 01:55:41.781168   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:41.781662   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:41.781690   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:41.781629   72905 retry.go:31] will retry after 275.269814ms: waiting for machine to come up
	I0717 01:55:42.058127   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:42.058525   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:42.058564   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:42.058498   72905 retry.go:31] will retry after 348.024497ms: waiting for machine to come up
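The retry loop above is minikube polling libvirt until the freshly started domain picks up a DHCP lease. An equivalent manual check from the host (illustrative; the network name and MAC are the ones printed in the log) would be:

# List DHCP leases on the profile's libvirt network and filter by the VM's MAC
virsh net-dhcp-leases mk-old-k8s-version-901761 | grep -i '52:54:00:8f:84:01'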
	I0717 01:55:41.752123   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetIP
	I0717 01:55:41.755114   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:41.755571   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:41.755602   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:41.755863   71603 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 01:55:41.760869   71603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:55:41.775414   71603 kubeadm.go:883] updating cluster {Name:no-preload-391501 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-391501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:55:41.775563   71603 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 01:55:41.775609   71603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:55:41.815115   71603 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0717 01:55:41.815141   71603 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 01:55:41.815207   71603 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:41.815241   71603 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:41.815279   71603 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:41.815290   71603 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:41.815207   71603 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:41.815304   71603 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0717 01:55:41.815239   71603 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:41.815258   71603 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:41.817894   71603 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:41.817939   71603 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:41.817892   71603 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:41.817888   71603 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0717 01:55:41.818033   71603 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:41.817891   71603 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:41.817900   71603 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:41.817978   71603 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:42.014545   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0717 01:55:42.030064   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:42.034517   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:42.123584   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:42.130122   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:42.134935   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:42.136170   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:42.173650   71603 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0717 01:55:42.173707   71603 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:42.173718   71603 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0717 01:55:42.173755   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.173767   71603 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:42.173820   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.219689   71603 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0717 01:55:42.219745   71603 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:42.219792   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.240802   71603 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0717 01:55:42.240847   71603 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:42.240907   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.251152   71603 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0717 01:55:42.251189   71603 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:42.251225   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.254790   71603 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0717 01:55:42.254849   71603 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:42.254886   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:42.254895   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.254916   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:42.254951   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:42.255006   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:42.257984   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:42.267440   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:42.395407   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0717 01:55:42.395471   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0717 01:55:42.395513   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0717 01:55:42.395522   71603 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:55:42.395558   71603 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:55:42.395582   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0717 01:55:42.395592   71603 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:55:42.395663   71603 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:55:42.397740   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0717 01:55:42.397813   71603 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:55:42.420577   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0717 01:55:42.420602   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0717 01:55:42.420619   71603 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:55:42.420640   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0717 01:55:42.420662   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:55:42.420676   71603 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:55:42.420705   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0717 01:55:42.420711   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0717 01:55:42.420738   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0717 01:55:43.737662   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:44.581683   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.160996964s)
	I0717 01:55:44.581730   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0717 01:55:44.581753   71603 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:55:44.581754   71603 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.161058602s)
	I0717 01:55:44.581788   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0717 01:55:44.581810   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:55:44.581858   71603 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 01:55:44.581900   71603 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:44.581928   71603 ssh_runner.go:195] Run: which crictl
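With no preload tarball applied, each cached image is pushed into the guest individually: the stale tag is removed with crictl and the tarball is loaded with podman into the shared containers/storage that CRI-O reads. A hand-run equivalent for one image (illustrative, reusing the exact paths and tags printed above) would be:

# Inside the guest: drop the missing tag, load the cached tarball, confirm CRI-O sees it
sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
sudo crictl images | grep kube-proxy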
	I0717 01:55:41.270830   71522 node_ready.go:49] node "default-k8s-diff-port-738184" has status "Ready":"True"
	I0717 01:55:41.270853   71522 node_ready.go:38] duration metric: took 7.004304151s for node "default-k8s-diff-port-738184" to be "Ready" ...
	I0717 01:55:41.270868   71522 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:55:41.278587   71522 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.285210   71522 pod_ready.go:92] pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:41.285236   71522 pod_ready.go:81] duration metric: took 6.623347ms for pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.285250   71522 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.291110   71522 pod_ready.go:92] pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:41.291133   71522 pod_ready.go:81] duration metric: took 5.874809ms for pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.291145   71522 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.297614   71522 pod_ready.go:92] pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:41.297636   71522 pod_ready.go:81] duration metric: took 6.483783ms for pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.297645   71522 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.305307   71522 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:42.305335   71522 pod_ready.go:81] duration metric: took 1.007681338s for pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.305349   71522 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.472190   71522 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:42.472222   71522 pod_ready.go:81] duration metric: took 166.864153ms for pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.472236   71522 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c4n94" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.871756   71522 pod_ready.go:92] pod "kube-proxy-c4n94" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:42.871780   71522 pod_ready.go:81] duration metric: took 399.536375ms for pod "kube-proxy-c4n94" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.871789   71522 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:43.272858   71522 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:43.272895   71522 pod_ready.go:81] duration metric: took 401.098971ms for pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:43.272913   71522 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:45.281019   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
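The pod_ready polling above shows coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, and kube-scheduler all Ready, with metrics-server-569cc877fc-gcjkt still not Ready. A sketch of the same check run by hand, assuming the kubeconfig context carries the profile name:

kubectl --context default-k8s-diff-port-738184 -n kube-system get pods -o wide
kubectl --context default-k8s-diff-port-738184 -n kube-system wait --for=condition=Ready \
  pod/metrics-server-569cc877fc-gcjkt --timeout=6m0s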
	I0717 01:55:42.407813   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:42.408311   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:42.408346   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:42.408218   72905 retry.go:31] will retry after 388.717436ms: waiting for machine to come up
	I0717 01:55:42.798810   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:42.799378   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:42.799411   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:42.799323   72905 retry.go:31] will retry after 661.391346ms: waiting for machine to come up
	I0717 01:55:43.462189   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:43.462654   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:43.462686   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:43.462603   72905 retry.go:31] will retry after 636.142497ms: waiting for machine to come up
	I0717 01:55:44.100416   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:44.100852   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:44.100874   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:44.100808   72905 retry.go:31] will retry after 781.652918ms: waiting for machine to come up
	I0717 01:55:44.883650   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:44.884137   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:44.884170   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:44.884088   72905 retry.go:31] will retry after 1.238608293s: waiting for machine to come up
	I0717 01:55:46.124419   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:46.124911   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:46.124942   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:46.124854   72905 retry.go:31] will retry after 1.169011508s: waiting for machine to come up
	I0717 01:55:47.295202   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:47.295679   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:47.295715   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:47.295632   72905 retry.go:31] will retry after 1.723987128s: waiting for machine to come up
	I0717 01:55:47.004929   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.423090292s)
	I0717 01:55:47.004968   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0717 01:55:47.004990   71603 ssh_runner.go:235] Completed: which crictl: (2.423045276s)
	I0717 01:55:47.005021   71603 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:55:47.005053   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:47.005067   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:55:49.097703   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.092610651s)
	I0717 01:55:49.097747   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0717 01:55:49.097776   71603 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:55:49.097836   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:55:49.097776   71603 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.092700925s)
	I0717 01:55:49.097953   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 01:55:49.098050   71603 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:55:47.781233   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:49.786039   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:49.020883   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:49.021363   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:49.021396   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:49.021279   72905 retry.go:31] will retry after 2.098481296s: waiting for machine to come up
	I0717 01:55:51.121693   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:51.122253   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:51.122282   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:51.122192   72905 retry.go:31] will retry after 2.624839429s: waiting for machine to come up
	I0717 01:55:50.560197   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.462322087s)
	I0717 01:55:50.560292   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0717 01:55:50.560323   71603 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:55:50.560252   71603 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.462175943s)
	I0717 01:55:50.560373   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:55:50.560388   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 01:55:53.630471   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.070071936s)
	I0717 01:55:53.630509   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0717 01:55:53.630529   71603 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:55:53.630604   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:55:52.280585   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:54.779606   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:53.748796   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:53.749348   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:53.749390   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:53.749298   72905 retry.go:31] will retry after 3.47930356s: waiting for machine to come up
	I0717 01:55:57.231901   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.232407   71929 main.go:141] libmachine: (old-k8s-version-901761) Found IP for machine: 192.168.50.44
	I0717 01:55:57.232437   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has current primary IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.232449   71929 main.go:141] libmachine: (old-k8s-version-901761) Reserving static IP address...
	I0717 01:55:57.232880   71929 main.go:141] libmachine: (old-k8s-version-901761) Reserved static IP address: 192.168.50.44
	I0717 01:55:57.232928   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "old-k8s-version-901761", mac: "52:54:00:8f:84:01", ip: "192.168.50.44"} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.232937   71929 main.go:141] libmachine: (old-k8s-version-901761) Waiting for SSH to be available...
	I0717 01:55:57.232952   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | skip adding static IP to network mk-old-k8s-version-901761 - found existing host DHCP lease matching {name: "old-k8s-version-901761", mac: "52:54:00:8f:84:01", ip: "192.168.50.44"}
	I0717 01:55:57.232971   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | Getting to WaitForSSH function...
	I0717 01:55:57.235007   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.235208   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.235242   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.235421   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | Using SSH client type: external
	I0717 01:55:57.235461   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa (-rw-------)
	I0717 01:55:57.235502   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.44 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:55:57.235516   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | About to run SSH command:
	I0717 01:55:57.235530   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | exit 0
	I0717 01:55:57.362619   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | SSH cmd err, output: <nil>: 
	I0717 01:55:57.363106   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetConfigRaw
	I0717 01:55:57.363760   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:55:57.366213   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.366636   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.366666   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.366958   71929 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/config.json ...
	I0717 01:55:57.367165   71929 machine.go:94] provisionDockerMachine start ...
	I0717 01:55:57.367188   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:57.367392   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.370017   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.370354   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.370371   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.370577   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:57.370765   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.370935   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.371084   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:57.371325   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:57.371506   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:57.371518   71929 main.go:141] libmachine: About to run SSH command:
	hostname
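The provisioning session above drives the guest over the external ssh client with the options printed at 01:55:57.235502. The same connection can be opened by hand with the key, user, and address taken from the log (illustrative):

ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
  -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa \
  docker@192.168.50.44 hostname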
	I0717 01:55:58.531714   71146 start.go:364] duration metric: took 53.154741813s to acquireMachinesLock for "embed-certs-940222"
	I0717 01:55:58.531773   71146 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:55:58.531784   71146 fix.go:54] fixHost starting: 
	I0717 01:55:58.532189   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:58.532237   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:58.549026   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43799
	I0717 01:55:58.549491   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:58.550001   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:55:58.550025   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:58.550363   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:58.550536   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:55:58.550707   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:55:58.552236   71146 fix.go:112] recreateIfNeeded on embed-certs-940222: state=Stopped err=<nil>
	I0717 01:55:58.552259   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	W0717 01:55:58.552397   71146 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:55:58.554487   71146 out.go:177] * Restarting existing kvm2 VM for "embed-certs-940222" ...
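fixHost found the embed-certs-940222 machine stopped and restarts the existing domain instead of recreating it. An equivalent manual check and restart through libvirt (illustrative, assuming the libvirt domain is named after the profile, as the kvm2 driver does):

virsh domstate embed-certs-940222   # expected: shut off
virsh start embed-certs-940222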
	I0717 01:55:57.478893   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:55:57.478921   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetMachineName
	I0717 01:55:57.479123   71929 buildroot.go:166] provisioning hostname "old-k8s-version-901761"
	I0717 01:55:57.479142   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetMachineName
	I0717 01:55:57.479330   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.482163   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.482531   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.482579   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.482739   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:57.482937   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.483111   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.483264   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:57.483454   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:57.483632   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:57.483648   71929 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-901761 && echo "old-k8s-version-901761" | sudo tee /etc/hostname
	I0717 01:55:57.613409   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-901761
	
	I0717 01:55:57.613440   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.616228   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.616614   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.616655   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.616860   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:57.617040   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.617222   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.617383   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:57.617574   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:57.617778   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:57.617794   71929 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-901761' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-901761/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-901761' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:55:57.737648   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:55:57.737683   71929 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:55:57.737703   71929 buildroot.go:174] setting up certificates
	I0717 01:55:57.737711   71929 provision.go:84] configureAuth start
	I0717 01:55:57.737721   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetMachineName
	I0717 01:55:57.738028   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:55:57.741089   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.741532   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.741556   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.741741   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.744444   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.744917   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.744947   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.745111   71929 provision.go:143] copyHostCerts
	I0717 01:55:57.745185   71929 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:55:57.745202   71929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:55:57.745273   71929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:55:57.745393   71929 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:55:57.745405   71929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:55:57.745437   71929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:55:57.745517   71929 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:55:57.745527   71929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:55:57.745545   71929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:55:57.745602   71929 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-901761 san=[127.0.0.1 192.168.50.44 localhost minikube old-k8s-version-901761]
	I0717 01:55:57.830872   71929 provision.go:177] copyRemoteCerts
	I0717 01:55:57.830939   71929 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:55:57.830972   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.833463   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.833741   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.833777   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.833887   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:57.834083   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.834250   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:57.834403   71929 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:55:57.918346   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 01:55:57.954250   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:55:57.979770   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0717 01:55:58.005161   71929 provision.go:87] duration metric: took 267.436975ms to configureAuth
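configureAuth regenerated the server certificate with the SANs listed at 01:55:57.745602 and copied it to /etc/docker on the guest. The SANs can be confirmed after the scp above with a quick openssl check (illustrative):

# Inside the guest, after the certificates have been copied
sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'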
	I0717 01:55:58.005193   71929 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:55:58.005412   71929 config.go:182] Loaded profile config "old-k8s-version-901761": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 01:55:58.005493   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.008255   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.008626   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.008663   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.008833   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.009006   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.009170   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.009298   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.009464   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:58.009616   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:58.009639   71929 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:55:58.281081   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:55:58.281112   71929 machine.go:97] duration metric: took 913.933405ms to provisionDockerMachine
	I0717 01:55:58.281121   71929 start.go:293] postStartSetup for "old-k8s-version-901761" (driver="kvm2")
	I0717 01:55:58.281130   71929 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:55:58.281144   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.281497   71929 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:55:58.281533   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.284465   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.284812   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.284840   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.285023   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.285207   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.285441   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.285650   71929 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:55:58.377149   71929 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:55:58.381709   71929 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:55:58.381731   71929 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:55:58.381798   71929 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:55:58.381887   71929 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:55:58.381972   71929 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:55:58.392916   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:58.420677   71929 start.go:296] duration metric: took 139.542186ms for postStartSetup
	I0717 01:55:58.420721   71929 fix.go:56] duration metric: took 18.181102939s for fixHost
	I0717 01:55:58.420745   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.423582   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.423961   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.423989   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.424169   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.424372   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.424557   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.424693   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.424859   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:58.425040   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:58.425053   71929 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:55:58.531563   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181358.508735025
	
	I0717 01:55:58.531585   71929 fix.go:216] guest clock: 1721181358.508735025
	I0717 01:55:58.531594   71929 fix.go:229] Guest: 2024-07-17 01:55:58.508735025 +0000 UTC Remote: 2024-07-17 01:55:58.420726806 +0000 UTC m=+251.057483904 (delta=88.008219ms)
	I0717 01:55:58.531617   71929 fix.go:200] guest clock delta is within tolerance: 88.008219ms
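	(Editor's note: the fix.go lines above compare the guest clock, read over SSH with the `date` call shown below, against the host clock and only resynchronize when the delta exceeds a tolerance; here the delta was ~88ms and passed. A minimal sketch of that check follows; the package and helper name are assumptions.)

	package clockfix

	import "time"

	// clockDeltaOK reports the absolute guest-host clock delta and whether it is
	// within tolerance.
	func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}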
	I0717 01:55:58.531624   71929 start.go:83] releasing machines lock for "old-k8s-version-901761", held for 18.292040224s
	I0717 01:55:58.531655   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.531981   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:55:58.534476   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.534967   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.534996   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.535258   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.535802   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.535990   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.536105   71929 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:55:58.536183   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.536244   71929 ssh_runner.go:195] Run: cat /version.json
	I0717 01:55:58.536275   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.539139   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.539401   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.539534   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.539560   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.539768   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.539815   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.539845   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.539968   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.540000   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.540116   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.540142   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.540243   71929 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:55:58.540332   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.540468   71929 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:55:58.628291   71929 ssh_runner.go:195] Run: systemctl --version
	I0717 01:55:58.656964   71929 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:55:58.806516   71929 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:55:58.815051   71929 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:55:58.815113   71929 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:55:58.838575   71929 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:55:58.838596   71929 start.go:495] detecting cgroup driver to use...
	I0717 01:55:58.838662   71929 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:55:58.855728   71929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:55:58.875221   71929 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:55:58.875285   71929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:55:58.889781   71929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:55:58.903832   71929 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:55:59.026815   71929 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:55:59.173879   71929 docker.go:233] disabling docker service ...
	I0717 01:55:59.173964   71929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:55:59.192906   71929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:55:59.208262   71929 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:55:59.368178   71929 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:55:59.500335   71929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:55:59.514795   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:55:59.535553   71929 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 01:55:59.535631   71929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:59.548304   71929 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:55:59.548376   71929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:59.563066   71929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:59.578452   71929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:59.593447   71929 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:55:59.606239   71929 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:55:59.617051   71929 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:55:59.617118   71929 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:55:59.632601   71929 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:55:59.645034   71929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:59.812343   71929 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:55:59.969366   71929 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:55:59.969444   71929 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:55:59.974286   71929 start.go:563] Will wait 60s for crictl version
	I0717 01:55:59.974335   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:55:59.978280   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:56:00.020399   71929 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:56:00.020489   71929 ssh_runner.go:195] Run: crio --version
	I0717 01:56:00.049811   71929 ssh_runner.go:195] Run: crio --version
	I0717 01:56:00.081952   71929 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
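	(Editor's note: start.go:542/563 above wait up to 60s for the CRI-O socket to exist and for crictl to answer before continuing. A minimal sketch of that wait loop follows; the real flow performs the check over SSH with `stat /var/run/crio/crio.sock`, and the poll interval here is an assumed value.)

	package criwait

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until the CRI socket path exists or the timeout expires,
	// mirroring "Will wait 60s for socket path /var/run/crio/crio.sock".
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
			}
			time.Sleep(500 * time.Millisecond) // assumed poll interval
		}
	}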
	I0717 01:55:55.703286   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.07265838s)
	I0717 01:55:55.703312   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0717 01:55:55.703342   71603 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:55:55.703396   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:55:56.651520   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 01:55:56.651563   71603 cache_images.go:123] Successfully loaded all cached images
	I0717 01:55:56.651569   71603 cache_images.go:92] duration metric: took 14.83641531s to LoadCachedImages
	I0717 01:55:56.651581   71603 kubeadm.go:934] updating node { 192.168.61.174 8443 v1.31.0-beta.0 crio true true} ...
	I0717 01:55:56.651702   71603 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-391501 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-391501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:55:56.651770   71603 ssh_runner.go:195] Run: crio config
	I0717 01:55:56.700129   71603 cni.go:84] Creating CNI manager for ""
	I0717 01:55:56.700152   71603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:55:56.700162   71603 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:55:56.700189   71603 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.174 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-391501 NodeName:no-preload-391501 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:55:56.700315   71603 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-391501"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:55:56.700372   71603 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 01:55:56.711859   71603 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:55:56.711936   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:55:56.721994   71603 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0717 01:55:56.738335   71603 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 01:55:56.755198   71603 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
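	(Editor's note: the "scp memory -->" entries above push generated content, the kubelet drop-in, the kubelet unit, and kubeadm.yaml.new, from memory straight to a path on the guest. A minimal sketch of one way to do that over an existing SSH session by piping stdin into `sudo tee` follows; the use of tee here is an illustrative choice, not necessarily how ssh_runner.go performs its transfer.)

	package sshexec

	import (
		"bytes"
		"fmt"

		"golang.org/x/crypto/ssh"
	)

	// writeRemoteFile streams in-memory data to destPath on the guest.
	func writeRemoteFile(client *ssh.Client, data []byte, destPath string) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(data)
		// tee writes stdin to the destination with root privileges; its echo is discarded.
		cmd := fmt.Sprintf("sudo tee %s > /dev/null", destPath)
		return sess.Run(cmd)
	}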
	I0717 01:55:56.772467   71603 ssh_runner.go:195] Run: grep 192.168.61.174	control-plane.minikube.internal$ /etc/hosts
	I0717 01:55:56.777580   71603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:55:56.792767   71603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:56.913075   71603 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:55:56.930746   71603 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501 for IP: 192.168.61.174
	I0717 01:55:56.930768   71603 certs.go:194] generating shared ca certs ...
	I0717 01:55:56.930783   71603 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:55:56.930929   71603 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:55:56.930968   71603 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:55:56.930978   71603 certs.go:256] generating profile certs ...
	I0717 01:55:56.931050   71603 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/client.key
	I0717 01:55:56.931112   71603 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/apiserver.key.a30174c9
	I0717 01:55:56.931153   71603 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/proxy-client.key
	I0717 01:55:56.931292   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:55:56.931331   71603 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:55:56.931344   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:55:56.931373   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:55:56.931404   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:55:56.931434   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:55:56.931478   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:56.932180   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:55:56.971111   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:55:57.016791   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:55:57.049766   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:55:57.078139   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 01:55:57.109781   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 01:55:57.137912   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:55:57.165141   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 01:55:57.190210   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:55:57.214366   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:55:57.239518   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:55:57.265505   71603 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:55:57.283773   71603 ssh_runner.go:195] Run: openssl version
	I0717 01:55:57.289846   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:55:57.300434   71603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:57.305370   71603 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:57.305456   71603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:57.311765   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:55:57.322769   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:55:57.334122   71603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:55:57.338774   71603 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:55:57.338823   71603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:55:57.344721   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:55:57.356476   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:55:57.368672   71603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:55:57.374055   71603 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:55:57.374107   71603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:55:57.380256   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:55:57.392428   71603 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:55:57.397593   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:55:57.404378   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:55:57.411094   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:55:57.418536   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:55:57.425312   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:55:57.431841   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
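	(Editor's note: the series of `openssl x509 -noout -in ... -checkend 86400` runs above verifies that each control-plane certificate remains valid for at least 24 hours before the cluster restart is attempted. The same check expressed in Go, as a sketch only; the helper name is made up and the window is the 86400s value from the log.)

	package certcheck

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// validFor reports whether the PEM certificate at path is still valid for the
	// given window, the Go equivalent of `openssl x509 -checkend 86400`.
	func validFor(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).Before(cert.NotAfter), nil
	}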
	I0717 01:55:57.438615   71603 kubeadm.go:392] StartCluster: {Name:no-preload-391501 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-391501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:55:57.438696   71603 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:55:57.438782   71603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:55:57.482932   71603 cri.go:89] found id: ""
	I0717 01:55:57.482993   71603 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:55:57.493813   71603 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:55:57.493832   71603 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:55:57.493872   71603 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:55:57.504757   71603 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:55:57.505655   71603 kubeconfig.go:125] found "no-preload-391501" server: "https://192.168.61.174:8443"
	I0717 01:55:57.507634   71603 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:55:57.517990   71603 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.174
	I0717 01:55:57.518025   71603 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:55:57.518038   71603 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:55:57.518090   71603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:55:57.557504   71603 cri.go:89] found id: ""
	I0717 01:55:57.557588   71603 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:55:57.574074   71603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:55:57.583703   71603 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:55:57.583724   71603 kubeadm.go:157] found existing configuration files:
	
	I0717 01:55:57.583768   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:55:57.593924   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:55:57.593992   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:55:57.606945   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:55:57.616803   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:55:57.616847   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:55:57.627215   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:55:57.637121   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:55:57.637179   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:55:57.646291   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:55:57.655314   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:55:57.655372   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:55:57.666994   71603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:55:57.677582   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:57.798148   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:59.316598   71603 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.518419797s)
	I0717 01:55:59.316629   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:59.581666   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:59.675003   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:59.748682   71603 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:55:59.748771   71603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:55:56.781465   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:59.280394   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:00.083384   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:56:00.086085   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:56:00.086454   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:56:00.086494   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:56:00.086710   71929 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 01:56:00.091322   71929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
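	(Editor's note: the bash one-liner above, also used earlier for control-plane.minikube.internal, removes any stale entry for the name and replaces /etc/hosts via a temp file. The same idea expressed in Go as a sketch; the real step runs as a shell command on the guest, and the helper name is made up.)

	package hostsfile

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry drops any existing line ending in "\t<name>" and appends
	// "<ip>\t<name>", matching the grep -v / echo / cp pattern in the log.
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // stale entry for this name
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		tmp := path + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			return err
		}
		return os.Rename(tmp, path)
	}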
	I0717 01:56:00.104102   71929 kubeadm.go:883] updating cluster {Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:56:00.104237   71929 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 01:56:00.104309   71929 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:56:00.152445   71929 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 01:56:00.152537   71929 ssh_runner.go:195] Run: which lz4
	I0717 01:56:00.156760   71929 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 01:56:00.161123   71929 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:56:00.161149   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 01:56:02.031804   71929 crio.go:462] duration metric: took 1.875087246s to copy over tarball
	I0717 01:56:02.031904   71929 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:55:58.556014   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Start
	I0717 01:55:58.556171   71146 main.go:141] libmachine: (embed-certs-940222) Ensuring networks are active...
	I0717 01:55:58.556866   71146 main.go:141] libmachine: (embed-certs-940222) Ensuring network default is active
	I0717 01:55:58.557237   71146 main.go:141] libmachine: (embed-certs-940222) Ensuring network mk-embed-certs-940222 is active
	I0717 01:55:58.557686   71146 main.go:141] libmachine: (embed-certs-940222) Getting domain xml...
	I0717 01:55:58.558375   71146 main.go:141] libmachine: (embed-certs-940222) Creating domain...
	I0717 01:55:59.917419   71146 main.go:141] libmachine: (embed-certs-940222) Waiting to get IP...
	I0717 01:55:59.918379   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:55:59.918849   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:55:59.918908   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:55:59.918833   73097 retry.go:31] will retry after 248.560075ms: waiting for machine to come up
	I0717 01:56:00.169337   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:00.169877   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:00.169898   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:00.169837   73097 retry.go:31] will retry after 380.159418ms: waiting for machine to come up
	I0717 01:56:00.551472   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:00.552033   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:00.552076   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:00.551987   73097 retry.go:31] will retry after 439.990107ms: waiting for machine to come up
	I0717 01:56:00.993776   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:00.994337   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:00.994351   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:00.994319   73097 retry.go:31] will retry after 415.462036ms: waiting for machine to come up
	I0717 01:56:01.412114   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:01.412508   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:01.412535   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:01.412484   73097 retry.go:31] will retry after 660.852153ms: waiting for machine to come up
	I0717 01:56:02.075095   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:02.075519   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:02.075541   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:02.075498   73097 retry.go:31] will retry after 788.200532ms: waiting for machine to come up
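	(Editor's note: the retry.go lines above show libmachine polling the libvirt DHCP leases for the new embed-certs VM, sleeping for slightly increasing, jittered intervals until an address appears. A minimal sketch of that retry shape follows; the lookup function and the backoff parameters are placeholders, not libmachine's actual values.)

	package ipwait

	import (
		"errors"
		"math/rand"
		"time"
	)

	// waitForIP retries lookup with a growing, jittered delay until it returns a
	// non-empty address or the deadline passes.
	func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond // assumed starting delay
		for {
			ip, err := lookup()
			if err == nil && ip != "" {
				return ip, nil
			}
			if time.Now().After(deadline) {
				return "", errors.New("timed out waiting for machine to come up")
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			time.Sleep(delay + jitter)
			delay += delay / 2 // grow the delay, similar to the 248ms -> 380ms -> 440ms pattern above
		}
	}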
	I0717 01:56:00.249300   71603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:00.749610   71603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:00.823943   71603 api_server.go:72] duration metric: took 1.075254107s to wait for apiserver process to appear ...
	I0717 01:56:00.823980   71603 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:56:00.824006   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:00.825286   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": dial tcp 192.168.61.174:8443: connect: connection refused
	I0717 01:56:01.325032   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
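	(Editor's note: api_server.go above begins polling https://192.168.61.174:8443/healthz roughly every 500ms; a "connection refused" while the static pods come up is treated as "not ready yet" rather than a failure. A minimal sketch of such a poll loop follows; TLS verification is skipped purely to keep the sketch short, and the interval and timeout are assumptions.)

	package apiwait

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 OK
	// or the timeout expires. Connection errors simply trigger another attempt.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
			},
		}
		deadline := time.Now().Add(timeout)
		for {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("apiserver never became healthy at %s", url)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}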
	I0717 01:56:01.281044   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:03.281329   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:05.092637   71929 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.060698331s)
	I0717 01:56:05.092674   71929 crio.go:469] duration metric: took 3.060839356s to extract the tarball
	I0717 01:56:05.092682   71929 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 01:56:05.135461   71929 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:56:05.170789   71929 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 01:56:05.170814   71929 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 01:56:05.170853   71929 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:56:05.170884   71929 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.170908   71929 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.170961   71929 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 01:56:05.171077   71929 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.171126   71929 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.171138   71929 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.171462   71929 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.172182   71929 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 01:56:05.172224   71929 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.172251   71929 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:56:05.172296   71929 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.172362   71929 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.172415   71929 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.172449   71929 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.172251   71929 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.372794   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.415131   71929 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 01:56:05.415181   71929 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.415231   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.419179   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.446530   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 01:56:05.452583   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 01:56:05.485692   71929 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 01:56:05.485734   71929 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 01:56:05.485780   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.486154   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.487346   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.489408   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.490486   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 01:56:05.494929   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.499420   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.593505   71929 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 01:56:05.593587   71929 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.593638   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.632564   71929 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 01:56:05.632615   71929 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.632667   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.657745   71929 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 01:56:05.657792   71929 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.657852   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.657863   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 01:56:05.657908   71929 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 01:56:05.657943   71929 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.657958   71929 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 01:56:05.657976   71929 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.657980   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.658004   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.658037   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.658077   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.671679   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.671708   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.736572   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 01:56:05.736599   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 01:56:05.736671   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.758178   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 01:56:05.758210   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 01:56:05.787948   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 01:56:06.882199   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:56:07.025117   71929 cache_images.go:92] duration metric: took 1.854284265s to LoadCachedImages
	W0717 01:56:07.025227   71929 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
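The cache pass above is a simple check-and-reload loop: read the image ID out of the runtime with podman, compare it to the pinned digest, untag a mismatch with crictl, and reload from the on-disk cache. A minimal shell sketch of that pattern (the podman load step and the cache path are assumptions; in this run the reload fails because the cached kube-proxy tarball is missing, hence the warning above):
	IMG=registry.k8s.io/kube-proxy:v1.20.0
	WANT=10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc
	GOT=$(sudo podman image inspect --format '{{.Id}}' "$IMG" 2>/dev/null)
	if [ "$GOT" != "$WANT" ]; then
	  sudo /usr/bin/crictl rmi "$IMG"    # drop the stale/foreign tag, as the log does
	  sudo podman load -i ~/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0   # assumed reload step
	fi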
	I0717 01:56:07.025245   71929 kubeadm.go:934] updating node { 192.168.50.44 8443 v1.20.0 crio true true} ...
	I0717 01:56:07.025378   71929 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-901761 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.44
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
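The [Unit]/[Service]/[Install] text above is the kubelet systemd drop-in that the scp and systemctl lines further down install and activate. Condensed into the equivalent steps (the local 10-kubeadm.conf file here is hypothetical; the real flow streams the content over SSH):
	sudo install -D -m 0644 10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	sudo systemctl daemon-reload
	sudo systemctl start kubelet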
	I0717 01:56:07.025465   71929 ssh_runner.go:195] Run: crio config
	I0717 01:56:07.081517   71929 cni.go:84] Creating CNI manager for ""
	I0717 01:56:07.081543   71929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:56:07.081560   71929 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:56:07.081584   71929 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.44 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-901761 NodeName:old-k8s-version-901761 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.44"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.44 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 01:56:07.081749   71929 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.44
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-901761"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.44
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.44"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:56:07.081833   71929 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 01:56:07.092233   71929 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:56:07.092335   71929 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:56:07.102086   71929 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0717 01:56:07.121538   71929 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:56:07.139112   71929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
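The kubeadm.yaml.new staged here is the multi-document config printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). One way to cross-check which control-plane images that config implies, matching the cache list earlier, would be (assuming the kubeadm binary staged under /var/lib/minikube/binaries):
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	  kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml.new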
	I0717 01:56:07.157397   71929 ssh_runner.go:195] Run: grep 192.168.50.44	control-plane.minikube.internal$ /etc/hosts
	I0717 01:56:07.161818   71929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.44	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
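The one-liner above pins control-plane.minikube.internal to the node IP by rewriting /etc/hosts through a temp file. Spelled out, the same steps are (purely illustrative):
	grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$    # keep everything except an old pin
	printf '192.168.50.44\tcontrol-plane.minikube.internal\n' >> /tmp/h.$$  # append the fresh entry
	sudo cp /tmp/h.$$ /etc/hosts                                            # swap the file in as root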
	I0717 01:56:07.174723   71929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:56:07.307484   71929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:56:07.325948   71929 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761 for IP: 192.168.50.44
	I0717 01:56:07.325974   71929 certs.go:194] generating shared ca certs ...
	I0717 01:56:07.326002   71929 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:07.326164   71929 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:56:07.326216   71929 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:56:07.326229   71929 certs.go:256] generating profile certs ...
	I0717 01:56:07.326351   71929 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/client.key
	I0717 01:56:07.326416   71929 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.key.f41162e5
	I0717 01:56:07.326461   71929 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/proxy-client.key
	I0717 01:56:07.326630   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:56:07.326668   71929 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:56:07.326681   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:56:07.326700   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:56:07.326724   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:56:07.326767   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:56:07.326828   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:56:07.327702   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:56:07.377671   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:56:02.864980   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:02.865620   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:02.865656   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:02.865503   73097 retry.go:31] will retry after 1.00461953s: waiting for machine to come up
	I0717 01:56:03.871702   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:03.872187   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:03.872215   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:03.872133   73097 retry.go:31] will retry after 1.15731846s: waiting for machine to come up
	I0717 01:56:05.030767   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:05.031263   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:05.031285   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:05.031209   73097 retry.go:31] will retry after 1.704165162s: waiting for machine to come up
	I0717 01:56:06.737975   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:06.738337   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:06.738386   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:06.738307   73097 retry.go:31] will retry after 2.014062128s: waiting for machine to come up
	I0717 01:56:06.326066   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:06.326112   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:05.780615   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:08.281127   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:07.413171   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:56:07.443671   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:56:07.482883   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 01:56:07.527280   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:56:07.571200   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:56:07.612296   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 01:56:07.638012   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:56:07.662018   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:56:07.688033   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:56:07.721827   71929 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:56:07.741517   71929 ssh_runner.go:195] Run: openssl version
	I0717 01:56:07.747466   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:56:07.758615   71929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:56:07.763382   71929 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:56:07.763439   71929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:56:07.769358   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:56:07.781802   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:56:07.792763   71929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:56:07.797629   71929 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:56:07.797681   71929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:56:07.803879   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:56:07.815479   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:56:07.828292   71929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:07.832769   71929 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:07.832829   71929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:07.838958   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
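Each PEM copied to /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject-hash name, which is where the 51391683.0, 3ec20f2e.0 and b5213941.0 names above come from. The pattern, shown for the minikubeCA example:
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # b5213941.0 in this run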
	I0717 01:56:07.850108   71929 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:56:07.854758   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:56:07.860661   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:56:07.866484   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:56:07.872302   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:56:07.878252   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:56:07.884275   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
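The -checkend 86400 runs above make openssl exit non-zero if a certificate expires within the next 24 hours; presumably the restart path uses that to decide whether a cert needs regenerating. A minimal sketch of one such check:
	if ! sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	  echo "apiserver-kubelet-client.crt expires within 24h"   # what happens next is not shown in this log
	fi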
	I0717 01:56:07.890148   71929 kubeadm.go:392] StartCluster: {Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:56:07.890264   71929 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:56:07.890343   71929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:56:07.930081   71929 cri.go:89] found id: ""
	I0717 01:56:07.930153   71929 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:56:07.941371   71929 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:56:07.941396   71929 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:56:07.941445   71929 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:56:07.955229   71929 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:56:07.957263   71929 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-901761" does not appear in /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:56:07.959002   71929 kubeconfig.go:62] /home/jenkins/minikube-integration/19264-3908/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-901761" cluster setting kubeconfig missing "old-k8s-version-901761" context setting]
	I0717 01:56:07.960384   71929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:07.962748   71929 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:56:07.973815   71929 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.44
	I0717 01:56:07.973851   71929 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:56:07.973864   71929 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:56:07.973933   71929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:56:08.020169   71929 cri.go:89] found id: ""
	I0717 01:56:08.020247   71929 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:56:08.038015   71929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:56:08.049272   71929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:56:08.049294   71929 kubeadm.go:157] found existing configuration files:
	
	I0717 01:56:08.049336   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:56:08.058953   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:56:08.059025   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:56:08.069034   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:56:08.078748   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:56:08.078817   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:56:08.089660   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:56:08.099521   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:56:08.099583   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:56:08.109831   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:56:08.120340   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:56:08.120400   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:56:08.130884   71929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:56:08.141008   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:08.275189   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:09.006841   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:09.255401   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:09.376659   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:09.475840   71929 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:56:09.475937   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:09.976926   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:10.476192   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:10.976705   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:11.476386   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:11.976459   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:08.753835   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:08.754316   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:08.754347   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:08.754264   73097 retry.go:31] will retry after 2.005810517s: waiting for machine to come up
	I0717 01:56:10.761600   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:10.762022   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:10.762053   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:10.761980   73097 retry.go:31] will retry after 2.631438855s: waiting for machine to come up
	I0717 01:56:11.327297   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:11.327348   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:10.779534   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:13.278417   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:15.279200   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:12.476819   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:12.976633   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:13.476076   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:13.976279   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:14.476885   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:14.976972   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:15.476823   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:15.976917   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:16.476765   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:16.976609   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
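The repeated pgrep lines are a poll (roughly every 500ms, judging by the timestamps) waiting for the kube-apiserver process to appear after the control-plane phase. An equivalent sketch:
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 0.5
	done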
	I0717 01:56:13.395592   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:13.395949   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:13.395991   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:13.395905   73097 retry.go:31] will retry after 3.565162998s: waiting for machine to come up
	I0717 01:56:16.964948   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:16.965424   71146 main.go:141] libmachine: (embed-certs-940222) Found IP for machine: 192.168.72.225
	I0717 01:56:16.965455   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has current primary IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:16.965465   71146 main.go:141] libmachine: (embed-certs-940222) Reserving static IP address...
	I0717 01:56:16.966065   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "embed-certs-940222", mac: "52:54:00:78:d5:92", ip: "192.168.72.225"} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:16.966092   71146 main.go:141] libmachine: (embed-certs-940222) DBG | skip adding static IP to network mk-embed-certs-940222 - found existing host DHCP lease matching {name: "embed-certs-940222", mac: "52:54:00:78:d5:92", ip: "192.168.72.225"}
	I0717 01:56:16.966107   71146 main.go:141] libmachine: (embed-certs-940222) Reserved static IP address: 192.168.72.225
	I0717 01:56:16.966122   71146 main.go:141] libmachine: (embed-certs-940222) Waiting for SSH to be available...
	I0717 01:56:16.966150   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Getting to WaitForSSH function...
	I0717 01:56:16.968287   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:16.968642   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:16.968688   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:16.968758   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Using SSH client type: external
	I0717 01:56:16.968782   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa (-rw-------)
	I0717 01:56:16.968842   71146 main.go:141] libmachine: (embed-certs-940222) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:56:16.968872   71146 main.go:141] libmachine: (embed-certs-940222) DBG | About to run SSH command:
	I0717 01:56:16.968888   71146 main.go:141] libmachine: (embed-certs-940222) DBG | exit 0
	I0717 01:56:17.090641   71146 main.go:141] libmachine: (embed-certs-940222) DBG | SSH cmd err, output: <nil>: 
	I0717 01:56:17.091120   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetConfigRaw
	I0717 01:56:17.091720   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetIP
	I0717 01:56:17.094205   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.094541   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.094592   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.094810   71146 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/config.json ...
	I0717 01:56:17.095001   71146 machine.go:94] provisionDockerMachine start ...
	I0717 01:56:17.095022   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:17.095223   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.097395   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.097680   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.097707   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.097848   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.098021   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.098170   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.098311   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.098491   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:17.098683   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:17.098695   71146 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:56:17.203054   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:56:17.203080   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:56:17.203364   71146 buildroot.go:166] provisioning hostname "embed-certs-940222"
	I0717 01:56:17.203402   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:56:17.203575   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.206404   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.206826   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.206868   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.207076   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.207282   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.207471   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.207611   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.207793   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:17.207985   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:17.207997   71146 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-940222 && echo "embed-certs-940222" | sudo tee /etc/hostname
	I0717 01:56:17.326485   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-940222
	
	I0717 01:56:17.326512   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.329226   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.329629   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.329659   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.329834   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.329996   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.330148   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.330265   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.330417   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:17.330619   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:17.330642   71146 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-940222' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-940222/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-940222' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:56:17.439258   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:56:17.439285   71146 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:56:17.439315   71146 buildroot.go:174] setting up certificates
	I0717 01:56:17.439324   71146 provision.go:84] configureAuth start
	I0717 01:56:17.439332   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:56:17.439656   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetIP
	I0717 01:56:17.442348   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.442765   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.442796   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.442976   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.445418   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.445767   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.445803   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.446000   71146 provision.go:143] copyHostCerts
	I0717 01:56:17.446081   71146 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:56:17.446098   71146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:56:17.446171   71146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:56:17.446265   71146 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:56:17.446272   71146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:56:17.446292   71146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:56:17.446346   71146 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:56:17.446353   71146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:56:17.446370   71146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:56:17.446418   71146 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.embed-certs-940222 san=[127.0.0.1 192.168.72.225 embed-certs-940222 localhost minikube]
	I0717 01:56:17.578140   71146 provision.go:177] copyRemoteCerts
	I0717 01:56:17.578195   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:56:17.578221   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.581141   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.581432   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.581457   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.581697   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.581892   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.582038   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.582219   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:17.664867   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:56:17.691053   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 01:56:17.715816   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 01:56:17.742153   71146 provision.go:87] duration metric: took 302.817653ms to configureAuth
	I0717 01:56:17.742180   71146 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:56:17.742405   71146 config.go:182] Loaded profile config "embed-certs-940222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:56:17.742486   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.745102   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.745369   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.745398   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.745608   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.745820   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.746019   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.746209   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.746510   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:17.746738   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:17.746761   71146 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:56:18.017395   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:56:18.017420   71146 machine.go:97] duration metric: took 922.405002ms to provisionDockerMachine
	I0717 01:56:18.017433   71146 start.go:293] postStartSetup for "embed-certs-940222" (driver="kvm2")
	I0717 01:56:18.017449   71146 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:56:18.017469   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.017817   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:56:18.017846   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:18.020599   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.021051   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.021081   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.021228   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:18.021410   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.021556   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:18.021660   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:18.101432   71146 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:56:18.105722   71146 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:56:18.105742   71146 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:56:18.105797   71146 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:56:18.105866   71146 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:56:18.105944   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:56:18.115228   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:56:18.139857   71146 start.go:296] duration metric: took 122.411322ms for postStartSetup
	I0717 01:56:18.139924   71146 fix.go:56] duration metric: took 19.608111597s for fixHost
	I0717 01:56:18.139951   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:18.142466   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.142865   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.142886   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.143098   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:18.143262   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.143444   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.143662   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:18.143852   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:18.144022   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:18.144033   71146 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:56:18.243604   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181378.218663213
	
	I0717 01:56:18.243635   71146 fix.go:216] guest clock: 1721181378.218663213
	I0717 01:56:18.243644   71146 fix.go:229] Guest: 2024-07-17 01:56:18.218663213 +0000 UTC Remote: 2024-07-17 01:56:18.139933424 +0000 UTC m=+355.354069584 (delta=78.729789ms)
	I0717 01:56:18.243662   71146 fix.go:200] guest clock delta is within tolerance: 78.729789ms
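Editor's note on the fix.go lines above: the guest clock check simply compares the instant reported by the VM (via `date` over SSH) against the host's wall clock and only resynchronizes when the difference exceeds a tolerance. A minimal illustrative sketch of that comparison follows; the helper name and the 2-second tolerance are assumptions for illustration, not minikube's actual implementation.

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance reports whether the guest clock is close enough
// to the host clock that no resync is needed. Both values are compared as
// absolute wall-clock instants; the tolerance here is illustrative.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Timestamps taken from the log lines above.
	guest := time.Date(2024, 7, 17, 1, 56, 18, 218663213, time.UTC)
	host := time.Date(2024, 7, 17, 1, 56, 18, 139933424, time.UTC)

	delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
}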
	I0717 01:56:18.243667   71146 start.go:83] releasing machines lock for "embed-certs-940222", held for 19.711916707s
	I0717 01:56:18.243684   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.243952   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetIP
	I0717 01:56:18.246454   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.246881   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.246907   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.247135   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.247618   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.247828   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.247919   71146 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:56:18.247958   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:18.248050   71146 ssh_runner.go:195] Run: cat /version.json
	I0717 01:56:18.248074   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:18.250520   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.250914   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.250952   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.250973   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.251222   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:18.251403   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.251463   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.251495   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.251575   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:18.251668   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:18.251747   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:18.251817   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.251975   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:18.252103   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:18.351600   71146 ssh_runner.go:195] Run: systemctl --version
	I0717 01:56:18.357586   71146 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:56:18.503767   71146 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:56:18.511637   71146 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:56:18.511724   71146 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:56:18.530209   71146 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:56:18.530235   71146 start.go:495] detecting cgroup driver to use...
	I0717 01:56:18.530303   71146 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:56:18.551740   71146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:56:18.566975   71146 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:56:18.567044   71146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:56:18.585100   71146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:56:18.601151   71146 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:56:18.735644   71146 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:56:18.895436   71146 docker.go:233] disabling docker service ...
	I0717 01:56:18.895505   71146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:56:18.910354   71146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:56:18.922999   71146 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:56:19.065365   71146 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:56:19.179337   71146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:56:19.194454   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:56:19.213281   71146 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 01:56:19.213339   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.223531   71146 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:56:19.223594   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.233691   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.243695   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.255192   71146 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:56:19.266082   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.276861   71146 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.295903   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.306114   71146 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:56:19.316226   71146 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:56:19.316275   71146 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:56:19.329402   71146 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
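Editor's note: the netfilter probe above follows a simple fallback: try to read the bridge-nf-call-iptables sysctl, and if the key is missing, load the br_netfilter module, then enable IPv4 forwarding. A hedged, standalone sketch of that sequence (run directly on the guest rather than over SSH, with no error handling beyond the fallback):

package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes a command and streams its output, returning any error.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// If the sysctl key is absent, the bridge netfilter module is not loaded yet.
	if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		log.Printf("bridge-nf-call-iptables not available, loading br_netfilter: %v", err)
		if err := run("modprobe", "br_netfilter"); err != nil {
			log.Fatalf("modprobe br_netfilter failed: %v", err)
		}
	}

	// Enable IPv4 forwarding, mirroring `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		log.Fatalf("enabling ip_forward failed: %v", err)
	}
}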
	I0717 01:56:19.340622   71146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:56:19.456624   71146 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:56:19.605945   71146 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:56:19.606051   71146 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:56:19.611067   71146 start.go:563] Will wait 60s for crictl version
	I0717 01:56:19.611116   71146 ssh_runner.go:195] Run: which crictl
	I0717 01:56:19.615065   71146 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:56:19.662925   71146 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:56:19.662989   71146 ssh_runner.go:195] Run: crio --version
	I0717 01:56:19.693240   71146 ssh_runner.go:195] Run: crio --version
	I0717 01:56:19.722332   71146 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 01:56:16.328318   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:16.328371   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:17.780821   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:19.780921   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:17.476562   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:17.976663   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:18.476958   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:18.976722   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:19.476641   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:19.976079   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:20.476899   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:20.976553   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:21.476087   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:21.976659   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
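Editor's note: the repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines from process 71929 are a poll loop: the runner re-executes the same pgrep roughly every 500ms until a kube-apiserver process appears or a deadline expires. A minimal local sketch of such a loop (pattern taken from the log; interval and timeout are illustrative assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until a process matching pattern exists or the
// timeout elapses. pgrep exits non-zero when nothing matches, so a nil error
// means the process was found.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("no process matching %q after %v", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver process is up")
}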
	I0717 01:56:19.723930   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetIP
	I0717 01:56:19.726730   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:19.727084   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:19.727107   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:19.727314   71146 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 01:56:19.731814   71146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:56:19.745514   71146 kubeadm.go:883] updating cluster {Name:embed-certs-940222 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.2 ClusterName:embed-certs-940222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.225 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:56:19.745622   71146 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:56:19.745677   71146 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:56:19.782922   71146 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 01:56:19.782988   71146 ssh_runner.go:195] Run: which lz4
	I0717 01:56:19.786946   71146 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 01:56:19.791298   71146 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:56:19.791323   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 01:56:21.230910   71146 crio.go:462] duration metric: took 1.443984707s to copy over tarball
	I0717 01:56:21.231003   71146 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:56:21.328607   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:21.328654   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:21.345118   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": read tcp 192.168.61.1:36190->192.168.61.174:8443: read: connection reset by peer
	I0717 01:56:21.824753   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:21.825500   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": dial tcp 192.168.61.174:8443: connect: connection refused
	I0717 01:56:22.325079   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:22.280465   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:24.779729   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:22.475994   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:22.976928   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:23.476906   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:23.975980   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:24.476208   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:24.976090   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:25.476425   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:25.976072   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:26.476991   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:26.976180   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:23.517174   71146 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.286133857s)
	I0717 01:56:23.517200   71146 crio.go:469] duration metric: took 2.286263798s to extract the tarball
	I0717 01:56:23.517210   71146 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 01:56:23.554084   71146 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:56:23.603831   71146 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:56:23.603861   71146 cache_images.go:84] Images are preloaded, skipping loading
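Editor's note: the preload steps above boil down to: ask crictl for the images already present, and if the expected kube-apiserver image is missing, copy the preloaded tarball over and extract it into /var with lz4, then re-check. A rough sketch of the local extract-and-verify half (paths and tar flags are the ones in the log; the helper name and plain-text image matching are assumptions, the real code parses `--output json`):

package main

import (
	"bytes"
	"log"
	"os/exec"
	"strings"
)

// hasImage reports whether `crictl images` already lists the wanted repository.
// Matching the repository name in the plain table output is enough for a sketch.
func hasImage(repo string) bool {
	out, err := exec.Command("sudo", "crictl", "images").Output()
	if err != nil {
		return false
	}
	return bytes.Contains(out, []byte(repo))
}

func main() {
	if hasImage("registry.k8s.io/kube-apiserver") {
		log.Println("all images are preloaded, skipping extraction")
		return
	}
	// Extract the preloaded tarball into /var, preserving xattrs, as in the log.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extracting preload failed: %v\n%s", err, strings.TrimSpace(string(out)))
	}
	log.Println("preloaded images extracted; re-run the crictl check to confirm")
}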
	I0717 01:56:23.603871   71146 kubeadm.go:934] updating node { 192.168.72.225 8443 v1.30.2 crio true true} ...
	I0717 01:56:23.604004   71146 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-940222 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-940222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:56:23.604087   71146 ssh_runner.go:195] Run: crio config
	I0717 01:56:23.658775   71146 cni.go:84] Creating CNI manager for ""
	I0717 01:56:23.658794   71146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:56:23.658803   71146 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:56:23.658826   71146 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.225 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-940222 NodeName:embed-certs-940222 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:56:23.659007   71146 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-940222"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.225
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.225"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:56:23.659092   71146 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 01:56:23.669971   71146 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:56:23.670042   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:56:23.680949   71146 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0717 01:56:23.698917   71146 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:56:23.716218   71146 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0717 01:56:23.733971   71146 ssh_runner.go:195] Run: grep 192.168.72.225	control-plane.minikube.internal$ /etc/hosts
	I0717 01:56:23.738112   71146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.225	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:56:23.750915   71146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:56:23.894690   71146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:56:23.913418   71146 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222 for IP: 192.168.72.225
	I0717 01:56:23.913440   71146 certs.go:194] generating shared ca certs ...
	I0717 01:56:23.913456   71146 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:23.913630   71146 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:56:23.913703   71146 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:56:23.913729   71146 certs.go:256] generating profile certs ...
	I0717 01:56:23.913856   71146 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/client.key
	I0717 01:56:23.913926   71146 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/apiserver.key.d13a776d
	I0717 01:56:23.913968   71146 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/proxy-client.key
	I0717 01:56:23.914081   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:56:23.914123   71146 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:56:23.914134   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:56:23.914161   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:56:23.914188   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:56:23.914214   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:56:23.914256   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:56:23.914925   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:56:23.961346   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:56:24.006765   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:56:24.036852   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:56:24.064984   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0717 01:56:24.090778   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:56:24.116146   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:56:24.142429   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 01:56:24.168427   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:56:24.193691   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:56:24.218852   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:56:24.242932   71146 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:56:24.261434   71146 ssh_runner.go:195] Run: openssl version
	I0717 01:56:24.267358   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:56:24.280319   71146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:56:24.285286   71146 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:56:24.285358   71146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:56:24.291896   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:56:24.304027   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:56:24.315542   71146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:56:24.320212   71146 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:56:24.320283   71146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:56:24.326123   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:56:24.339982   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:56:24.352301   71146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:24.357023   71146 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:24.357078   71146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:24.363112   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:56:24.375910   71146 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:56:24.380986   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:56:24.387276   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:56:24.393718   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:56:24.400367   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:56:24.406600   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:56:24.413161   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
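Editor's note: each `openssl x509 -noout -in ... -checkend 86400` above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit means it expires within that window and would need regeneration. The same check can be done natively with crypto/x509, roughly as follows (file list and window are from the log; the function name is an assumption):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// before now+window, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		soon, err := expiresWithin(c, 24*time.Hour)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", c, soon)
	}
}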
	I0717 01:56:24.420455   71146 kubeadm.go:392] StartCluster: {Name:embed-certs-940222 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:embed-certs-940222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.225 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:56:24.420578   71146 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:56:24.420643   71146 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:56:24.460702   71146 cri.go:89] found id: ""
	I0717 01:56:24.460792   71146 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:56:24.472047   71146 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:56:24.472064   71146 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:56:24.472105   71146 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:56:24.483092   71146 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:56:24.484146   71146 kubeconfig.go:125] found "embed-certs-940222" server: "https://192.168.72.225:8443"
	I0717 01:56:24.486112   71146 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:56:24.497462   71146 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.225
	I0717 01:56:24.497496   71146 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:56:24.497511   71146 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:56:24.497571   71146 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:56:24.541423   71146 cri.go:89] found id: ""
	I0717 01:56:24.541486   71146 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:56:24.563272   71146 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:56:24.574859   71146 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:56:24.574883   71146 kubeadm.go:157] found existing configuration files:
	
	I0717 01:56:24.574930   71146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:56:24.584960   71146 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:56:24.585022   71146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:56:24.595950   71146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:56:24.605686   71146 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:56:24.605775   71146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:56:24.616191   71146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:56:24.625954   71146 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:56:24.626009   71146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:56:24.636254   71146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:56:24.648853   71146 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:56:24.648961   71146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
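Editor's note: the grep-then-rm pattern above removes any leftover kubeconfig that does not already point at control-plane.minikube.internal:8443, so kubeadm regenerates it in the init phases that follow. A condensed sketch of that cleanup (file list and endpoint taken from the log; run locally rather than over SSH):

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range confs {
		data, err := os.ReadFile(path)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			// Missing file or wrong endpoint: remove it so kubeadm regenerates it.
			os.Remove(path)
			fmt.Printf("removed stale or missing config: %s\n", path)
			continue
		}
		fmt.Printf("keeping %s\n", path)
	}
}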
	I0717 01:56:24.660491   71146 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:56:24.675329   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:24.795437   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:25.895383   71146 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.099913319s)
	I0717 01:56:25.895411   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:26.116274   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:26.286149   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:26.355208   71146 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:56:26.355296   71146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:26.855578   71146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:27.355880   71146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:27.371616   71146 api_server.go:72] duration metric: took 1.016410291s to wait for apiserver process to appear ...
	I0717 01:56:27.371642   71146 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:56:27.371671   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:27.325875   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:27.325920   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:26.780264   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:29.279376   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:29.836783   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:29.836811   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:29.836823   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:29.883657   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:29.883684   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:29.883695   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:29.895244   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:29.895270   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:30.371799   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:30.375903   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:56:30.375926   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:56:30.872627   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:30.876799   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:56:30.876830   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:56:31.372402   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:31.376723   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 200:
	ok
	I0717 01:56:31.382638   71146 api_server.go:141] control plane version: v1.30.2
	I0717 01:56:31.382663   71146 api_server.go:131] duration metric: took 4.011014381s to wait for apiserver health ...
	I0717 01:56:31.382672   71146 cni.go:84] Creating CNI manager for ""
	I0717 01:56:31.382679   71146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:56:31.384436   71146 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
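Editor's note: the healthz sequence above shows the usual progression while an apiserver restarts: 403 Forbidden for the anonymous user while RBAC bootstrap roles are still being created, then 500 with `[-]poststarthook/rbac/bootstrap-roles failed`, and finally a plain 200 `ok`. A rough sketch of that poll (insecure TLS because the checker does not present the cluster CA; URL, interval and timeout are illustrative):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the timeout expires. 403 and 500 responses are expected while the
// control plane is still bootstrapping, so they are simply retried.
func waitForHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only: no CA verification
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver not healthy after %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.225:8443/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}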
	I0717 01:56:27.476313   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:27.976700   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:28.476585   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:28.976008   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:29.477040   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:29.976892   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:30.476912   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:30.976626   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:31.476786   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:31.976148   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:31.385974   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:56:31.396977   71146 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 01:56:31.415740   71146 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:56:31.425268   71146 system_pods.go:59] 8 kube-system pods found
	I0717 01:56:31.425306   71146 system_pods.go:61] "coredns-7db6d8ff4d-wcw97" [0dd50538-f54d-43f1-bd8a-b9d3131c13f7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:56:31.425313   71146 system_pods.go:61] "etcd-embed-certs-940222" [80a0a87b-4b27-4940-b86b-6fda4a9c5168] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 01:56:31.425320   71146 system_pods.go:61] "kube-apiserver-embed-certs-940222" [566417b4-efef-46b2-8826-e15b8559e35f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 01:56:31.425328   71146 system_pods.go:61] "kube-controller-manager-embed-certs-940222" [0ec7f574-5ec3-401c-85a9-b9a9f6b7b979] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 01:56:31.425332   71146 system_pods.go:61] "kube-proxy-l58xk" [feae4e89-4900-4399-bd06-7d179280667d] Running
	I0717 01:56:31.425337   71146 system_pods.go:61] "kube-scheduler-embed-certs-940222" [d0e73061-6d3b-41fc-b2ed-e8c45e204d6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 01:56:31.425344   71146 system_pods.go:61] "metrics-server-569cc877fc-rhp7b" [07ffb1fa-240e-4c40-9ce4-93a1b51e179b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:56:31.425350   71146 system_pods.go:61] "storage-provisioner" [35aab5a5-6e1b-4572-aabe-a73fb1632252] Running
	I0717 01:56:31.425360   71146 system_pods.go:74] duration metric: took 9.598959ms to wait for pod list to return data ...
	I0717 01:56:31.425368   71146 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:56:31.429053   71146 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:56:31.429075   71146 node_conditions.go:123] node cpu capacity is 2
	I0717 01:56:31.429084   71146 node_conditions.go:105] duration metric: took 3.710466ms to run NodePressure ...
	I0717 01:56:31.429098   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:31.699456   71146 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 01:56:31.703803   71146 kubeadm.go:739] kubelet initialised
	I0717 01:56:31.703825   71146 kubeadm.go:740] duration metric: took 4.345324ms waiting for restarted kubelet to initialise ...
	I0717 01:56:31.703835   71146 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:56:31.708962   71146 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:31.712850   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.712871   71146 pod_ready.go:81] duration metric: took 3.888169ms for pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:31.712879   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.712891   71146 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:31.717134   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "etcd-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.717156   71146 pod_ready.go:81] duration metric: took 4.256764ms for pod "etcd-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:31.717163   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "etcd-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.717169   71146 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:31.721479   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.721498   71146 pod_ready.go:81] duration metric: took 4.321032ms for pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:31.721508   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.721515   71146 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:31.819188   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.819217   71146 pod_ready.go:81] duration metric: took 97.692306ms for pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:31.819226   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.819231   71146 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l58xk" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:32.219730   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "kube-proxy-l58xk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:32.219766   71146 pod_ready.go:81] duration metric: took 400.526796ms for pod "kube-proxy-l58xk" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:32.219775   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "kube-proxy-l58xk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:32.219782   71146 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:32.619930   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:32.619961   71146 pod_ready.go:81] duration metric: took 400.172543ms for pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:32.619971   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:32.619978   71146 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:33.019223   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:33.019252   71146 pod_ready.go:81] duration metric: took 399.266573ms for pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:33.019263   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:33.019271   71146 pod_ready.go:38] duration metric: took 1.315427432s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:56:33.019291   71146 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 01:56:33.032094   71146 ops.go:34] apiserver oom_adj: -16
	I0717 01:56:33.032116   71146 kubeadm.go:597] duration metric: took 8.56004698s to restartPrimaryControlPlane
	I0717 01:56:33.032125   71146 kubeadm.go:394] duration metric: took 8.611681052s to StartCluster
	I0717 01:56:33.032140   71146 settings.go:142] acquiring lock: {Name:mk284e98550e98148329d8d8958a44beebf5635c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:33.032204   71146 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:56:33.033963   71146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:33.034198   71146 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.225 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:56:33.034337   71146 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 01:56:33.034405   71146 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-940222"
	I0717 01:56:33.034425   71146 addons.go:69] Setting metrics-server=true in profile "embed-certs-940222"
	I0717 01:56:33.034467   71146 addons.go:234] Setting addon metrics-server=true in "embed-certs-940222"
	W0717 01:56:33.034481   71146 addons.go:243] addon metrics-server should already be in state true
	I0717 01:56:33.034516   71146 host.go:66] Checking if "embed-certs-940222" exists ...
	I0717 01:56:33.034465   71146 addons.go:69] Setting default-storageclass=true in profile "embed-certs-940222"
	I0717 01:56:33.034469   71146 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-940222"
	I0717 01:56:33.034589   71146 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-940222"
	W0717 01:56:33.034632   71146 addons.go:243] addon storage-provisioner should already be in state true
	I0717 01:56:33.034411   71146 config.go:182] Loaded profile config "embed-certs-940222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:56:33.034725   71146 host.go:66] Checking if "embed-certs-940222" exists ...
	I0717 01:56:33.034963   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.034992   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.035052   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.035093   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.035199   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.035237   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.036051   71146 out.go:177] * Verifying Kubernetes components...
	I0717 01:56:33.037606   71146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:56:33.051343   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46137
	I0717 01:56:33.051970   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.052483   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.052516   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.052671   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41003
	I0717 01:56:33.052887   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.053016   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.053397   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.053443   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.053760   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.053775   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35669
	I0717 01:56:33.053779   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.054125   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.054139   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.054336   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:56:33.054625   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.054656   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.054984   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.055524   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.055563   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.057648   71146 addons.go:234] Setting addon default-storageclass=true in "embed-certs-940222"
	W0717 01:56:33.057668   71146 addons.go:243] addon default-storageclass should already be in state true
	I0717 01:56:33.057699   71146 host.go:66] Checking if "embed-certs-940222" exists ...
	I0717 01:56:33.058003   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.058036   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.070476   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38209
	I0717 01:56:33.070717   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I0717 01:56:33.071094   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.071289   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.071648   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.071665   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.071841   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.071863   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.072171   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.072293   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.072357   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:56:33.072581   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:56:33.073298   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46391
	I0717 01:56:33.073745   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.074224   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.074237   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.074585   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.074690   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:33.075032   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.075054   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.075361   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:33.077495   71146 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 01:56:33.077496   71146 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:56:33.079446   71146 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 01:56:33.079460   71146 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 01:56:33.079480   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:33.080373   71146 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:56:33.080386   71146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 01:56:33.080401   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:33.083272   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.083527   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.083623   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:33.083641   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.083899   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:33.084099   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:33.084168   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:33.084184   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.084273   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:33.084331   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:33.084463   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:33.084748   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:33.084890   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:33.085028   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:33.092382   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41853
	I0717 01:56:33.092826   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.093401   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.093418   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.094409   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.094576   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:56:33.096442   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:33.096730   71146 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 01:56:33.096750   71146 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 01:56:33.096768   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:33.099802   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.100290   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:33.100368   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.100472   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:33.100625   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:33.100760   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:33.100849   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:33.229494   71146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:56:33.246459   71146 node_ready.go:35] waiting up to 6m0s for node "embed-certs-940222" to be "Ready" ...
	I0717 01:56:33.400804   71146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 01:56:33.400824   71146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 01:56:33.411866   71146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:56:33.413220   71146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 01:56:33.426485   71146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 01:56:33.426506   71146 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 01:56:33.476707   71146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:56:33.476729   71146 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 01:56:33.539095   71146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:56:34.542027   71146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.130125192s)
	I0717 01:56:34.542089   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.542102   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.542103   71146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.128853338s)
	I0717 01:56:34.542139   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.542151   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.542420   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Closing plugin on server side
	I0717 01:56:34.542442   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.542442   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Closing plugin on server side
	I0717 01:56:34.542447   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.542450   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.542468   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.542474   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.542483   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.542505   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.542517   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.542711   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.542727   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.542715   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Closing plugin on server side
	I0717 01:56:34.542835   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.542847   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.549135   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.549160   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.549405   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.549428   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.616065   71146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.076933862s)
	I0717 01:56:34.616127   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.616142   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.616429   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.616479   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.616489   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Closing plugin on server side
	I0717 01:56:34.616499   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.616541   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.616784   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.616800   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.616810   71146 addons.go:475] Verifying addon metrics-server=true in "embed-certs-940222"
	I0717 01:56:34.619698   71146 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 01:56:32.326261   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:32.326310   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:31.779064   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:33.780671   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:32.475986   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:32.976812   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:33.476601   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:33.976667   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:34.476897   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:34.976610   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:35.476444   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:35.976859   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:36.476092   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:36.976979   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
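	The repeated pgrep lines from process 71929 above are a poll loop: roughly every 500ms the harness checks, over SSH, whether a process matching "kube-apiserver.*minikube.*" exists yet. A minimal local approximation of that loop (the real code goes through ssh_runner inside the VM, which is not reproduced here) might look like:

	// Rough approximation of the ~500ms polling loop logged by process 71929:
	// keep running "pgrep -xnf kube-apiserver.*minikube.*" until it reports a
	// PID or a deadline passes. Illustrative sketch only; the real loop runs
	// the command remotely via ssh_runner.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil && len(out) > 0 {
				fmt.Printf("kube-apiserver pid(s): %s", out)
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
	}

	func main() {
		if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}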
	I0717 01:56:34.620987   71146 addons.go:510] duration metric: took 1.586659462s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 01:56:35.250360   71146 node_ready.go:53] node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:37.251933   71146 node_ready.go:53] node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:37.326685   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:37.326726   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:39.977828   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:39.977860   71603 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:39.977877   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:40.002499   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:40.002532   71603 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:36.280516   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:38.779351   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:40.324290   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:40.329888   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:56:40.329914   71603 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:56:40.824413   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:40.831375   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:56:40.831407   71603 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:56:41.324677   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:41.333259   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 200:
	ok
	I0717 01:56:41.341378   71603 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 01:56:41.341426   71603 api_server.go:131] duration metric: took 40.517438405s to wait for apiserver health ...
	I0717 01:56:41.341438   71603 cni.go:84] Creating CNI manager for ""
	I0717 01:56:41.341447   71603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:56:41.343489   71603 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
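	The healthz sequence above is the usual progression while an apiserver restarts: 403 while anonymous access to /healthz is still forbidden, 500 while post-start hooks such as rbac/bootstrap-roles are pending, then 200. A hedged sketch of such a poll, assuming an unauthenticated GET with TLS verification disabled (details of the real api_server.go differ):

	// Hedged sketch of the /healthz polling pattern above: 403 and 500 mean
	// "not ready yet", 200 means healthy. Illustrative only.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // the "ok" body seen in the log
				}
				// 403 (anonymous forbidden) or 500 (post-start hooks pending): retry.
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s never reported healthy", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.174:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}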
	I0717 01:56:37.476813   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:37.976779   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:38.476554   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:38.976791   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:39.476946   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:39.976044   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:40.476526   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:40.976315   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:41.476688   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:41.976203   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:39.750483   71146 node_ready.go:53] node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:40.249907   71146 node_ready.go:49] node "embed-certs-940222" has status "Ready":"True"
	I0717 01:56:40.249934   71146 node_ready.go:38] duration metric: took 7.003442258s for node "embed-certs-940222" to be "Ready" ...
	I0717 01:56:40.249945   71146 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:56:40.255811   71146 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:40.762773   71146 pod_ready.go:92] pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:40.762795   71146 pod_ready.go:81] duration metric: took 506.956885ms for pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:40.762806   71146 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:42.768945   71146 pod_ready.go:102] pod "etcd-embed-certs-940222" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:41.344846   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:56:41.360339   71603 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 01:56:41.385845   71603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:56:41.409812   71603 system_pods.go:59] 8 kube-system pods found
	I0717 01:56:41.409843   71603 system_pods.go:61] "coredns-5cfdc65f69-ztqz8" [7c9caec8-56b6-4faa-9410-0528f108696c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:56:41.409849   71603 system_pods.go:61] "etcd-no-preload-391501" [603f01a1-2b07-4d1d-be14-4da4a9f1e1b2] Running
	I0717 01:56:41.409854   71603 system_pods.go:61] "kube-apiserver-no-preload-391501" [7733c5b6-5e30-472b-920d-3849f2849f7b] Running
	I0717 01:56:41.409860   71603 system_pods.go:61] "kube-controller-manager-no-preload-391501" [c1afab7e-9b46-4940-94ec-e62ebc10f406] Running
	I0717 01:56:41.409865   71603 system_pods.go:61] "kube-proxy-zbqhw" [26056c12-35cd-4a3e-b40a-1eca055bd1e2] Running
	I0717 01:56:41.409869   71603 system_pods.go:61] "kube-scheduler-no-preload-391501" [98f81994-9d2a-45b8-9719-90e181ee5d6f] Running
	I0717 01:56:41.409877   71603 system_pods.go:61] "metrics-server-78fcd8795b-g9x96" [86a6a2c3-ae04-486d-9751-0cc801f9fbfb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:56:41.409887   71603 system_pods.go:61] "storage-provisioner" [8b938905-d8e1-4129-8426-5e31a05d38db] Running
	I0717 01:56:41.409895   71603 system_pods.go:74] duration metric: took 24.018074ms to wait for pod list to return data ...
	I0717 01:56:41.409906   71603 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:56:41.418825   71603 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:56:41.418856   71603 node_conditions.go:123] node cpu capacity is 2
	I0717 01:56:41.418868   71603 node_conditions.go:105] duration metric: took 8.953821ms to run NodePressure ...
	I0717 01:56:41.418892   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:41.713730   71603 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 01:56:41.719162   71603 retry.go:31] will retry after 180.435127ms: kubelet not initialised
	I0717 01:56:41.906299   71603 retry.go:31] will retry after 320.946038ms: kubelet not initialised
	I0717 01:56:42.232875   71603 retry.go:31] will retry after 423.072333ms: kubelet not initialised
	I0717 01:56:42.661412   71603 retry.go:31] will retry after 1.138026932s: kubelet not initialised
	I0717 01:56:43.809525   71603 retry.go:31] will retry after 1.187704503s: kubelet not initialised
	I0717 01:56:45.009815   71603 kubeadm.go:739] kubelet initialised
	I0717 01:56:45.009839   71603 kubeadm.go:740] duration metric: took 3.296082732s waiting for restarted kubelet to initialise ...
	I0717 01:56:45.009850   71603 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:56:45.021149   71603 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:40.780159   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:43.279699   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:45.280407   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:42.476301   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:42.976939   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:43.477021   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:43.976910   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:44.476766   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:44.976415   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:45.476987   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:45.976666   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:46.476735   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:46.976643   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:44.770078   71146 pod_ready.go:102] pod "etcd-embed-certs-940222" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:47.269496   71146 pod_ready.go:92] pod "etcd-embed-certs-940222" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.269524   71146 pod_ready.go:81] duration metric: took 6.506711113s for pod "etcd-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.269538   71146 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.277267   71146 pod_ready.go:92] pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.277294   71146 pod_ready.go:81] duration metric: took 7.747271ms for pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.277309   71146 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.286697   71146 pod_ready.go:92] pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.286715   71146 pod_ready.go:81] duration metric: took 9.397698ms for pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.286723   71146 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l58xk" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.291876   71146 pod_ready.go:92] pod "kube-proxy-l58xk" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.291897   71146 pod_ready.go:81] duration metric: took 5.168432ms for pod "kube-proxy-l58xk" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.291905   71146 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.296201   71146 pod_ready.go:92] pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.296215   71146 pod_ready.go:81] duration metric: took 4.304055ms for pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.296222   71146 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace to be "Ready" ...
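	The pod_ready.go lines throughout this log repeatedly ask whether a pod has its Ready condition set to True. A minimal client-go sketch of that check (the pod name and the default kubeconfig path are illustrative assumptions):

	// Minimal client-go sketch of the pod_ready.go check repeated in this log:
	// fetch the pod and report whether its PodReady condition is True.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ready, err := podReady(cs, "kube-system", "metrics-server-569cc877fc-rhp7b")
		fmt.Println(ready, err)
	}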
	I0717 01:56:47.027495   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:49.028127   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:47.779497   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:50.279065   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:47.476576   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:47.976502   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:48.476634   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:48.976299   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:49.476069   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:49.976086   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:50.476859   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:50.976441   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:51.476217   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:51.976585   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:49.303729   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:51.802778   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:51.029194   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:53.528363   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:52.778915   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:54.780173   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:52.476652   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:52.976136   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:53.476991   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:53.976168   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:54.477049   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:54.976279   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:55.476176   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:55.976049   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:56.476464   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:56.976802   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:54.308491   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:56.802797   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:55.528547   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:57.533612   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:00.030406   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:57.278908   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:59.279393   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:57.476661   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:57.976021   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:58.477049   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:58.976940   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:59.476773   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:59.976397   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:00.476591   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:00.976189   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:01.476917   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:01.976263   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:58.806045   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:00.807112   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:02.529203   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:05.028677   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:01.779903   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:03.780163   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:02.476048   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:02.976019   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:03.476604   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:03.976602   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:04.477004   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:04.976726   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:05.476934   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:05.975985   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:06.476331   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:06.976185   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:03.302031   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:05.303601   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:07.803763   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:07.528021   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:09.528499   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:05.780204   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:08.279630   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:07.476887   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:07.975972   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:08.476034   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:08.976678   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:09.476927   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:09.477010   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:09.513328   71929 cri.go:89] found id: ""
	I0717 01:57:09.513352   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.513361   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:09.513368   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:09.513418   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:09.551203   71929 cri.go:89] found id: ""
	I0717 01:57:09.551228   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.551237   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:09.551244   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:09.551308   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:09.585321   71929 cri.go:89] found id: ""
	I0717 01:57:09.585352   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.585363   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:09.585370   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:09.585427   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:09.623977   71929 cri.go:89] found id: ""
	I0717 01:57:09.624004   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.624012   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:09.624019   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:09.624078   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:09.663338   71929 cri.go:89] found id: ""
	I0717 01:57:09.663367   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.663374   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:09.663380   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:09.663425   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:09.696381   71929 cri.go:89] found id: ""
	I0717 01:57:09.696412   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.696423   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:09.696436   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:09.696482   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:09.735892   71929 cri.go:89] found id: ""
	I0717 01:57:09.735922   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.735932   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:09.735944   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:09.736006   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:09.775878   71929 cri.go:89] found id: ""
	I0717 01:57:09.775909   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.775919   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:09.775929   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:09.775942   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:09.830021   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:09.830057   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:09.844753   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:09.844783   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:09.985140   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:09.985165   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:09.985179   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:10.049946   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:10.049984   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:10.310038   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:12.805565   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:11.529122   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:14.028939   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:10.779935   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:13.278388   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:15.280027   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:12.592959   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:12.608385   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:12.608467   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:12.649900   71929 cri.go:89] found id: ""
	I0717 01:57:12.649931   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.649942   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:12.649950   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:12.650021   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:12.684915   71929 cri.go:89] found id: ""
	I0717 01:57:12.684941   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.684948   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:12.684956   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:12.685010   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:12.727718   71929 cri.go:89] found id: ""
	I0717 01:57:12.727758   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.727766   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:12.727788   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:12.727864   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:12.767212   71929 cri.go:89] found id: ""
	I0717 01:57:12.767236   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.767244   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:12.767249   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:12.767295   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:12.806301   71929 cri.go:89] found id: ""
	I0717 01:57:12.806320   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.806327   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:12.806332   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:12.806405   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:12.843118   71929 cri.go:89] found id: ""
	I0717 01:57:12.843151   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.843162   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:12.843170   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:12.843245   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:12.876671   71929 cri.go:89] found id: ""
	I0717 01:57:12.876697   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.876707   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:12.876714   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:12.876790   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:12.916201   71929 cri.go:89] found id: ""
	I0717 01:57:12.916226   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.916232   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:12.916240   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:12.916250   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:12.970346   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:12.970385   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:12.985029   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:12.985053   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:13.068314   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:13.068340   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:13.068352   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:13.147862   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:13.147897   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:15.703130   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:15.717081   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:15.717160   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:15.757513   71929 cri.go:89] found id: ""
	I0717 01:57:15.757538   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.757545   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:15.757552   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:15.757599   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:15.794185   71929 cri.go:89] found id: ""
	I0717 01:57:15.794218   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.794231   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:15.794238   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:15.794300   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:15.830589   71929 cri.go:89] found id: ""
	I0717 01:57:15.830619   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.830628   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:15.830634   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:15.830694   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:15.869673   71929 cri.go:89] found id: ""
	I0717 01:57:15.869702   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.869713   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:15.869720   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:15.869782   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:15.909225   71929 cri.go:89] found id: ""
	I0717 01:57:15.909257   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.909267   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:15.909278   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:15.909343   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:15.944389   71929 cri.go:89] found id: ""
	I0717 01:57:15.944417   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.944424   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:15.944430   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:15.944490   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:15.982871   71929 cri.go:89] found id: ""
	I0717 01:57:15.982898   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.982907   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:15.982915   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:15.982983   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:16.025674   71929 cri.go:89] found id: ""
	I0717 01:57:16.025701   71929 logs.go:276] 0 containers: []
	W0717 01:57:16.025711   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:16.025721   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:16.025736   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:16.111608   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:16.111627   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:16.111638   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:16.184650   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:16.184689   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:16.230647   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:16.230693   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:16.286675   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:16.286710   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:15.303141   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:17.304891   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:16.029794   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:18.529463   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:17.780034   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:20.279882   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:18.802487   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:18.817483   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:18.817562   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:18.861623   71929 cri.go:89] found id: ""
	I0717 01:57:18.861653   71929 logs.go:276] 0 containers: []
	W0717 01:57:18.861664   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:18.861671   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:18.861733   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:18.901335   71929 cri.go:89] found id: ""
	I0717 01:57:18.901359   71929 logs.go:276] 0 containers: []
	W0717 01:57:18.901367   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:18.901372   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:18.901427   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:18.936477   71929 cri.go:89] found id: ""
	I0717 01:57:18.936508   71929 logs.go:276] 0 containers: []
	W0717 01:57:18.936518   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:18.936524   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:18.936581   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:18.971056   71929 cri.go:89] found id: ""
	I0717 01:57:18.971087   71929 logs.go:276] 0 containers: []
	W0717 01:57:18.971098   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:18.971106   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:18.971157   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:19.005399   71929 cri.go:89] found id: ""
	I0717 01:57:19.005431   71929 logs.go:276] 0 containers: []
	W0717 01:57:19.005453   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:19.005460   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:19.005525   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:19.040218   71929 cri.go:89] found id: ""
	I0717 01:57:19.040242   71929 logs.go:276] 0 containers: []
	W0717 01:57:19.040250   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:19.040257   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:19.040317   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:19.073365   71929 cri.go:89] found id: ""
	I0717 01:57:19.073392   71929 logs.go:276] 0 containers: []
	W0717 01:57:19.073402   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:19.073409   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:19.073471   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:19.108670   71929 cri.go:89] found id: ""
	I0717 01:57:19.108701   71929 logs.go:276] 0 containers: []
	W0717 01:57:19.108713   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:19.108725   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:19.108743   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:19.186077   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:19.186111   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:19.232181   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:19.232214   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:19.288713   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:19.288755   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:19.303089   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:19.303115   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:19.386372   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:21.886666   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:21.900905   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:21.900966   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:21.934955   71929 cri.go:89] found id: ""
	I0717 01:57:21.934979   71929 logs.go:276] 0 containers: []
	W0717 01:57:21.934987   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:21.934993   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:21.935036   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:21.972180   71929 cri.go:89] found id: ""
	I0717 01:57:21.972203   71929 logs.go:276] 0 containers: []
	W0717 01:57:21.972211   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:21.972217   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:21.972271   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:22.010452   71929 cri.go:89] found id: ""
	I0717 01:57:22.010479   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.010487   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:22.010493   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:22.010547   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:22.045824   71929 cri.go:89] found id: ""
	I0717 01:57:22.045888   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.045902   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:22.045911   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:22.045984   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:22.084734   71929 cri.go:89] found id: ""
	I0717 01:57:22.084760   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.084769   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:22.084774   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:22.084842   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:22.119808   71929 cri.go:89] found id: ""
	I0717 01:57:22.119838   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.119846   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:22.119852   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:22.119910   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:22.157537   71929 cri.go:89] found id: ""
	I0717 01:57:22.157583   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.157610   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:22.157620   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:22.157687   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:22.196021   71929 cri.go:89] found id: ""
	I0717 01:57:22.196052   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.196062   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:22.196079   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:22.196094   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:22.274350   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:22.274373   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:22.274386   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:22.364363   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:22.364401   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:19.803506   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:22.306698   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:21.028767   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:23.527943   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:24.529027   71603 pod_ready.go:92] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.529064   71603 pod_ready.go:81] duration metric: took 39.50788355s for pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.529078   71603 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.534655   71603 pod_ready.go:92] pod "etcd-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.534680   71603 pod_ready.go:81] duration metric: took 5.594492ms for pod "etcd-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.534691   71603 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.539602   71603 pod_ready.go:92] pod "kube-apiserver-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.539622   71603 pod_ready.go:81] duration metric: took 4.923891ms for pod "kube-apiserver-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.539631   71603 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.544475   71603 pod_ready.go:92] pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.544516   71603 pod_ready.go:81] duration metric: took 4.862078ms for pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.544532   71603 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zbqhw" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.549173   71603 pod_ready.go:92] pod "kube-proxy-zbqhw" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.549193   71603 pod_ready.go:81] duration metric: took 4.653986ms for pod "kube-proxy-zbqhw" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.549203   71603 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.925916   71603 pod_ready.go:92] pod "kube-scheduler-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.925944   71603 pod_ready.go:81] duration metric: took 376.73343ms for pod "kube-scheduler-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.925959   71603 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:22.779802   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:25.280281   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:22.410052   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:22.410092   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:22.462289   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:22.462326   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:24.978560   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:24.992533   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:24.992601   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:25.027708   71929 cri.go:89] found id: ""
	I0717 01:57:25.027746   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.027754   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:25.027760   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:25.027809   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:25.066946   71929 cri.go:89] found id: ""
	I0717 01:57:25.066974   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.066985   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:25.066992   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:25.067051   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:25.107209   71929 cri.go:89] found id: ""
	I0717 01:57:25.107238   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.107248   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:25.107254   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:25.107300   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:25.141548   71929 cri.go:89] found id: ""
	I0717 01:57:25.141577   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.141587   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:25.141594   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:25.141652   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:25.175822   71929 cri.go:89] found id: ""
	I0717 01:57:25.175853   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.175861   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:25.175866   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:25.175917   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:25.215672   71929 cri.go:89] found id: ""
	I0717 01:57:25.215705   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.215718   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:25.215726   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:25.215786   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:25.260392   71929 cri.go:89] found id: ""
	I0717 01:57:25.260422   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.260434   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:25.260442   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:25.260510   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:25.309953   71929 cri.go:89] found id: ""
	I0717 01:57:25.309981   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.309990   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:25.309999   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:25.310013   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:25.414204   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:25.414229   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:25.414244   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:25.501849   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:25.501883   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:25.545129   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:25.545163   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:25.599948   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:25.599984   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:24.803870   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:27.302993   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:26.932319   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:28.932999   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:27.280455   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:29.778817   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:28.115776   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:28.129710   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:28.129776   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:28.165380   71929 cri.go:89] found id: ""
	I0717 01:57:28.165409   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.165419   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:28.165425   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:28.165473   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:28.199225   71929 cri.go:89] found id: ""
	I0717 01:57:28.199251   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.199259   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:28.199264   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:28.199314   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:28.235564   71929 cri.go:89] found id: ""
	I0717 01:57:28.235585   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.235593   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:28.235598   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:28.235649   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:28.270377   71929 cri.go:89] found id: ""
	I0717 01:57:28.270409   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.270427   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:28.270435   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:28.270488   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:28.310132   71929 cri.go:89] found id: ""
	I0717 01:57:28.310156   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.310163   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:28.310168   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:28.310222   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:28.347590   71929 cri.go:89] found id: ""
	I0717 01:57:28.347619   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.347630   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:28.347638   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:28.347696   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:28.387953   71929 cri.go:89] found id: ""
	I0717 01:57:28.387988   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.388001   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:28.388010   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:28.388072   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:28.428788   71929 cri.go:89] found id: ""
	I0717 01:57:28.428811   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.428818   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:28.428826   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:28.428838   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:28.487411   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:28.487465   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:28.501121   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:28.501152   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:28.576296   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:28.576320   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:28.576335   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:28.660246   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:28.660288   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:31.201238   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:31.221132   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:31.221192   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:31.279839   71929 cri.go:89] found id: ""
	I0717 01:57:31.279867   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.279876   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:31.279884   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:31.279943   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:31.359764   71929 cri.go:89] found id: ""
	I0717 01:57:31.359796   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.359807   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:31.359814   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:31.359873   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:31.397045   71929 cri.go:89] found id: ""
	I0717 01:57:31.397077   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.397087   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:31.397094   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:31.397157   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:31.441356   71929 cri.go:89] found id: ""
	I0717 01:57:31.441388   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.441397   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:31.441404   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:31.441459   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:31.484014   71929 cri.go:89] found id: ""
	I0717 01:57:31.484040   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.484053   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:31.484060   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:31.484124   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:31.520686   71929 cri.go:89] found id: ""
	I0717 01:57:31.520714   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.520725   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:31.520733   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:31.520792   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:31.557300   71929 cri.go:89] found id: ""
	I0717 01:57:31.557326   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.557334   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:31.557339   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:31.557387   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:31.597753   71929 cri.go:89] found id: ""
	I0717 01:57:31.597782   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.597792   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:31.597804   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:31.597818   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:31.656796   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:31.656837   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:31.671287   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:31.671311   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:31.742752   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:31.742772   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:31.742784   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:31.828154   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:31.828186   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:29.303279   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:31.303332   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:31.434410   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:33.932319   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:31.778853   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:33.780535   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:34.368947   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:34.384323   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:34.384402   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:34.421138   71929 cri.go:89] found id: ""
	I0717 01:57:34.421171   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.421182   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:34.421190   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:34.421263   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:34.459077   71929 cri.go:89] found id: ""
	I0717 01:57:34.459105   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.459116   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:34.459123   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:34.459180   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:34.492987   71929 cri.go:89] found id: ""
	I0717 01:57:34.493016   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.493027   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:34.493038   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:34.493098   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:34.527801   71929 cri.go:89] found id: ""
	I0717 01:57:34.527827   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.527836   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:34.527841   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:34.527890   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:34.562877   71929 cri.go:89] found id: ""
	I0717 01:57:34.562904   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.562914   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:34.562921   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:34.562981   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:34.599387   71929 cri.go:89] found id: ""
	I0717 01:57:34.599409   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.599417   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:34.599423   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:34.599479   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:34.636087   71929 cri.go:89] found id: ""
	I0717 01:57:34.636118   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.636126   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:34.636132   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:34.636194   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:34.673168   71929 cri.go:89] found id: ""
	I0717 01:57:34.673196   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.673206   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:34.673214   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:34.673226   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:34.712833   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:34.712864   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:34.765926   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:34.765959   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:34.780024   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:34.780049   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:34.863080   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:34.863106   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:34.863122   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:33.803621   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:36.306114   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:35.933050   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:38.432520   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:36.280143   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:38.779168   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:37.446644   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:37.463015   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:37.463090   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:37.499563   71929 cri.go:89] found id: ""
	I0717 01:57:37.499592   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.499601   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:37.499607   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:37.499663   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:37.538516   71929 cri.go:89] found id: ""
	I0717 01:57:37.538543   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.538572   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:37.538579   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:37.538638   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:37.577032   71929 cri.go:89] found id: ""
	I0717 01:57:37.577061   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.577068   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:37.577074   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:37.577129   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:37.613534   71929 cri.go:89] found id: ""
	I0717 01:57:37.613563   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.613574   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:37.613582   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:37.613646   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:37.651346   71929 cri.go:89] found id: ""
	I0717 01:57:37.651370   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.651381   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:37.651389   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:37.651451   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:37.685949   71929 cri.go:89] found id: ""
	I0717 01:57:37.685989   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.686001   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:37.686008   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:37.686068   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:37.721706   71929 cri.go:89] found id: ""
	I0717 01:57:37.721744   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.721752   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:37.721759   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:37.721812   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:37.758948   71929 cri.go:89] found id: ""
	I0717 01:57:37.758976   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.758985   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:37.758994   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:37.759005   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:37.835305   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:37.835334   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:37.835349   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:37.916627   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:37.916660   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:37.956819   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:37.956851   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:38.007596   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:38.007641   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:40.522573   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:40.536850   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:40.536924   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:40.576172   71929 cri.go:89] found id: ""
	I0717 01:57:40.576200   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.576211   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:40.576218   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:40.576277   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:40.611926   71929 cri.go:89] found id: ""
	I0717 01:57:40.611958   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.611969   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:40.611976   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:40.612039   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:40.647225   71929 cri.go:89] found id: ""
	I0717 01:57:40.647251   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.647259   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:40.647265   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:40.647315   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:40.683871   71929 cri.go:89] found id: ""
	I0717 01:57:40.683902   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.683917   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:40.683925   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:40.683999   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:40.720941   71929 cri.go:89] found id: ""
	I0717 01:57:40.720971   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.720982   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:40.720989   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:40.721053   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:40.756695   71929 cri.go:89] found id: ""
	I0717 01:57:40.756728   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.756739   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:40.756746   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:40.756801   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:40.794181   71929 cri.go:89] found id: ""
	I0717 01:57:40.794214   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.794221   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:40.794226   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:40.794281   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:40.830361   71929 cri.go:89] found id: ""
	I0717 01:57:40.830396   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.830407   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:40.830417   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:40.830436   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:40.844827   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:40.844849   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:40.913003   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:40.913021   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:40.913035   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:40.996314   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:40.996348   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:41.041120   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:41.041151   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:38.801850   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:40.802727   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:42.802814   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:40.934130   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:43.432799   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:40.780350   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:43.279200   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:45.279971   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:43.593226   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:43.606395   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:43.606461   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:43.646260   71929 cri.go:89] found id: ""
	I0717 01:57:43.646290   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.646302   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:43.646310   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:43.646368   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:43.681148   71929 cri.go:89] found id: ""
	I0717 01:57:43.681174   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.681182   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:43.681189   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:43.681250   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:43.716568   71929 cri.go:89] found id: ""
	I0717 01:57:43.716595   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.716606   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:43.716613   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:43.716675   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:43.750507   71929 cri.go:89] found id: ""
	I0717 01:57:43.750536   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.750558   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:43.750566   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:43.750627   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:43.787207   71929 cri.go:89] found id: ""
	I0717 01:57:43.787234   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.787244   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:43.787251   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:43.787311   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:43.822997   71929 cri.go:89] found id: ""
	I0717 01:57:43.823034   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.823045   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:43.823052   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:43.823118   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:43.860605   71929 cri.go:89] found id: ""
	I0717 01:57:43.860632   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.860640   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:43.860646   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:43.860702   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:43.897419   71929 cri.go:89] found id: ""
	I0717 01:57:43.897451   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.897463   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:43.897473   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:43.897492   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:43.956361   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:43.956393   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:43.971077   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:43.971104   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:44.045234   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:44.045258   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:44.045275   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:44.122508   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:44.122544   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:46.660516   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:46.675555   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:46.675651   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:46.709264   71929 cri.go:89] found id: ""
	I0717 01:57:46.709291   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.709300   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:46.709306   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:46.709362   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:46.744865   71929 cri.go:89] found id: ""
	I0717 01:57:46.744898   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.744908   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:46.744915   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:46.744971   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:46.785837   71929 cri.go:89] found id: ""
	I0717 01:57:46.785860   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.785870   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:46.785878   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:46.785932   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:46.828801   71929 cri.go:89] found id: ""
	I0717 01:57:46.828832   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.828842   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:46.828849   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:46.828907   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:46.863122   71929 cri.go:89] found id: ""
	I0717 01:57:46.863151   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.863162   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:46.863175   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:46.863232   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:46.900705   71929 cri.go:89] found id: ""
	I0717 01:57:46.900731   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.900739   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:46.900744   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:46.900790   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:46.935774   71929 cri.go:89] found id: ""
	I0717 01:57:46.935816   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.935829   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:46.935840   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:46.935895   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:46.969274   71929 cri.go:89] found id: ""
	I0717 01:57:46.969304   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.969315   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:46.969325   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:46.969339   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:47.040318   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:47.040343   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:47.040358   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:47.119920   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:47.119954   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:47.168818   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:47.168847   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:47.221983   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:47.222034   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:45.303812   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:47.304051   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:45.433020   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:47.932755   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:49.936075   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:47.780328   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:49.781850   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:49.736564   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:49.749966   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:49.750025   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:49.788294   71929 cri.go:89] found id: ""
	I0717 01:57:49.788321   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.788332   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:49.788339   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:49.788396   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:49.826406   71929 cri.go:89] found id: ""
	I0717 01:57:49.826431   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.826440   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:49.826445   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:49.826491   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:49.864978   71929 cri.go:89] found id: ""
	I0717 01:57:49.865005   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.865015   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:49.865020   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:49.865074   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:49.901238   71929 cri.go:89] found id: ""
	I0717 01:57:49.901270   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.901281   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:49.901300   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:49.901366   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:49.937035   71929 cri.go:89] found id: ""
	I0717 01:57:49.937058   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.937065   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:49.937070   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:49.937207   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:49.977793   71929 cri.go:89] found id: ""
	I0717 01:57:49.977816   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.977823   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:49.977828   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:49.977873   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:50.012915   71929 cri.go:89] found id: ""
	I0717 01:57:50.012942   71929 logs.go:276] 0 containers: []
	W0717 01:57:50.012952   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:50.012959   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:50.013025   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:50.049085   71929 cri.go:89] found id: ""
	I0717 01:57:50.049115   71929 logs.go:276] 0 containers: []
	W0717 01:57:50.049127   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:50.049138   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:50.049156   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:50.087521   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:50.087549   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:50.140934   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:50.140978   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:50.156001   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:50.156033   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:50.231780   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:50.231811   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:50.231835   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:49.802916   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:51.803036   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:52.432307   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:54.432384   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:52.278585   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:54.279641   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:52.810064   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:52.823442   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:52.823508   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:52.860753   71929 cri.go:89] found id: ""
	I0717 01:57:52.860778   71929 logs.go:276] 0 containers: []
	W0717 01:57:52.860789   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:52.860797   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:52.860852   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:52.896264   71929 cri.go:89] found id: ""
	I0717 01:57:52.896289   71929 logs.go:276] 0 containers: []
	W0717 01:57:52.896297   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:52.896303   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:52.896349   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:52.932613   71929 cri.go:89] found id: ""
	I0717 01:57:52.932640   71929 logs.go:276] 0 containers: []
	W0717 01:57:52.932649   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:52.932657   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:52.932722   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:52.969691   71929 cri.go:89] found id: ""
	I0717 01:57:52.969720   71929 logs.go:276] 0 containers: []
	W0717 01:57:52.969728   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:52.969734   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:52.969788   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:53.007039   71929 cri.go:89] found id: ""
	I0717 01:57:53.007067   71929 logs.go:276] 0 containers: []
	W0717 01:57:53.007075   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:53.007081   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:53.007135   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:53.047736   71929 cri.go:89] found id: ""
	I0717 01:57:53.047762   71929 logs.go:276] 0 containers: []
	W0717 01:57:53.047772   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:53.047778   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:53.047838   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:53.083192   71929 cri.go:89] found id: ""
	I0717 01:57:53.083216   71929 logs.go:276] 0 containers: []
	W0717 01:57:53.083225   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:53.083230   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:53.083276   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:53.118509   71929 cri.go:89] found id: ""
	I0717 01:57:53.118536   71929 logs.go:276] 0 containers: []
	W0717 01:57:53.118545   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:53.118564   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:53.118589   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:53.203003   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:53.203039   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:53.244602   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:53.244627   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:53.295180   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:53.295216   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:53.310777   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:53.310805   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:53.389412   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:55.890450   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:55.903768   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:55.903843   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:55.944148   71929 cri.go:89] found id: ""
	I0717 01:57:55.944171   71929 logs.go:276] 0 containers: []
	W0717 01:57:55.944179   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:55.944185   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:55.944231   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:55.979945   71929 cri.go:89] found id: ""
	I0717 01:57:55.979970   71929 logs.go:276] 0 containers: []
	W0717 01:57:55.979980   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:55.979987   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:55.980045   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:56.019057   71929 cri.go:89] found id: ""
	I0717 01:57:56.019089   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.019100   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:56.019107   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:56.019162   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:56.054343   71929 cri.go:89] found id: ""
	I0717 01:57:56.054369   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.054378   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:56.054383   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:56.054434   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:56.091150   71929 cri.go:89] found id: ""
	I0717 01:57:56.091179   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.091189   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:56.091197   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:56.091256   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:56.127502   71929 cri.go:89] found id: ""
	I0717 01:57:56.127528   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.127538   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:56.127547   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:56.127602   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:56.167935   71929 cri.go:89] found id: ""
	I0717 01:57:56.167961   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.167972   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:56.167979   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:56.168048   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:56.209501   71929 cri.go:89] found id: ""
	I0717 01:57:56.209527   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.209537   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:56.209547   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:56.209561   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:56.257989   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:56.258023   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:56.272491   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:56.272519   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:56.361622   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:56.361653   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:56.361668   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:56.442953   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:56.442992   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:54.302376   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:56.303297   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:56.933123   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:58.933242   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:56.280399   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:58.779285   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:58.983914   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:58.997215   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:58.997292   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:59.032937   71929 cri.go:89] found id: ""
	I0717 01:57:59.032964   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.032980   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:59.032996   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:59.033057   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:59.067790   71929 cri.go:89] found id: ""
	I0717 01:57:59.067811   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.067819   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:59.067825   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:59.067881   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:59.107659   71929 cri.go:89] found id: ""
	I0717 01:57:59.107689   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.107699   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:59.107705   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:59.107754   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:59.150134   71929 cri.go:89] found id: ""
	I0717 01:57:59.150158   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.150168   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:59.150175   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:59.150235   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:59.192351   71929 cri.go:89] found id: ""
	I0717 01:57:59.192381   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.192391   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:59.192398   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:59.192460   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:59.228177   71929 cri.go:89] found id: ""
	I0717 01:57:59.228202   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.228209   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:59.228215   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:59.228261   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:59.267016   71929 cri.go:89] found id: ""
	I0717 01:57:59.267043   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.267052   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:59.267058   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:59.267109   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:59.302235   71929 cri.go:89] found id: ""
	I0717 01:57:59.302257   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.302263   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:59.302273   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:59.302285   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:59.368453   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:59.368492   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:59.383375   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:59.383399   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:59.454946   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:59.454975   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:59.454992   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:59.539576   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:59.539609   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:02.085516   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:02.099848   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:02.099909   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:02.136835   71929 cri.go:89] found id: ""
	I0717 01:58:02.136859   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.136867   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:02.136872   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:02.136928   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:02.175304   71929 cri.go:89] found id: ""
	I0717 01:58:02.175331   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.175338   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:02.175344   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:02.175389   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:02.210922   71929 cri.go:89] found id: ""
	I0717 01:58:02.210947   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.210955   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:02.210961   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:02.211018   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:02.246952   71929 cri.go:89] found id: ""
	I0717 01:58:02.246983   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.246992   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:02.246999   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:02.247053   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:02.284857   71929 cri.go:89] found id: ""
	I0717 01:58:02.284883   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.284892   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:02.284897   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:02.284944   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:02.322941   71929 cri.go:89] found id: ""
	I0717 01:58:02.322978   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.322999   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:02.323007   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:02.323065   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:02.357904   71929 cri.go:89] found id: ""
	I0717 01:58:02.357932   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.357943   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:02.357950   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:02.358012   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:02.392291   71929 cri.go:89] found id: ""
	I0717 01:58:02.392315   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.392322   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:02.392331   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:02.392346   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:58.802622   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:01.303663   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:01.433212   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:03.433962   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:00.779479   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:02.779619   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:05.279590   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:02.447670   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:02.447704   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:02.462259   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:02.462284   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:02.534304   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:02.534332   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:02.534347   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:02.612757   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:02.612799   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:05.153573   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:05.166702   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:05.166775   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:05.205213   71929 cri.go:89] found id: ""
	I0717 01:58:05.205238   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.205247   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:05.205252   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:05.205305   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:05.242021   71929 cri.go:89] found id: ""
	I0717 01:58:05.242048   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.242057   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:05.242063   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:05.242118   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:05.281862   71929 cri.go:89] found id: ""
	I0717 01:58:05.281889   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.281900   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:05.281908   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:05.281967   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:05.318125   71929 cri.go:89] found id: ""
	I0717 01:58:05.318157   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.318169   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:05.318177   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:05.318244   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:05.352470   71929 cri.go:89] found id: ""
	I0717 01:58:05.352504   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.352516   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:05.352524   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:05.352595   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:05.386692   71929 cri.go:89] found id: ""
	I0717 01:58:05.386722   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.386733   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:05.386741   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:05.386803   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:05.426676   71929 cri.go:89] found id: ""
	I0717 01:58:05.426731   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.426744   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:05.426751   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:05.426811   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:05.467974   71929 cri.go:89] found id: ""
	I0717 01:58:05.468000   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.468010   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:05.468020   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:05.468036   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:05.506769   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:05.506797   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:05.561745   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:05.561782   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:05.576743   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:05.576775   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:05.652856   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:05.652887   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:05.652903   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:03.304109   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:05.803632   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:05.434411   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:07.931796   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:09.932902   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:07.779196   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:09.779591   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:08.244185   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:08.257343   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:08.257420   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:08.297136   71929 cri.go:89] found id: ""
	I0717 01:58:08.297163   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.297174   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:08.297181   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:08.297237   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:08.336099   71929 cri.go:89] found id: ""
	I0717 01:58:08.336121   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.336129   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:08.336135   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:08.336185   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:08.369668   71929 cri.go:89] found id: ""
	I0717 01:58:08.369690   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.369698   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:08.369706   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:08.369756   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:08.405140   71929 cri.go:89] found id: ""
	I0717 01:58:08.405171   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.405179   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:08.405186   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:08.405249   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:08.446296   71929 cri.go:89] found id: ""
	I0717 01:58:08.446319   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.446326   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:08.446331   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:08.446377   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:08.483004   71929 cri.go:89] found id: ""
	I0717 01:58:08.483042   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.483062   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:08.483070   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:08.483139   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:08.520668   71929 cri.go:89] found id: ""
	I0717 01:58:08.520699   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.520710   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:08.520717   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:08.520776   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:08.554711   71929 cri.go:89] found id: ""
	I0717 01:58:08.554734   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.554744   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:08.554752   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:08.554763   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:08.606972   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:08.607004   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:08.621102   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:08.621134   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:08.690424   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:08.690443   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:08.690454   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:08.775151   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:08.775193   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:11.318471   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:11.331875   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:11.331954   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:11.375766   71929 cri.go:89] found id: ""
	I0717 01:58:11.375787   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.375795   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:11.375801   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:11.375863   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:11.417043   71929 cri.go:89] found id: ""
	I0717 01:58:11.417080   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.417103   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:11.417111   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:11.417169   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:11.459462   71929 cri.go:89] found id: ""
	I0717 01:58:11.459487   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.459495   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:11.459500   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:11.459551   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:11.516500   71929 cri.go:89] found id: ""
	I0717 01:58:11.516525   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.516533   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:11.516539   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:11.516590   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:11.573916   71929 cri.go:89] found id: ""
	I0717 01:58:11.573961   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.575159   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:11.575201   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:11.575275   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:11.619446   71929 cri.go:89] found id: ""
	I0717 01:58:11.619477   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.619489   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:11.619497   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:11.619558   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:11.654766   71929 cri.go:89] found id: ""
	I0717 01:58:11.654793   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.654802   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:11.654807   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:11.654859   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:11.690306   71929 cri.go:89] found id: ""
	I0717 01:58:11.690335   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.690346   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:11.690354   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:11.690366   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:11.744470   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:11.744516   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:11.758824   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:11.758856   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:11.841028   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:11.841058   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:11.841076   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:11.923299   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:11.923351   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:08.303010   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:10.303678   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:12.803090   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:11.933148   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:14.433109   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:12.280292   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:14.281580   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:14.466666   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:14.479676   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:14.479740   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:14.517890   71929 cri.go:89] found id: ""
	I0717 01:58:14.517919   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.517931   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:14.517938   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:14.517998   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:14.552891   71929 cri.go:89] found id: ""
	I0717 01:58:14.552918   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.552926   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:14.552931   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:14.552992   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:14.593571   71929 cri.go:89] found id: ""
	I0717 01:58:14.593596   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.593604   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:14.593609   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:14.593662   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:14.628869   71929 cri.go:89] found id: ""
	I0717 01:58:14.628897   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.628907   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:14.628913   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:14.628972   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:14.663558   71929 cri.go:89] found id: ""
	I0717 01:58:14.663586   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.663593   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:14.663599   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:14.663644   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:14.700788   71929 cri.go:89] found id: ""
	I0717 01:58:14.700824   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.700834   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:14.700843   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:14.700903   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:14.737975   71929 cri.go:89] found id: ""
	I0717 01:58:14.738014   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.738025   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:14.738032   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:14.738091   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:14.775419   71929 cri.go:89] found id: ""
	I0717 01:58:14.775443   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.775453   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:14.775465   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:14.775479   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:14.817635   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:14.817661   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:14.870667   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:14.870705   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:14.885208   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:14.885235   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:14.962286   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:14.962318   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:14.962334   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:14.803624   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:17.303944   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:16.434108   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:18.934577   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:16.779538   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:18.780694   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:17.537546   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:17.550258   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:17.550322   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:17.586251   71929 cri.go:89] found id: ""
	I0717 01:58:17.586278   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.586286   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:17.586292   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:17.586348   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:17.620903   71929 cri.go:89] found id: ""
	I0717 01:58:17.620927   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.620935   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:17.620941   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:17.620992   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:17.659292   71929 cri.go:89] found id: ""
	I0717 01:58:17.659319   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.659328   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:17.659334   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:17.659384   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:17.695603   71929 cri.go:89] found id: ""
	I0717 01:58:17.695632   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.695642   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:17.695650   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:17.695711   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:17.731943   71929 cri.go:89] found id: ""
	I0717 01:58:17.731970   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.731978   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:17.731984   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:17.732041   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:17.767257   71929 cri.go:89] found id: ""
	I0717 01:58:17.767284   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.767293   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:17.767299   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:17.767357   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:17.802455   71929 cri.go:89] found id: ""
	I0717 01:58:17.802495   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.802508   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:17.802516   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:17.802602   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:17.839321   71929 cri.go:89] found id: ""
	I0717 01:58:17.839351   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.839362   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:17.839374   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:17.839391   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:17.912269   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:17.912295   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:17.912311   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:17.990005   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:17.990038   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:18.029933   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:18.029960   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:18.081941   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:18.081977   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:20.597325   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:20.611835   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:20.611901   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:20.647899   71929 cri.go:89] found id: ""
	I0717 01:58:20.647922   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.647931   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:20.647936   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:20.647984   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:20.683783   71929 cri.go:89] found id: ""
	I0717 01:58:20.683816   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.683827   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:20.683834   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:20.683892   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:20.721803   71929 cri.go:89] found id: ""
	I0717 01:58:20.721833   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.721844   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:20.721851   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:20.721910   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:20.756148   71929 cri.go:89] found id: ""
	I0717 01:58:20.756177   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.756189   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:20.756196   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:20.756259   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:20.795976   71929 cri.go:89] found id: ""
	I0717 01:58:20.796014   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.796028   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:20.796036   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:20.796095   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:20.833775   71929 cri.go:89] found id: ""
	I0717 01:58:20.833805   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.833816   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:20.833824   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:20.833891   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:20.869138   71929 cri.go:89] found id: ""
	I0717 01:58:20.869163   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.869173   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:20.869180   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:20.869237   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:20.904865   71929 cri.go:89] found id: ""
	I0717 01:58:20.904893   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.904901   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:20.904910   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:20.904920   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:20.947268   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:20.947294   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:20.998541   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:20.998582   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:21.013797   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:21.013828   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:21.085101   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:21.085127   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:21.085141   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:19.804949   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:22.304273   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:21.436176   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:23.933548   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:21.279177   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:23.279599   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:25.279899   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:23.667361   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:23.681768   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:23.681828   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:23.717721   71929 cri.go:89] found id: ""
	I0717 01:58:23.717748   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.717757   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:23.717763   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:23.717827   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:23.752699   71929 cri.go:89] found id: ""
	I0717 01:58:23.752728   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.752738   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:23.752745   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:23.752809   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:23.790914   71929 cri.go:89] found id: ""
	I0717 01:58:23.790944   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.790955   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:23.790962   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:23.791021   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:23.827253   71929 cri.go:89] found id: ""
	I0717 01:58:23.827276   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.827285   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:23.827338   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:23.827392   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:23.864466   71929 cri.go:89] found id: ""
	I0717 01:58:23.864510   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.864520   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:23.864527   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:23.864577   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:23.900734   71929 cri.go:89] found id: ""
	I0717 01:58:23.900775   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.900786   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:23.900794   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:23.900855   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:23.937212   71929 cri.go:89] found id: ""
	I0717 01:58:23.937236   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.937243   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:23.937249   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:23.937304   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:23.973730   71929 cri.go:89] found id: ""
	I0717 01:58:23.973755   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.973764   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:23.973774   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:23.973786   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:24.026122   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:24.026163   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:24.040755   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:24.040784   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:24.112224   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:24.112254   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:24.112277   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:24.195247   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:24.195281   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:26.738042   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:26.751545   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:26.751602   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:26.786778   71929 cri.go:89] found id: ""
	I0717 01:58:26.786813   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.786824   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:26.786831   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:26.786889   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:26.828776   71929 cri.go:89] found id: ""
	I0717 01:58:26.828806   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.828818   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:26.828825   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:26.828887   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:26.868439   71929 cri.go:89] found id: ""
	I0717 01:58:26.868468   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.868479   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:26.868486   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:26.868546   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:26.900249   71929 cri.go:89] found id: ""
	I0717 01:58:26.900282   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.900292   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:26.900297   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:26.900344   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:26.933763   71929 cri.go:89] found id: ""
	I0717 01:58:26.933798   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.933808   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:26.933816   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:26.933882   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:26.968681   71929 cri.go:89] found id: ""
	I0717 01:58:26.968712   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.968722   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:26.968729   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:26.968788   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:27.002081   71929 cri.go:89] found id: ""
	I0717 01:58:27.002113   71929 logs.go:276] 0 containers: []
	W0717 01:58:27.002128   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:27.002135   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:27.002196   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:27.035138   71929 cri.go:89] found id: ""
	I0717 01:58:27.035161   71929 logs.go:276] 0 containers: []
	W0717 01:58:27.035170   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:27.035177   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:27.035189   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:27.091207   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:27.091244   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:27.105765   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:27.105793   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:27.175533   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:27.175563   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:27.175580   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:27.260903   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:27.260951   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:24.802002   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:26.803330   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:26.432259   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:28.433226   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:27.280206   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:29.781139   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:29.802451   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:29.816503   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:29.816573   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:29.854887   71929 cri.go:89] found id: ""
	I0717 01:58:29.854921   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.854931   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:29.854936   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:29.854983   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:29.887529   71929 cri.go:89] found id: ""
	I0717 01:58:29.887559   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.887570   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:29.887577   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:29.887638   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:29.924995   71929 cri.go:89] found id: ""
	I0717 01:58:29.925020   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.925028   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:29.925034   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:29.925091   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:29.960064   71929 cri.go:89] found id: ""
	I0717 01:58:29.960092   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.960104   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:29.960111   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:29.960178   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:29.995408   71929 cri.go:89] found id: ""
	I0717 01:58:29.995431   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.995438   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:29.995443   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:29.995494   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:30.028219   71929 cri.go:89] found id: ""
	I0717 01:58:30.028247   71929 logs.go:276] 0 containers: []
	W0717 01:58:30.028254   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:30.028260   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:30.028309   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:30.062529   71929 cri.go:89] found id: ""
	I0717 01:58:30.062576   71929 logs.go:276] 0 containers: []
	W0717 01:58:30.062589   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:30.062597   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:30.062664   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:30.095854   71929 cri.go:89] found id: ""
	I0717 01:58:30.095882   71929 logs.go:276] 0 containers: []
	W0717 01:58:30.095893   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:30.095904   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:30.095919   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:30.148083   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:30.148114   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:30.161861   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:30.161892   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:30.236474   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:30.236503   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:30.236519   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:30.319691   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:30.319720   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:28.804656   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:31.302637   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:30.932659   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:32.934225   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:32.279141   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:34.279312   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:32.867821   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:32.881480   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:32.881541   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:32.918289   71929 cri.go:89] found id: ""
	I0717 01:58:32.918316   71929 logs.go:276] 0 containers: []
	W0717 01:58:32.918327   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:32.918335   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:32.918396   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:32.955383   71929 cri.go:89] found id: ""
	I0717 01:58:32.955417   71929 logs.go:276] 0 containers: []
	W0717 01:58:32.955426   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:32.955433   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:32.955498   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:32.990432   71929 cri.go:89] found id: ""
	I0717 01:58:32.990460   71929 logs.go:276] 0 containers: []
	W0717 01:58:32.990467   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:32.990472   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:32.990531   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:33.034653   71929 cri.go:89] found id: ""
	I0717 01:58:33.034685   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.034697   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:33.034703   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:33.034763   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:33.077875   71929 cri.go:89] found id: ""
	I0717 01:58:33.077911   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.077919   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:33.077926   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:33.077988   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:33.114800   71929 cri.go:89] found id: ""
	I0717 01:58:33.114840   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.114852   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:33.114864   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:33.114946   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:33.151095   71929 cri.go:89] found id: ""
	I0717 01:58:33.151229   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.151242   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:33.151249   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:33.151324   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:33.190100   71929 cri.go:89] found id: ""
	I0717 01:58:33.190128   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.190138   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:33.190149   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:33.190163   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:33.271195   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:33.271231   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:33.317539   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:33.317569   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:33.370188   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:33.370224   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:33.385016   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:33.385045   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:33.460017   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:35.960499   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:35.974504   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:35.974583   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:36.008652   71929 cri.go:89] found id: ""
	I0717 01:58:36.008696   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.008704   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:36.008710   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:36.008770   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:36.044068   71929 cri.go:89] found id: ""
	I0717 01:58:36.044097   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.044106   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:36.044113   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:36.044174   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:36.083572   71929 cri.go:89] found id: ""
	I0717 01:58:36.083602   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.083610   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:36.083616   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:36.083682   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:36.116716   71929 cri.go:89] found id: ""
	I0717 01:58:36.116744   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.116753   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:36.116761   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:36.116820   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:36.156042   71929 cri.go:89] found id: ""
	I0717 01:58:36.156069   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.156080   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:36.156087   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:36.156148   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:36.192005   71929 cri.go:89] found id: ""
	I0717 01:58:36.192033   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.192045   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:36.192055   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:36.192116   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:36.228720   71929 cri.go:89] found id: ""
	I0717 01:58:36.228751   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.228763   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:36.228769   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:36.228817   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:36.263835   71929 cri.go:89] found id: ""
	I0717 01:58:36.263862   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.263872   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:36.263882   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:36.263897   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:36.278545   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:36.278609   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:36.361182   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:36.361208   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:36.361225   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:36.447797   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:36.447832   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:36.492167   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:36.492196   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:33.304750   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:35.803867   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:35.432659   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:37.433360   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:39.433481   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:36.282525   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:38.779592   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:39.045613   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:39.058615   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:39.058688   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:39.094625   71929 cri.go:89] found id: ""
	I0717 01:58:39.094672   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.094684   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:39.094692   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:39.094755   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:39.132856   71929 cri.go:89] found id: ""
	I0717 01:58:39.132887   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.132898   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:39.132905   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:39.132966   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:39.171017   71929 cri.go:89] found id: ""
	I0717 01:58:39.171037   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.171044   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:39.171051   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:39.171112   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:39.210146   71929 cri.go:89] found id: ""
	I0717 01:58:39.210176   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.210186   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:39.210193   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:39.210269   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:39.244307   71929 cri.go:89] found id: ""
	I0717 01:58:39.244332   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.244342   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:39.244349   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:39.244411   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:39.279649   71929 cri.go:89] found id: ""
	I0717 01:58:39.279675   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.279682   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:39.279688   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:39.279755   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:39.317699   71929 cri.go:89] found id: ""
	I0717 01:58:39.317726   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.317735   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:39.317742   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:39.317789   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:39.352319   71929 cri.go:89] found id: ""
	I0717 01:58:39.352351   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.352365   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:39.352377   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:39.352392   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:39.404153   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:39.404188   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:39.419796   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:39.419828   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:39.495463   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:39.495485   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:39.495499   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:39.576742   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:39.576795   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:42.132481   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:42.145588   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:42.145658   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:42.181231   71929 cri.go:89] found id: ""
	I0717 01:58:42.181257   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.181265   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:42.181270   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:42.181321   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:42.216876   71929 cri.go:89] found id: ""
	I0717 01:58:42.216905   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.216917   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:42.216923   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:42.216984   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:42.256918   71929 cri.go:89] found id: ""
	I0717 01:58:42.256948   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.256959   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:42.256967   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:42.257022   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:42.291930   71929 cri.go:89] found id: ""
	I0717 01:58:42.291957   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.291964   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:42.291975   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:42.292035   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:42.329927   71929 cri.go:89] found id: ""
	I0717 01:58:42.329954   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.329964   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:42.329970   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:42.330014   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:42.364041   71929 cri.go:89] found id: ""
	I0717 01:58:42.364072   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.364085   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:42.364093   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:42.364150   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:38.302060   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:40.302711   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:42.303560   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:41.437100   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:43.932845   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:40.780109   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:43.280118   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:42.400751   71929 cri.go:89] found id: ""
	I0717 01:58:42.400775   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.400784   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:42.400790   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:42.400840   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:42.438200   71929 cri.go:89] found id: ""
	I0717 01:58:42.438228   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.438240   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:42.438251   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:42.438265   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:42.455268   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:42.455303   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:42.537344   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:42.537368   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:42.537381   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:42.618487   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:42.618522   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:42.661273   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:42.661299   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:45.212631   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:45.226247   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:45.226330   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:45.263067   71929 cri.go:89] found id: ""
	I0717 01:58:45.263098   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.263110   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:45.263117   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:45.263177   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:45.299025   71929 cri.go:89] found id: ""
	I0717 01:58:45.299056   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.299067   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:45.299074   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:45.299137   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:45.346828   71929 cri.go:89] found id: ""
	I0717 01:58:45.346858   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.346868   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:45.346876   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:45.346938   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:45.390879   71929 cri.go:89] found id: ""
	I0717 01:58:45.390905   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.390913   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:45.390918   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:45.390966   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:45.426794   71929 cri.go:89] found id: ""
	I0717 01:58:45.426823   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.426834   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:45.426841   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:45.426902   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:45.463834   71929 cri.go:89] found id: ""
	I0717 01:58:45.463863   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.463873   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:45.463880   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:45.463942   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:45.500660   71929 cri.go:89] found id: ""
	I0717 01:58:45.500689   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.500701   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:45.500708   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:45.500766   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:45.537332   71929 cri.go:89] found id: ""
	I0717 01:58:45.537356   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.537364   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:45.537373   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:45.537388   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:45.551194   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:45.551222   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:45.623863   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:45.623892   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:45.623906   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:45.699740   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:45.699782   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:45.739580   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:45.739613   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:44.803138   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:47.302471   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:46.434311   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:48.933004   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:45.779778   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:48.279595   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:48.300789   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:48.315608   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:48.315667   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:48.353050   71929 cri.go:89] found id: ""
	I0717 01:58:48.353076   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.353084   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:48.353089   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:48.353133   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:48.394789   71929 cri.go:89] found id: ""
	I0717 01:58:48.394817   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.394829   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:48.394837   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:48.394900   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:48.433430   71929 cri.go:89] found id: ""
	I0717 01:58:48.433457   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.433468   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:48.433475   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:48.433530   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:48.467215   71929 cri.go:89] found id: ""
	I0717 01:58:48.467243   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.467254   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:48.467262   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:48.467318   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:48.501087   71929 cri.go:89] found id: ""
	I0717 01:58:48.501120   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.501131   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:48.501138   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:48.501204   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:48.538648   71929 cri.go:89] found id: ""
	I0717 01:58:48.538683   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.538696   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:48.538706   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:48.538762   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:48.573006   71929 cri.go:89] found id: ""
	I0717 01:58:48.573030   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.573040   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:48.573047   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:48.573106   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:48.608779   71929 cri.go:89] found id: ""
	I0717 01:58:48.608803   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.608813   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:48.608824   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:48.608837   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:48.659250   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:48.659290   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:48.673418   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:48.673449   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:48.748175   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:48.748196   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:48.748207   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:48.824238   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:48.824274   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:51.367155   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:51.382458   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:51.382527   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:51.424005   71929 cri.go:89] found id: ""
	I0717 01:58:51.424040   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.424051   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:51.424059   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:51.424117   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:51.463318   71929 cri.go:89] found id: ""
	I0717 01:58:51.463348   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.463357   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:51.463363   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:51.463414   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:51.502261   71929 cri.go:89] found id: ""
	I0717 01:58:51.502290   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.502301   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:51.502309   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:51.502362   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:51.536277   71929 cri.go:89] found id: ""
	I0717 01:58:51.536308   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.536319   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:51.536327   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:51.536392   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:51.580598   71929 cri.go:89] found id: ""
	I0717 01:58:51.580629   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.580640   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:51.580648   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:51.580726   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:51.618666   71929 cri.go:89] found id: ""
	I0717 01:58:51.618690   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.618697   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:51.618702   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:51.618747   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:51.654742   71929 cri.go:89] found id: ""
	I0717 01:58:51.654777   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.654790   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:51.654799   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:51.654863   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:51.698006   71929 cri.go:89] found id: ""
	I0717 01:58:51.698034   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.698043   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:51.698051   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:51.698062   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:51.754812   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:51.754852   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:51.771887   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:51.771919   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:51.859627   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:51.859657   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:51.859675   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:51.946633   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:51.946673   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:49.302540   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:51.803884   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:51.433981   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:53.933306   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:50.781428   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:53.279780   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:54.494188   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:54.509111   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:54.509190   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:54.546424   71929 cri.go:89] found id: ""
	I0717 01:58:54.546454   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.546464   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:54.546471   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:54.546532   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:54.586811   71929 cri.go:89] found id: ""
	I0717 01:58:54.586841   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.586853   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:54.586860   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:54.586918   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:54.627350   71929 cri.go:89] found id: ""
	I0717 01:58:54.627375   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.627383   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:54.627388   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:54.627438   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:54.665901   71929 cri.go:89] found id: ""
	I0717 01:58:54.665941   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.665954   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:54.665974   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:54.666041   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:54.702921   71929 cri.go:89] found id: ""
	I0717 01:58:54.702948   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.702958   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:54.702965   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:54.703027   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:54.737378   71929 cri.go:89] found id: ""
	I0717 01:58:54.737406   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.737414   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:54.737421   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:54.737469   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:54.771924   71929 cri.go:89] found id: ""
	I0717 01:58:54.771954   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.771964   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:54.771971   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:54.772055   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:54.812939   71929 cri.go:89] found id: ""
	I0717 01:58:54.812972   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.812983   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:54.812995   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:54.813010   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:54.862979   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:54.863013   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:54.877467   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:54.877504   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:54.953924   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:54.953950   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:54.953963   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:55.032019   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:55.032052   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:54.302727   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:56.311656   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:55.933968   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:58.432611   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:55.778263   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:57.781311   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:00.278937   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:57.573130   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:57.591689   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:57.591762   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:57.626444   71929 cri.go:89] found id: ""
	I0717 01:58:57.626469   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.626479   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:57.626486   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:57.626570   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:57.661280   71929 cri.go:89] found id: ""
	I0717 01:58:57.661305   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.661314   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:57.661321   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:57.661376   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:57.695678   71929 cri.go:89] found id: ""
	I0717 01:58:57.695703   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.695711   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:57.695717   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:57.695762   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:57.729705   71929 cri.go:89] found id: ""
	I0717 01:58:57.729734   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.729742   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:57.729748   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:57.729804   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:57.763338   71929 cri.go:89] found id: ""
	I0717 01:58:57.763365   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.763373   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:57.763387   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:57.763433   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:57.800576   71929 cri.go:89] found id: ""
	I0717 01:58:57.800600   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.800608   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:57.800615   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:57.800701   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:57.842401   71929 cri.go:89] found id: ""
	I0717 01:58:57.842428   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.842439   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:57.842446   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:57.842503   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:57.880355   71929 cri.go:89] found id: ""
	I0717 01:58:57.880379   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.880387   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:57.880395   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:57.880412   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:57.938215   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:57.938252   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:57.952835   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:57.952876   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:58.027203   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:58.027231   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:58.027246   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:58.108442   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:58.108483   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:00.648580   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:00.662596   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:00.662667   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:00.696315   71929 cri.go:89] found id: ""
	I0717 01:59:00.696342   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.696351   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:00.696356   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:00.696411   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:00.732117   71929 cri.go:89] found id: ""
	I0717 01:59:00.732147   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.732158   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:00.732164   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:00.732212   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:00.768747   71929 cri.go:89] found id: ""
	I0717 01:59:00.768779   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.768790   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:00.768797   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:00.768856   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:00.807557   71929 cri.go:89] found id: ""
	I0717 01:59:00.807585   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.807592   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:00.807598   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:00.807651   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:00.844127   71929 cri.go:89] found id: ""
	I0717 01:59:00.844152   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.844161   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:00.844166   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:00.844222   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:00.879565   71929 cri.go:89] found id: ""
	I0717 01:59:00.879590   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.879597   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:00.879613   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:00.879684   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:00.917352   71929 cri.go:89] found id: ""
	I0717 01:59:00.917379   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.917387   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:00.917393   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:00.917440   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:00.952603   71929 cri.go:89] found id: ""
	I0717 01:59:00.952630   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.952637   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:00.952647   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:00.952688   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:01.007203   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:01.007242   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:01.021476   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:01.021512   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:01.102283   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:01.102306   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:01.102320   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:01.175736   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:01.175771   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:58.803034   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:00.803718   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:00.932781   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:03.433188   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:02.281269   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:04.779257   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:03.717612   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:03.732446   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:03.732511   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:03.767485   71929 cri.go:89] found id: ""
	I0717 01:59:03.767519   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.767533   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:03.767542   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:03.767607   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:03.803961   71929 cri.go:89] found id: ""
	I0717 01:59:03.803989   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.804000   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:03.804007   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:03.804074   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:03.842734   71929 cri.go:89] found id: ""
	I0717 01:59:03.842768   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.842780   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:03.842788   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:03.842915   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:03.883571   71929 cri.go:89] found id: ""
	I0717 01:59:03.883598   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.883608   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:03.883616   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:03.883682   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:03.922037   71929 cri.go:89] found id: ""
	I0717 01:59:03.922065   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.922076   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:03.922084   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:03.922143   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:03.961135   71929 cri.go:89] found id: ""
	I0717 01:59:03.961165   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.961176   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:03.961183   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:03.961244   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:03.995542   71929 cri.go:89] found id: ""
	I0717 01:59:03.995570   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.995580   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:03.995589   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:03.995647   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:04.030142   71929 cri.go:89] found id: ""
	I0717 01:59:04.030170   71929 logs.go:276] 0 containers: []
	W0717 01:59:04.030178   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:04.030187   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:04.030198   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:04.110329   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:04.110366   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:04.152194   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:04.152224   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:04.204012   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:04.204048   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:04.218261   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:04.218291   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:04.290786   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:06.791166   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:06.806662   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:06.806722   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:06.841447   71929 cri.go:89] found id: ""
	I0717 01:59:06.841476   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.841486   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:06.841494   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:06.841554   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:06.879920   71929 cri.go:89] found id: ""
	I0717 01:59:06.879956   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.879971   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:06.879976   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:06.880033   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:06.914436   71929 cri.go:89] found id: ""
	I0717 01:59:06.914465   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.914476   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:06.914484   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:06.914566   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:06.952098   71929 cri.go:89] found id: ""
	I0717 01:59:06.952127   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.952135   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:06.952141   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:06.952187   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:06.988054   71929 cri.go:89] found id: ""
	I0717 01:59:06.988085   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.988096   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:06.988103   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:06.988168   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:07.026633   71929 cri.go:89] found id: ""
	I0717 01:59:07.026658   71929 logs.go:276] 0 containers: []
	W0717 01:59:07.026670   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:07.026676   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:07.026732   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:07.064433   71929 cri.go:89] found id: ""
	I0717 01:59:07.064454   71929 logs.go:276] 0 containers: []
	W0717 01:59:07.064463   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:07.064468   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:07.064545   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:07.108352   71929 cri.go:89] found id: ""
	I0717 01:59:07.108385   71929 logs.go:276] 0 containers: []
	W0717 01:59:07.108396   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:07.108410   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:07.108428   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:07.163554   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:07.163593   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:07.177221   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:07.177249   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:07.249712   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:07.249735   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:07.249748   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:07.333011   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:07.333044   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:03.303048   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:05.304001   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:07.314317   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:05.932370   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:07.933031   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:09.933728   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:06.780342   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:09.279683   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:09.873187   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:09.887579   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:09.887658   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:09.923675   71929 cri.go:89] found id: ""
	I0717 01:59:09.923706   71929 logs.go:276] 0 containers: []
	W0717 01:59:09.923716   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:09.923724   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:09.923789   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:09.961248   71929 cri.go:89] found id: ""
	I0717 01:59:09.961278   71929 logs.go:276] 0 containers: []
	W0717 01:59:09.961288   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:09.961296   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:09.961354   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:10.000069   71929 cri.go:89] found id: ""
	I0717 01:59:10.000094   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.000101   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:10.000107   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:10.000157   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:10.036784   71929 cri.go:89] found id: ""
	I0717 01:59:10.036808   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.036815   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:10.036820   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:10.036869   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:10.072746   71929 cri.go:89] found id: ""
	I0717 01:59:10.072778   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.072789   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:10.072796   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:10.072856   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:10.109520   71929 cri.go:89] found id: ""
	I0717 01:59:10.109544   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.109552   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:10.109557   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:10.109608   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:10.142521   71929 cri.go:89] found id: ""
	I0717 01:59:10.142565   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.142576   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:10.142584   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:10.142647   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:10.175772   71929 cri.go:89] found id: ""
	I0717 01:59:10.175800   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.175812   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:10.175823   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:10.175837   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:10.213534   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:10.213561   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:10.266449   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:10.266485   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:10.282204   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:10.282234   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:10.353974   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:10.354004   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:10.354017   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:09.802047   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:11.802200   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:12.433722   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:14.932285   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:11.780394   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:13.781691   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:12.936509   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:12.951547   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:12.951616   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:12.987833   71929 cri.go:89] found id: ""
	I0717 01:59:12.987860   71929 logs.go:276] 0 containers: []
	W0717 01:59:12.987868   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:12.987873   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:12.987922   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:13.026500   71929 cri.go:89] found id: ""
	I0717 01:59:13.026529   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.026539   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:13.026546   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:13.026625   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:13.061631   71929 cri.go:89] found id: ""
	I0717 01:59:13.061664   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.061674   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:13.061682   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:13.061745   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:13.099449   71929 cri.go:89] found id: ""
	I0717 01:59:13.099476   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.099487   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:13.099494   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:13.099565   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:13.137271   71929 cri.go:89] found id: ""
	I0717 01:59:13.137299   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.137309   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:13.137317   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:13.137384   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:13.174432   71929 cri.go:89] found id: ""
	I0717 01:59:13.174462   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.174472   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:13.174478   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:13.174539   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:13.212820   71929 cri.go:89] found id: ""
	I0717 01:59:13.212845   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.212855   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:13.212865   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:13.212930   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:13.245961   71929 cri.go:89] found id: ""
	I0717 01:59:13.245993   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.246004   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:13.246014   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:13.246028   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:13.284801   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:13.284828   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:13.338476   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:13.338511   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:13.352751   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:13.352777   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:13.434001   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:13.434035   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:13.434050   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:16.022525   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:16.036863   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:16.036941   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:16.074370   71929 cri.go:89] found id: ""
	I0717 01:59:16.074398   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.074409   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:16.074416   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:16.074476   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:16.112239   71929 cri.go:89] found id: ""
	I0717 01:59:16.112267   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.112276   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:16.112281   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:16.112329   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:16.147398   71929 cri.go:89] found id: ""
	I0717 01:59:16.147422   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.147429   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:16.147435   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:16.147490   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:16.182112   71929 cri.go:89] found id: ""
	I0717 01:59:16.182141   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.182149   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:16.182155   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:16.182203   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:16.219134   71929 cri.go:89] found id: ""
	I0717 01:59:16.219163   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.219174   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:16.219182   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:16.219238   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:16.255892   71929 cri.go:89] found id: ""
	I0717 01:59:16.255924   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.255934   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:16.255943   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:16.256003   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:16.291202   71929 cri.go:89] found id: ""
	I0717 01:59:16.291228   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.291238   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:16.291245   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:16.291304   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:16.330748   71929 cri.go:89] found id: ""
	I0717 01:59:16.330779   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.330790   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:16.330801   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:16.330815   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:16.344628   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:16.344668   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:16.415735   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:16.415761   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:16.415775   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:16.499411   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:16.499449   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:16.541244   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:16.541270   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:13.802477   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:16.311229   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:16.933493   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:18.934299   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:16.279421   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:18.778998   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:19.095060   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:19.107920   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:19.107976   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:19.143446   71929 cri.go:89] found id: ""
	I0717 01:59:19.143476   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.143485   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:19.143490   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:19.143550   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:19.179216   71929 cri.go:89] found id: ""
	I0717 01:59:19.179247   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.179259   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:19.179266   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:19.179317   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:19.212468   71929 cri.go:89] found id: ""
	I0717 01:59:19.212498   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.212508   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:19.212516   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:19.212574   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:19.245019   71929 cri.go:89] found id: ""
	I0717 01:59:19.245047   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.245058   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:19.245065   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:19.245123   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:19.278430   71929 cri.go:89] found id: ""
	I0717 01:59:19.278457   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.278467   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:19.278474   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:19.278530   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:19.317685   71929 cri.go:89] found id: ""
	I0717 01:59:19.317714   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.317722   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:19.317729   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:19.317783   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:19.352938   71929 cri.go:89] found id: ""
	I0717 01:59:19.352974   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.352986   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:19.353000   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:19.353052   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:19.387238   71929 cri.go:89] found id: ""
	I0717 01:59:19.387272   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.387283   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:19.387295   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:19.387314   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:19.440138   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:19.440171   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:19.456372   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:19.456402   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:19.527881   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:19.527906   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:19.527921   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:19.611903   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:19.611937   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:22.160422   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:22.172802   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:22.172862   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:22.209283   71929 cri.go:89] found id: ""
	I0717 01:59:22.209315   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.209327   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:22.209335   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:22.209396   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:22.243927   71929 cri.go:89] found id: ""
	I0717 01:59:22.243955   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.243965   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:22.243972   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:22.244022   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:22.276730   71929 cri.go:89] found id: ""
	I0717 01:59:22.276754   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.276761   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:22.276767   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:22.276814   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:22.319378   71929 cri.go:89] found id: ""
	I0717 01:59:22.319407   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.319418   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:22.319425   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:22.319482   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:22.358272   71929 cri.go:89] found id: ""
	I0717 01:59:22.358298   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.358307   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:22.358312   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:22.358362   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:22.395358   71929 cri.go:89] found id: ""
	I0717 01:59:22.395393   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.395405   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:22.395414   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:22.395477   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:18.802881   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:21.303532   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:21.433636   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:23.932345   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:21.279596   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:23.279700   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:25.280649   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:22.435158   71929 cri.go:89] found id: ""
	I0717 01:59:22.435184   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.435194   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:22.435201   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:22.435248   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:22.471553   71929 cri.go:89] found id: ""
	I0717 01:59:22.471588   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.471595   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:22.471604   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:22.471616   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:22.523133   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:22.523169   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:22.539212   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:22.539246   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:22.615707   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:22.615729   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:22.615744   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:22.696758   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:22.696795   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:25.238496   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:25.252882   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:25.252946   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:25.290173   71929 cri.go:89] found id: ""
	I0717 01:59:25.290197   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.290205   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:25.290210   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:25.290263   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:25.325926   71929 cri.go:89] found id: ""
	I0717 01:59:25.325968   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.325979   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:25.325985   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:25.326032   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:25.358310   71929 cri.go:89] found id: ""
	I0717 01:59:25.358362   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.358371   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:25.358377   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:25.358426   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:25.393575   71929 cri.go:89] found id: ""
	I0717 01:59:25.393605   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.393615   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:25.393622   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:25.393684   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:25.429357   71929 cri.go:89] found id: ""
	I0717 01:59:25.429448   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.429466   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:25.429474   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:25.429546   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:25.466992   71929 cri.go:89] found id: ""
	I0717 01:59:25.467020   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.467028   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:25.467034   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:25.467080   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:25.503545   71929 cri.go:89] found id: ""
	I0717 01:59:25.503575   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.503587   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:25.503594   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:25.503643   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:25.542957   71929 cri.go:89] found id: ""
	I0717 01:59:25.542983   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.542993   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:25.543003   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:25.543015   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:25.598813   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:25.598852   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:25.618060   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:25.618098   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:25.690079   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:25.690105   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:25.690119   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:25.765956   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:25.765994   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:23.803366   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:25.804525   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:25.932447   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:27.933276   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:29.933461   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:27.286160   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:29.781318   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:28.311715   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:28.325493   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:28.325554   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:28.365783   71929 cri.go:89] found id: ""
	I0717 01:59:28.365810   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.365821   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:28.365829   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:28.365885   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:28.401847   71929 cri.go:89] found id: ""
	I0717 01:59:28.401875   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.401883   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:28.401895   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:28.401954   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:28.442236   71929 cri.go:89] found id: ""
	I0717 01:59:28.442261   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.442272   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:28.442278   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:28.442340   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:28.476832   71929 cri.go:89] found id: ""
	I0717 01:59:28.476857   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.476866   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:28.476873   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:28.476928   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:28.512040   71929 cri.go:89] found id: ""
	I0717 01:59:28.512068   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.512075   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:28.512081   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:28.512136   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:28.547516   71929 cri.go:89] found id: ""
	I0717 01:59:28.547547   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.547558   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:28.547566   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:28.547625   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:28.580380   71929 cri.go:89] found id: ""
	I0717 01:59:28.580406   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.580417   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:28.580427   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:28.580485   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:28.616029   71929 cri.go:89] found id: ""
	I0717 01:59:28.616059   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.616069   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:28.616080   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:28.616095   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:28.670188   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:28.670230   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:28.687315   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:28.687355   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:28.763591   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:28.763612   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:28.763627   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:28.848925   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:28.848959   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:31.388294   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:31.404748   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:31.404814   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:31.446437   71929 cri.go:89] found id: ""
	I0717 01:59:31.446468   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.446478   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:31.446484   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:31.446531   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:31.487797   71929 cri.go:89] found id: ""
	I0717 01:59:31.487828   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.487840   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:31.487847   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:31.487895   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:31.525327   71929 cri.go:89] found id: ""
	I0717 01:59:31.525354   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.525368   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:31.525375   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:31.525436   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:31.564106   71929 cri.go:89] found id: ""
	I0717 01:59:31.564154   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.564166   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:31.564173   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:31.564234   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:31.603345   71929 cri.go:89] found id: ""
	I0717 01:59:31.603374   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.603385   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:31.603393   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:31.603456   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:31.641727   71929 cri.go:89] found id: ""
	I0717 01:59:31.641753   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.641769   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:31.641776   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:31.641837   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:31.680825   71929 cri.go:89] found id: ""
	I0717 01:59:31.680856   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.680866   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:31.680873   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:31.680930   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:31.714325   71929 cri.go:89] found id: ""
	I0717 01:59:31.714348   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.714355   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:31.714363   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:31.714374   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:31.765899   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:31.765934   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:31.781417   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:31.781447   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:31.857586   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:31.857607   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:31.857622   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:31.937171   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:31.937197   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:28.304014   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:30.802684   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:32.803604   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:31.933945   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:34.435259   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:31.785641   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:34.279814   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:34.478176   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:34.492153   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:34.492223   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:34.526959   71929 cri.go:89] found id: ""
	I0717 01:59:34.526984   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.526998   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:34.527006   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:34.527064   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:34.564485   71929 cri.go:89] found id: ""
	I0717 01:59:34.564534   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.564546   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:34.564591   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:34.564706   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:34.604611   71929 cri.go:89] found id: ""
	I0717 01:59:34.604637   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.604649   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:34.604657   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:34.604718   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:34.640851   71929 cri.go:89] found id: ""
	I0717 01:59:34.640882   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.640892   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:34.640897   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:34.640956   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:34.675828   71929 cri.go:89] found id: ""
	I0717 01:59:34.675856   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.675863   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:34.675869   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:34.675918   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:34.710468   71929 cri.go:89] found id: ""
	I0717 01:59:34.710496   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.710506   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:34.710514   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:34.710595   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:34.749218   71929 cri.go:89] found id: ""
	I0717 01:59:34.749249   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.749260   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:34.749267   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:34.749328   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:34.784934   71929 cri.go:89] found id: ""
	I0717 01:59:34.784969   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.784979   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:34.784990   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:34.785006   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:34.799836   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:34.799870   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:34.870218   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:34.870239   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:34.870254   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:34.948782   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:34.948817   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:34.992295   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:34.992324   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:34.803649   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:37.304530   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:36.933199   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:39.432504   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:36.280185   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:38.280499   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:37.545759   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:37.559648   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:37.559724   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:37.596642   71929 cri.go:89] found id: ""
	I0717 01:59:37.596696   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.596707   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:37.596715   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:37.596770   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:37.637251   71929 cri.go:89] found id: ""
	I0717 01:59:37.637283   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.637312   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:37.637318   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:37.637372   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:37.672811   71929 cri.go:89] found id: ""
	I0717 01:59:37.672839   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.672847   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:37.672852   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:37.672909   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:37.706864   71929 cri.go:89] found id: ""
	I0717 01:59:37.706904   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.706916   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:37.706923   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:37.706975   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:37.747539   71929 cri.go:89] found id: ""
	I0717 01:59:37.747567   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.747576   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:37.747581   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:37.747630   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:37.785229   71929 cri.go:89] found id: ""
	I0717 01:59:37.785251   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.785260   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:37.785268   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:37.785333   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:37.840428   71929 cri.go:89] found id: ""
	I0717 01:59:37.840460   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.840471   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:37.840477   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:37.840533   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:37.876888   71929 cri.go:89] found id: ""
	I0717 01:59:37.876916   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.876924   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:37.876932   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:37.876942   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:37.926161   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:37.926197   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:37.940857   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:37.940885   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:38.019210   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:38.019232   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:38.019245   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:38.112428   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:38.112471   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:40.657215   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:40.670824   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:40.670900   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:40.704008   71929 cri.go:89] found id: ""
	I0717 01:59:40.704030   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.704040   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:40.704048   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:40.704102   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:40.739544   71929 cri.go:89] found id: ""
	I0717 01:59:40.739576   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.739587   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:40.739595   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:40.739664   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:40.773132   71929 cri.go:89] found id: ""
	I0717 01:59:40.773159   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.773169   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:40.773177   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:40.773239   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:40.810162   71929 cri.go:89] found id: ""
	I0717 01:59:40.810183   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.810193   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:40.810200   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:40.810256   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:40.844797   71929 cri.go:89] found id: ""
	I0717 01:59:40.844829   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.844840   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:40.844847   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:40.844918   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:40.884444   71929 cri.go:89] found id: ""
	I0717 01:59:40.884468   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.884476   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:40.884482   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:40.884544   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:40.919413   71929 cri.go:89] found id: ""
	I0717 01:59:40.919437   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.919445   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:40.919451   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:40.919505   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:40.961870   71929 cri.go:89] found id: ""
	I0717 01:59:40.961894   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.961902   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:40.961910   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:40.961921   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:41.010600   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:41.010638   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:41.025557   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:41.025589   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:41.100100   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:41.100123   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:41.100135   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:41.185809   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:41.185850   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:39.802297   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:41.802803   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:41.432998   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:43.433412   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:40.779796   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:42.781981   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:43.279014   71522 pod_ready.go:81] duration metric: took 4m0.006085275s for pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace to be "Ready" ...
	E0717 01:59:43.279043   71522 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 01:59:43.279053   71522 pod_ready.go:38] duration metric: took 4m2.008175999s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:59:43.279073   71522 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:59:43.279105   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:43.279162   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:43.327674   71522 cri.go:89] found id: "3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:43.327725   71522 cri.go:89] found id: ""
	I0717 01:59:43.327734   71522 logs.go:276] 1 containers: [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82]
	I0717 01:59:43.327801   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.332247   71522 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:43.332303   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:43.371598   71522 cri.go:89] found id: "5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:43.371627   71522 cri.go:89] found id: ""
	I0717 01:59:43.371635   71522 logs.go:276] 1 containers: [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9]
	I0717 01:59:43.371683   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.377203   71522 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:43.377265   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:43.416351   71522 cri.go:89] found id: "92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:43.416374   71522 cri.go:89] found id: "4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:43.416380   71522 cri.go:89] found id: ""
	I0717 01:59:43.416389   71522 logs.go:276] 2 containers: [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013]
	I0717 01:59:43.416448   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.420909   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.425228   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:43.425278   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:43.472117   71522 cri.go:89] found id: "1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:43.472139   71522 cri.go:89] found id: ""
	I0717 01:59:43.472147   71522 logs.go:276] 1 containers: [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8]
	I0717 01:59:43.472194   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.476632   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:43.476698   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:43.517337   71522 cri.go:89] found id: "6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:43.517360   71522 cri.go:89] found id: ""
	I0717 01:59:43.517369   71522 logs.go:276] 1 containers: [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a]
	I0717 01:59:43.517430   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.522437   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:43.522519   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:43.564511   71522 cri.go:89] found id: "e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:43.564530   71522 cri.go:89] found id: ""
	I0717 01:59:43.564537   71522 logs.go:276] 1 containers: [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3]
	I0717 01:59:43.564595   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.570357   71522 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:43.570440   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:43.615389   71522 cri.go:89] found id: ""
	I0717 01:59:43.615418   71522 logs.go:276] 0 containers: []
	W0717 01:59:43.615427   71522 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:43.615433   71522 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:59:43.615543   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:59:43.652739   71522 cri.go:89] found id: "e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:43.652764   71522 cri.go:89] found id: "abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:43.652769   71522 cri.go:89] found id: ""
	I0717 01:59:43.652777   71522 logs.go:276] 2 containers: [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299]
	I0717 01:59:43.652835   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.657323   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.661682   71522 logs.go:123] Gathering logs for kube-controller-manager [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3] ...
	I0717 01:59:43.661702   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:43.714396   71522 logs.go:123] Gathering logs for container status ...
	I0717 01:59:43.714434   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:43.761072   71522 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:43.761110   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:43.825934   71522 logs.go:123] Gathering logs for coredns [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7] ...
	I0717 01:59:43.825963   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:43.871287   71522 logs.go:123] Gathering logs for coredns [4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013] ...
	I0717 01:59:43.871316   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:43.907488   71522 logs.go:123] Gathering logs for kube-scheduler [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8] ...
	I0717 01:59:43.907517   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:43.949876   71522 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:43.949903   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:59:44.093084   71522 logs.go:123] Gathering logs for kube-apiserver [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82] ...
	I0717 01:59:44.093116   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:44.153161   71522 logs.go:123] Gathering logs for kube-proxy [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a] ...
	I0717 01:59:44.153206   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:44.197219   71522 logs.go:123] Gathering logs for storage-provisioner [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77] ...
	I0717 01:59:44.197249   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:44.242441   71522 logs.go:123] Gathering logs for storage-provisioner [abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299] ...
	I0717 01:59:44.242478   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:44.288622   71522 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:44.288646   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:44.839680   71522 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:44.839712   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:44.854119   71522 logs.go:123] Gathering logs for etcd [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9] ...
	I0717 01:59:44.854145   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:43.725542   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:43.739304   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:43.739379   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:43.776754   71929 cri.go:89] found id: ""
	I0717 01:59:43.776783   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.776795   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:43.776802   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:43.776863   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:43.819729   71929 cri.go:89] found id: ""
	I0717 01:59:43.819756   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.819767   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:43.819774   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:43.819828   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:43.860283   71929 cri.go:89] found id: ""
	I0717 01:59:43.860311   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.860322   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:43.860329   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:43.860391   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:43.898684   71929 cri.go:89] found id: ""
	I0717 01:59:43.898712   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.898722   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:43.898729   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:43.898788   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:43.942996   71929 cri.go:89] found id: ""
	I0717 01:59:43.943019   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.943026   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:43.943031   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:43.943077   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:43.981799   71929 cri.go:89] found id: ""
	I0717 01:59:43.981828   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.981839   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:43.981846   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:43.981903   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:44.018222   71929 cri.go:89] found id: ""
	I0717 01:59:44.018252   71929 logs.go:276] 0 containers: []
	W0717 01:59:44.018262   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:44.018267   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:44.018326   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:44.056264   71929 cri.go:89] found id: ""
	I0717 01:59:44.056293   71929 logs.go:276] 0 containers: []
	W0717 01:59:44.056304   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:44.056315   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:44.056334   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:44.172061   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:44.172108   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:44.219597   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:44.219627   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:44.272299   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:44.272334   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:44.287811   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:44.287848   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:44.379183   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:46.879529   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:46.893142   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:46.893207   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:46.929073   71929 cri.go:89] found id: ""
	I0717 01:59:46.929101   71929 logs.go:276] 0 containers: []
	W0717 01:59:46.929113   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:46.929121   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:46.929173   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:46.963697   71929 cri.go:89] found id: ""
	I0717 01:59:46.963725   71929 logs.go:276] 0 containers: []
	W0717 01:59:46.963733   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:46.963739   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:46.963798   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:47.000697   71929 cri.go:89] found id: ""
	I0717 01:59:47.000730   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.000747   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:47.000752   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:47.000804   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:47.037270   71929 cri.go:89] found id: ""
	I0717 01:59:47.037304   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.037316   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:47.037323   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:47.037382   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:47.072210   71929 cri.go:89] found id: ""
	I0717 01:59:47.072238   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.072249   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:47.072256   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:47.072321   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:47.108404   71929 cri.go:89] found id: ""
	I0717 01:59:47.108432   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.108443   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:47.108451   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:47.108535   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:47.146122   71929 cri.go:89] found id: ""
	I0717 01:59:47.146151   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.146162   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:47.146169   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:47.146225   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:47.187418   71929 cri.go:89] found id: ""
	I0717 01:59:47.187446   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.187455   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:47.187466   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:47.187481   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:47.201023   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:47.201053   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:47.269851   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:47.269878   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:47.269894   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:47.356417   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:47.356456   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:43.803326   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:46.302939   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:45.433688   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:47.933271   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:49.934222   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:47.403005   71522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:47.420984   71522 api_server.go:72] duration metric: took 4m13.369710312s to wait for apiserver process to appear ...
	I0717 01:59:47.421011   71522 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:59:47.421065   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:47.421128   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:47.465800   71522 cri.go:89] found id: "3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:47.465830   71522 cri.go:89] found id: ""
	I0717 01:59:47.465838   71522 logs.go:276] 1 containers: [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82]
	I0717 01:59:47.465890   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.470561   71522 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:47.470628   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:47.513302   71522 cri.go:89] found id: "5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:47.513321   71522 cri.go:89] found id: ""
	I0717 01:59:47.513328   71522 logs.go:276] 1 containers: [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9]
	I0717 01:59:47.513373   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.517668   71522 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:47.517720   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:47.563466   71522 cri.go:89] found id: "92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:47.563491   71522 cri.go:89] found id: "4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:47.563495   71522 cri.go:89] found id: ""
	I0717 01:59:47.563502   71522 logs.go:276] 2 containers: [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013]
	I0717 01:59:47.563563   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.568058   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.572381   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:47.572432   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:47.618919   71522 cri.go:89] found id: "1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:47.618944   71522 cri.go:89] found id: ""
	I0717 01:59:47.618953   71522 logs.go:276] 1 containers: [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8]
	I0717 01:59:47.619014   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.623475   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:47.623525   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:47.662294   71522 cri.go:89] found id: "6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:47.662321   71522 cri.go:89] found id: ""
	I0717 01:59:47.662329   71522 logs.go:276] 1 containers: [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a]
	I0717 01:59:47.662384   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.666740   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:47.666806   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:47.708962   71522 cri.go:89] found id: "e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:47.708990   71522 cri.go:89] found id: ""
	I0717 01:59:47.708999   71522 logs.go:276] 1 containers: [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3]
	I0717 01:59:47.709058   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.713551   71522 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:47.713628   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:47.750766   71522 cri.go:89] found id: ""
	I0717 01:59:47.750797   71522 logs.go:276] 0 containers: []
	W0717 01:59:47.750807   71522 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:47.750814   71522 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:59:47.750878   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:59:47.786664   71522 cri.go:89] found id: "e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:47.786687   71522 cri.go:89] found id: "abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:47.786692   71522 cri.go:89] found id: ""
	I0717 01:59:47.786699   71522 logs.go:276] 2 containers: [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299]
	I0717 01:59:47.786761   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.791460   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.795553   71522 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:47.795576   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:48.298229   71522 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:48.298271   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:48.313542   71522 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:48.313573   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:59:48.429625   71522 logs.go:123] Gathering logs for coredns [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7] ...
	I0717 01:59:48.429663   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:48.475651   71522 logs.go:123] Gathering logs for kube-proxy [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a] ...
	I0717 01:59:48.475677   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:48.514075   71522 logs.go:123] Gathering logs for coredns [4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013] ...
	I0717 01:59:48.514101   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:48.550152   71522 logs.go:123] Gathering logs for kube-scheduler [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8] ...
	I0717 01:59:48.550182   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:48.592743   71522 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:48.592771   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:48.652433   71522 logs.go:123] Gathering logs for storage-provisioner [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77] ...
	I0717 01:59:48.652464   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:48.699763   71522 logs.go:123] Gathering logs for storage-provisioner [abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299] ...
	I0717 01:59:48.699796   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:48.737467   71522 logs.go:123] Gathering logs for kube-apiserver [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82] ...
	I0717 01:59:48.737504   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:48.788389   71522 logs.go:123] Gathering logs for etcd [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9] ...
	I0717 01:59:48.788425   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:48.842323   71522 logs.go:123] Gathering logs for kube-controller-manager [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3] ...
	I0717 01:59:48.842357   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:48.900716   71522 logs.go:123] Gathering logs for container status ...
	I0717 01:59:48.900746   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:47.397763   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:47.397791   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:49.954670   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:49.968840   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:49.968898   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:50.003598   71929 cri.go:89] found id: ""
	I0717 01:59:50.003635   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.003646   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:50.003654   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:50.003714   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:50.040494   71929 cri.go:89] found id: ""
	I0717 01:59:50.040546   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.040558   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:50.040564   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:50.040624   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:50.074921   71929 cri.go:89] found id: ""
	I0717 01:59:50.074950   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.074959   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:50.074965   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:50.075015   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:50.117002   71929 cri.go:89] found id: ""
	I0717 01:59:50.117030   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.117041   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:50.117049   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:50.117106   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:50.163026   71929 cri.go:89] found id: ""
	I0717 01:59:50.163052   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.163063   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:50.163071   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:50.163129   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:50.197709   71929 cri.go:89] found id: ""
	I0717 01:59:50.197738   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.197749   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:50.197757   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:50.197838   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:50.237776   71929 cri.go:89] found id: ""
	I0717 01:59:50.237808   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.237819   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:50.237827   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:50.237886   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:50.275147   71929 cri.go:89] found id: ""
	I0717 01:59:50.275179   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.275189   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:50.275201   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:50.275215   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:50.329025   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:50.329057   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:50.342745   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:50.342777   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:50.417792   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:50.417817   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:50.417829   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:50.495288   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:50.495322   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:48.306102   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:50.804255   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:52.433248   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:54.931595   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:51.447495   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:59:51.452186   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 200:
	ok
	I0717 01:59:51.453112   71522 api_server.go:141] control plane version: v1.30.2
	I0717 01:59:51.453137   71522 api_server.go:131] duration metric: took 4.032118004s to wait for apiserver health ...
	I0717 01:59:51.453146   71522 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:59:51.453170   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:51.453215   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:51.491272   71522 cri.go:89] found id: "3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:51.491297   71522 cri.go:89] found id: ""
	I0717 01:59:51.491305   71522 logs.go:276] 1 containers: [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82]
	I0717 01:59:51.491365   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.495747   71522 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:51.495795   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:51.538807   71522 cri.go:89] found id: "5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:51.538830   71522 cri.go:89] found id: ""
	I0717 01:59:51.538838   71522 logs.go:276] 1 containers: [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9]
	I0717 01:59:51.538891   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.543454   71522 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:51.543512   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:51.586258   71522 cri.go:89] found id: "92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:51.586292   71522 cri.go:89] found id: "4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:51.586296   71522 cri.go:89] found id: ""
	I0717 01:59:51.586306   71522 logs.go:276] 2 containers: [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013]
	I0717 01:59:51.586360   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.590446   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.594867   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:51.594936   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:51.636079   71522 cri.go:89] found id: "1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:51.636101   71522 cri.go:89] found id: ""
	I0717 01:59:51.636108   71522 logs.go:276] 1 containers: [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8]
	I0717 01:59:51.636159   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.640225   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:51.640283   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:51.676395   71522 cri.go:89] found id: "6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:51.676422   71522 cri.go:89] found id: ""
	I0717 01:59:51.676432   71522 logs.go:276] 1 containers: [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a]
	I0717 01:59:51.676496   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.680974   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:51.681043   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:51.720449   71522 cri.go:89] found id: "e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:51.720476   71522 cri.go:89] found id: ""
	I0717 01:59:51.720483   71522 logs.go:276] 1 containers: [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3]
	I0717 01:59:51.720527   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.724704   71522 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:51.724779   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:51.762892   71522 cri.go:89] found id: ""
	I0717 01:59:51.762923   71522 logs.go:276] 0 containers: []
	W0717 01:59:51.762932   71522 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:51.762939   71522 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:59:51.762986   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:59:51.803675   71522 cri.go:89] found id: "e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:51.803702   71522 cri.go:89] found id: "abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:51.803707   71522 cri.go:89] found id: ""
	I0717 01:59:51.803714   71522 logs.go:276] 2 containers: [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299]
	I0717 01:59:51.803807   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.808188   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.812046   71522 logs.go:123] Gathering logs for container status ...
	I0717 01:59:51.812065   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:51.855800   71522 logs.go:123] Gathering logs for kube-controller-manager [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3] ...
	I0717 01:59:51.855832   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:51.917804   71522 logs.go:123] Gathering logs for storage-provisioner [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77] ...
	I0717 01:59:51.917833   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:51.958797   71522 logs.go:123] Gathering logs for storage-provisioner [abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299] ...
	I0717 01:59:51.958827   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:51.997003   71522 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:51.997034   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:59:52.118345   71522 logs.go:123] Gathering logs for kube-apiserver [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82] ...
	I0717 01:59:52.118381   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:52.174308   71522 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:52.174344   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:52.578823   71522 logs.go:123] Gathering logs for coredns [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7] ...
	I0717 01:59:52.578857   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:52.619962   71522 logs.go:123] Gathering logs for coredns [4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013] ...
	I0717 01:59:52.619994   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:52.667564   71522 logs.go:123] Gathering logs for kube-proxy [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a] ...
	I0717 01:59:52.667593   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:52.714716   71522 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:52.714747   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:52.774123   71522 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:52.774171   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:52.788399   71522 logs.go:123] Gathering logs for etcd [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9] ...
	I0717 01:59:52.788432   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:52.839796   71522 logs.go:123] Gathering logs for kube-scheduler [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8] ...
	I0717 01:59:52.839828   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:55.388404   71522 system_pods.go:59] 9 kube-system pods found
	I0717 01:59:55.388441   71522 system_pods.go:61] "coredns-7db6d8ff4d-9w26c" [530f4d52-5fdc-47c4-8919-44430bf71e05] Running
	I0717 01:59:55.388448   71522 system_pods.go:61] "coredns-7db6d8ff4d-js7sn" [fe3951c5-d98d-4221-b71c-fc4f548b31d8] Running
	I0717 01:59:55.388453   71522 system_pods.go:61] "etcd-default-k8s-diff-port-738184" [a08737dd-a140-4c63-bf0f-9d3527d49de0] Running
	I0717 01:59:55.388458   71522 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-738184" [b24a4ca2-48f9-4603-b6e4-e3fb1ca58e40] Running
	I0717 01:59:55.388465   71522 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-738184" [dd62e27b-a3f1-4da6-b968-6be40743a8fd] Running
	I0717 01:59:55.388469   71522 system_pods.go:61] "kube-proxy-c4n94" [97eee4e8-4f36-412f-9064-57515ab6e932] Running
	I0717 01:59:55.388473   71522 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-738184" [eb87eec9-7533-4704-bd5a-7075d9d8c2f5] Running
	I0717 01:59:55.388484   71522 system_pods.go:61] "metrics-server-569cc877fc-gcjkt" [1859140e-a901-43c2-8c04-b4f8eb63e774] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:59:55.388491   71522 system_pods.go:61] "storage-provisioner" [b36904ec-ef3f-4aee-9276-fe1285e10876] Running
	I0717 01:59:55.388499   71522 system_pods.go:74] duration metric: took 3.93534618s to wait for pod list to return data ...
	I0717 01:59:55.388509   71522 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:59:55.390798   71522 default_sa.go:45] found service account: "default"
	I0717 01:59:55.390829   71522 default_sa.go:55] duration metric: took 2.313714ms for default service account to be created ...
	I0717 01:59:55.390840   71522 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:59:55.399028   71522 system_pods.go:86] 9 kube-system pods found
	I0717 01:59:55.399049   71522 system_pods.go:89] "coredns-7db6d8ff4d-9w26c" [530f4d52-5fdc-47c4-8919-44430bf71e05] Running
	I0717 01:59:55.399054   71522 system_pods.go:89] "coredns-7db6d8ff4d-js7sn" [fe3951c5-d98d-4221-b71c-fc4f548b31d8] Running
	I0717 01:59:55.399059   71522 system_pods.go:89] "etcd-default-k8s-diff-port-738184" [a08737dd-a140-4c63-bf0f-9d3527d49de0] Running
	I0717 01:59:55.399063   71522 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-738184" [b24a4ca2-48f9-4603-b6e4-e3fb1ca58e40] Running
	I0717 01:59:55.399068   71522 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-738184" [dd62e27b-a3f1-4da6-b968-6be40743a8fd] Running
	I0717 01:59:55.399072   71522 system_pods.go:89] "kube-proxy-c4n94" [97eee4e8-4f36-412f-9064-57515ab6e932] Running
	I0717 01:59:55.399076   71522 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-738184" [eb87eec9-7533-4704-bd5a-7075d9d8c2f5] Running
	I0717 01:59:55.399083   71522 system_pods.go:89] "metrics-server-569cc877fc-gcjkt" [1859140e-a901-43c2-8c04-b4f8eb63e774] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:59:55.399090   71522 system_pods.go:89] "storage-provisioner" [b36904ec-ef3f-4aee-9276-fe1285e10876] Running
	I0717 01:59:55.399099   71522 system_pods.go:126] duration metric: took 8.253468ms to wait for k8s-apps to be running ...
	I0717 01:59:55.399108   71522 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:59:55.399152   71522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:59:55.417081   71522 system_svc.go:56] duration metric: took 17.965716ms WaitForService to wait for kubelet
	I0717 01:59:55.417109   71522 kubeadm.go:582] duration metric: took 4m21.36584166s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:59:55.417130   71522 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:59:55.420078   71522 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:59:55.420099   71522 node_conditions.go:123] node cpu capacity is 2
	I0717 01:59:55.420109   71522 node_conditions.go:105] duration metric: took 2.974324ms to run NodePressure ...
	I0717 01:59:55.420119   71522 start.go:241] waiting for startup goroutines ...
	I0717 01:59:55.420126   71522 start.go:246] waiting for cluster config update ...
	I0717 01:59:55.420136   71522 start.go:255] writing updated cluster config ...
	I0717 01:59:55.420406   71522 ssh_runner.go:195] Run: rm -f paused
	I0717 01:59:55.470793   71522 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 01:59:55.472960   71522 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-738184" cluster and "default" namespace by default
	I0717 01:59:53.036151   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:53.049820   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:53.049879   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:53.087144   71929 cri.go:89] found id: ""
	I0717 01:59:53.087175   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.087189   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:53.087195   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:53.087253   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:53.123135   71929 cri.go:89] found id: ""
	I0717 01:59:53.123164   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.123175   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:53.123191   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:53.123254   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:53.157887   71929 cri.go:89] found id: ""
	I0717 01:59:53.157912   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.157922   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:53.157927   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:53.158004   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:53.201002   71929 cri.go:89] found id: ""
	I0717 01:59:53.201033   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.201045   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:53.201054   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:53.201115   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:53.236159   71929 cri.go:89] found id: ""
	I0717 01:59:53.236188   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.236198   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:53.236204   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:53.236258   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:53.277585   71929 cri.go:89] found id: ""
	I0717 01:59:53.277616   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.277627   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:53.277634   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:53.277694   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:53.322722   71929 cri.go:89] found id: ""
	I0717 01:59:53.322747   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.322758   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:53.322765   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:53.322824   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:53.364112   71929 cri.go:89] found id: ""
	I0717 01:59:53.364138   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.364149   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:53.364159   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:53.364172   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:53.418701   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:53.418739   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:53.435004   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:53.435030   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:53.511254   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:53.511274   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:53.511287   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:53.587967   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:53.588003   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:56.130773   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:56.144742   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:56.144811   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:56.180267   71929 cri.go:89] found id: ""
	I0717 01:59:56.180295   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.180306   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:56.180313   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:56.180373   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:56.217223   71929 cri.go:89] found id: ""
	I0717 01:59:56.217252   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.217263   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:56.217269   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:56.217334   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:56.251714   71929 cri.go:89] found id: ""
	I0717 01:59:56.251738   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.251745   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:56.251752   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:56.251805   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:56.292557   71929 cri.go:89] found id: ""
	I0717 01:59:56.292589   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.292597   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:56.292603   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:56.292653   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:56.332463   71929 cri.go:89] found id: ""
	I0717 01:59:56.332491   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.332501   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:56.332508   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:56.332562   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:56.372155   71929 cri.go:89] found id: ""
	I0717 01:59:56.372180   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.372189   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:56.372197   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:56.372255   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:56.415768   71929 cri.go:89] found id: ""
	I0717 01:59:56.415794   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.415806   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:56.415813   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:56.415871   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:56.456920   71929 cri.go:89] found id: ""
	I0717 01:59:56.456951   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.456959   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:56.456968   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:56.456978   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:56.508932   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:56.508965   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:56.522496   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:56.522531   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:56.596839   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:56.596857   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:56.596870   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:56.679237   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:56.679271   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:53.303565   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:55.803725   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:57.806129   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:56.933245   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:59.432536   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:59.220084   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:59.233108   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:59.233182   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:59.266796   71929 cri.go:89] found id: ""
	I0717 01:59:59.266827   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.266838   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:59.266845   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:59.266909   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:59.297992   71929 cri.go:89] found id: ""
	I0717 01:59:59.298017   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.298026   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:59.298032   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:59.298087   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:59.331953   71929 cri.go:89] found id: ""
	I0717 01:59:59.331982   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.331993   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:59.331999   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:59.332069   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:59.368912   71929 cri.go:89] found id: ""
	I0717 01:59:59.368939   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.368948   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:59.368954   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:59.369002   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:59.402886   71929 cri.go:89] found id: ""
	I0717 01:59:59.402911   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.402920   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:59.402926   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:59.402982   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:59.441227   71929 cri.go:89] found id: ""
	I0717 01:59:59.441249   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.441257   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:59.441263   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:59.441322   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:59.479154   71929 cri.go:89] found id: ""
	I0717 01:59:59.479191   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.479213   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:59.479222   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:59.479286   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:59.516259   71929 cri.go:89] found id: ""
	I0717 01:59:59.516299   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.516309   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:59.516319   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:59.516332   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:59.596352   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:59.596385   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:59.639712   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:59.639744   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:59.691399   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:59.691444   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:59.706618   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:59.706648   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:59.778875   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:00:02.279246   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:02.293212   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:02.293284   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:02.330759   71929 cri.go:89] found id: ""
	I0717 02:00:02.330786   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.330795   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 02:00:02.330800   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:02.330848   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:02.366257   71929 cri.go:89] found id: ""
	I0717 02:00:02.366287   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.366298   71929 logs.go:278] No container was found matching "etcd"
	I0717 02:00:02.366305   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:02.366368   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:00.303868   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:02.311063   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:01.432671   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:03.433059   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:02.404321   71929 cri.go:89] found id: ""
	I0717 02:00:02.404348   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.404358   71929 logs.go:278] No container was found matching "coredns"
	I0717 02:00:02.404364   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:02.404432   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:02.444297   71929 cri.go:89] found id: ""
	I0717 02:00:02.444326   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.444342   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 02:00:02.444349   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:02.444406   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:02.478433   71929 cri.go:89] found id: ""
	I0717 02:00:02.478466   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.478477   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 02:00:02.478483   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:02.478530   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:02.515519   71929 cri.go:89] found id: ""
	I0717 02:00:02.515551   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.515560   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 02:00:02.515566   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:02.515618   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:02.551006   71929 cri.go:89] found id: ""
	I0717 02:00:02.551030   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.551038   71929 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:02.551044   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 02:00:02.551110   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 02:00:02.588312   71929 cri.go:89] found id: ""
	I0717 02:00:02.588345   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.588356   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 02:00:02.588367   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:02.588381   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:02.641900   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:02.641932   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:02.656851   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:02.656896   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 02:00:02.728286   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:00:02.728315   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:02.728327   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:02.806807   71929 logs.go:123] Gathering logs for container status ...
	I0717 02:00:02.806847   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:05.355196   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:05.369148   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:05.369231   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:05.405012   71929 cri.go:89] found id: ""
	I0717 02:00:05.405045   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.405057   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 02:00:05.405068   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:05.405132   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:05.450524   71929 cri.go:89] found id: ""
	I0717 02:00:05.450564   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.450575   71929 logs.go:278] No container was found matching "etcd"
	I0717 02:00:05.450582   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:05.450637   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:05.487503   71929 cri.go:89] found id: ""
	I0717 02:00:05.487533   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.487544   71929 logs.go:278] No container was found matching "coredns"
	I0717 02:00:05.487553   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:05.487634   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:05.522607   71929 cri.go:89] found id: ""
	I0717 02:00:05.522635   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.522650   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 02:00:05.522656   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:05.522703   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:05.558091   71929 cri.go:89] found id: ""
	I0717 02:00:05.558120   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.558131   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 02:00:05.558138   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:05.558192   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:05.594540   71929 cri.go:89] found id: ""
	I0717 02:00:05.594587   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.594598   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 02:00:05.594605   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:05.594668   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:05.631783   71929 cri.go:89] found id: ""
	I0717 02:00:05.631807   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.631818   71929 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:05.631825   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 02:00:05.631886   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 02:00:05.667494   71929 cri.go:89] found id: ""
	I0717 02:00:05.667523   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.667532   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 02:00:05.667543   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:05.667559   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:05.681348   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:05.681373   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 02:00:05.747143   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:00:05.747165   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:05.747176   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:05.829639   71929 logs.go:123] Gathering logs for container status ...
	I0717 02:00:05.829674   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:05.881984   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:05.882013   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:04.803913   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:07.302068   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:05.434869   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:07.435174   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:09.931879   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:08.435873   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:08.449840   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:08.449901   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:08.489613   71929 cri.go:89] found id: ""
	I0717 02:00:08.489663   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.489675   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 02:00:08.489684   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:08.489751   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:08.526604   71929 cri.go:89] found id: ""
	I0717 02:00:08.526635   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.526645   71929 logs.go:278] No container was found matching "etcd"
	I0717 02:00:08.526660   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:08.526717   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:08.563202   71929 cri.go:89] found id: ""
	I0717 02:00:08.563227   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.563234   71929 logs.go:278] No container was found matching "coredns"
	I0717 02:00:08.563240   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:08.563299   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:08.598336   71929 cri.go:89] found id: ""
	I0717 02:00:08.598365   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.598376   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 02:00:08.598383   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:08.598441   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:08.632626   71929 cri.go:89] found id: ""
	I0717 02:00:08.632660   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.632671   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 02:00:08.632678   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:08.632739   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:08.667951   71929 cri.go:89] found id: ""
	I0717 02:00:08.667977   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.667993   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 02:00:08.668001   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:08.668059   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:08.702106   71929 cri.go:89] found id: ""
	I0717 02:00:08.702135   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.702146   71929 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:08.702153   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 02:00:08.702212   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 02:00:08.733469   71929 cri.go:89] found id: ""
	I0717 02:00:08.733491   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.733499   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 02:00:08.733508   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:08.733518   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:08.787930   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:08.787966   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:08.802761   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:08.802795   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 02:00:08.878115   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:00:08.878138   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:08.878149   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:08.962509   71929 logs.go:123] Gathering logs for container status ...
	I0717 02:00:08.962543   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:11.503151   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:11.518019   71929 kubeadm.go:597] duration metric: took 4m3.576613508s to restartPrimaryControlPlane
	W0717 02:00:11.518087   71929 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 02:00:11.518113   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 02:00:11.970514   71929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:00:11.986794   71929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 02:00:11.997382   71929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 02:00:12.006789   71929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 02:00:12.006816   71929 kubeadm.go:157] found existing configuration files:
	
	I0717 02:00:12.006867   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 02:00:12.015864   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 02:00:12.015921   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 02:00:12.025239   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 02:00:12.034315   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 02:00:12.034373   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 02:00:12.043533   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 02:00:12.052344   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 02:00:12.052393   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 02:00:12.061290   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 02:00:12.070311   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 02:00:12.070375   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 02:00:12.080404   71929 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 02:00:12.318084   71929 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 02:00:09.303502   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:11.803893   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:11.933539   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:14.433949   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:13.804007   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:16.303079   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:16.932416   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:18.932721   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:18.303306   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:20.306811   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:22.803374   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:21.433157   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:23.433283   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:24.805822   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:27.301985   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:25.931740   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:27.934346   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:29.302199   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:31.302607   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:30.433033   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:32.434743   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:34.933166   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:33.802140   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:35.803338   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:36.933672   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:39.432879   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:38.302050   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:40.803322   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:41.932491   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:44.436201   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:43.302028   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:45.801979   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:47.303644   71146 pod_ready.go:81] duration metric: took 4m0.007411484s for pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace to be "Ready" ...
	E0717 02:00:47.303668   71146 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 02:00:47.303678   71146 pod_ready.go:38] duration metric: took 4m7.053721739s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 02:00:47.303694   71146 api_server.go:52] waiting for apiserver process to appear ...
	I0717 02:00:47.303725   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:47.303791   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:47.365247   71146 cri.go:89] found id: "ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:47.365272   71146 cri.go:89] found id: ""
	I0717 02:00:47.365279   71146 logs.go:276] 1 containers: [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509]
	I0717 02:00:47.365339   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.370201   71146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:47.370268   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:47.416627   71146 cri.go:89] found id: "b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:47.416652   71146 cri.go:89] found id: ""
	I0717 02:00:47.416663   71146 logs.go:276] 1 containers: [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787]
	I0717 02:00:47.416731   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.421295   71146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:47.421454   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:47.463532   71146 cri.go:89] found id: "110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:47.463556   71146 cri.go:89] found id: ""
	I0717 02:00:47.463564   71146 logs.go:276] 1 containers: [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783]
	I0717 02:00:47.463626   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.468291   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:47.468414   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:47.504328   71146 cri.go:89] found id: "211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:47.504354   71146 cri.go:89] found id: ""
	I0717 02:00:47.504362   71146 logs.go:276] 1 containers: [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745]
	I0717 02:00:47.504445   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.508821   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:47.508880   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:47.550970   71146 cri.go:89] found id: "0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:47.550996   71146 cri.go:89] found id: ""
	I0717 02:00:47.551006   71146 logs.go:276] 1 containers: [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de]
	I0717 02:00:47.551069   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.555974   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:47.556045   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:47.609884   71146 cri.go:89] found id: "5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:47.609903   71146 cri.go:89] found id: ""
	I0717 02:00:47.609910   71146 logs.go:276] 1 containers: [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060]
	I0717 02:00:47.609968   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.615544   71146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:47.615603   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:47.653071   71146 cri.go:89] found id: ""
	I0717 02:00:47.653099   71146 logs.go:276] 0 containers: []
	W0717 02:00:47.653110   71146 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:47.653117   71146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 02:00:47.653163   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 02:00:47.690462   71146 cri.go:89] found id: "7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:47.690485   71146 cri.go:89] found id: "51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:47.690490   71146 cri.go:89] found id: ""
	I0717 02:00:47.690498   71146 logs.go:276] 2 containers: [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20]
	I0717 02:00:47.690545   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.695196   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.699099   71146 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:47.699117   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 02:00:47.816750   71146 logs.go:123] Gathering logs for kube-apiserver [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509] ...
	I0717 02:00:47.816782   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:46.932764   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:49.432402   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:47.869306   71146 logs.go:123] Gathering logs for coredns [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783] ...
	I0717 02:00:47.869341   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:47.906717   71146 logs.go:123] Gathering logs for kube-proxy [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de] ...
	I0717 02:00:47.906755   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:47.944125   71146 logs.go:123] Gathering logs for storage-provisioner [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465] ...
	I0717 02:00:47.944152   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:47.978632   71146 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:47.978664   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:48.482628   71146 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:48.482660   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:48.538252   71146 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:48.538300   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:48.553011   71146 logs.go:123] Gathering logs for kube-controller-manager [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060] ...
	I0717 02:00:48.553038   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:48.607632   71146 logs.go:123] Gathering logs for storage-provisioner [51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20] ...
	I0717 02:00:48.607666   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:48.646122   71146 logs.go:123] Gathering logs for container status ...
	I0717 02:00:48.646151   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:48.689948   71146 logs.go:123] Gathering logs for etcd [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787] ...
	I0717 02:00:48.689980   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:48.738285   71146 logs.go:123] Gathering logs for kube-scheduler [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745] ...
	I0717 02:00:48.738334   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:51.290996   71146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:51.308850   71146 api_server.go:72] duration metric: took 4m18.27461618s to wait for apiserver process to appear ...
	I0717 02:00:51.308873   71146 api_server.go:88] waiting for apiserver healthz status ...
	I0717 02:00:51.308907   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:51.308967   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:51.350827   71146 cri.go:89] found id: "ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:51.350857   71146 cri.go:89] found id: ""
	I0717 02:00:51.350866   71146 logs.go:276] 1 containers: [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509]
	I0717 02:00:51.350930   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.355308   71146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:51.355369   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:51.393804   71146 cri.go:89] found id: "b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:51.393831   71146 cri.go:89] found id: ""
	I0717 02:00:51.393840   71146 logs.go:276] 1 containers: [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787]
	I0717 02:00:51.393897   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.398144   71146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:51.398201   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:51.437974   71146 cri.go:89] found id: "110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:51.437991   71146 cri.go:89] found id: ""
	I0717 02:00:51.437998   71146 logs.go:276] 1 containers: [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783]
	I0717 02:00:51.438044   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.442318   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:51.442382   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:51.478462   71146 cri.go:89] found id: "211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:51.478481   71146 cri.go:89] found id: ""
	I0717 02:00:51.478489   71146 logs.go:276] 1 containers: [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745]
	I0717 02:00:51.478534   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.482624   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:51.482672   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:51.526089   71146 cri.go:89] found id: "0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:51.526114   71146 cri.go:89] found id: ""
	I0717 02:00:51.526123   71146 logs.go:276] 1 containers: [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de]
	I0717 02:00:51.526170   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.530855   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:51.530923   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:51.568875   71146 cri.go:89] found id: "5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:51.568899   71146 cri.go:89] found id: ""
	I0717 02:00:51.568908   71146 logs.go:276] 1 containers: [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060]
	I0717 02:00:51.568972   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.573300   71146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:51.573369   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:51.615775   71146 cri.go:89] found id: ""
	I0717 02:00:51.615800   71146 logs.go:276] 0 containers: []
	W0717 02:00:51.615809   71146 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:51.615815   71146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 02:00:51.615876   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 02:00:51.658100   71146 cri.go:89] found id: "7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:51.658124   71146 cri.go:89] found id: "51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:51.658130   71146 cri.go:89] found id: ""
	I0717 02:00:51.658138   71146 logs.go:276] 2 containers: [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20]
	I0717 02:00:51.658183   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.663030   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.667348   71146 logs.go:123] Gathering logs for kube-apiserver [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509] ...
	I0717 02:00:51.667372   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:51.715502   71146 logs.go:123] Gathering logs for etcd [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787] ...
	I0717 02:00:51.715534   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:51.763431   71146 logs.go:123] Gathering logs for kube-proxy [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de] ...
	I0717 02:00:51.763457   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:51.805523   71146 logs.go:123] Gathering logs for kube-controller-manager [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060] ...
	I0717 02:00:51.805553   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:51.859660   71146 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:51.859692   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 02:00:51.963831   71146 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:51.963858   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:51.978152   71146 logs.go:123] Gathering logs for coredns [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783] ...
	I0717 02:00:51.978179   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:52.023897   71146 logs.go:123] Gathering logs for kube-scheduler [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745] ...
	I0717 02:00:52.023926   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:52.062193   71146 logs.go:123] Gathering logs for storage-provisioner [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465] ...
	I0717 02:00:52.062218   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:52.098487   71146 logs.go:123] Gathering logs for storage-provisioner [51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20] ...
	I0717 02:00:52.098518   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:52.135733   71146 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:52.135758   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:52.562245   71146 logs.go:123] Gathering logs for container status ...
	I0717 02:00:52.562279   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:52.624258   71146 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:52.624288   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
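The "Gathering logs for ..." steps above all shell out to the same handful of node-side commands. A minimal sketch of collecting the same data by hand (run on the node, e.g. via minikube ssh against the affected profile; the container IDs are the ones listed above):

    sudo crictl ps -a                                        # container status
    sudo /usr/bin/crictl logs --tail 400 <container-id>      # per-component logs
    sudo journalctl -u crio -n 400                           # CRI-O
    sudo journalctl -u kubelet -n 400                        # kubelet
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400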
	I0717 02:00:51.434060   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:53.933730   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:55.176270   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 02:00:55.180760   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 200:
	ok
	I0717 02:00:55.181928   71146 api_server.go:141] control plane version: v1.30.2
	I0717 02:00:55.181947   71146 api_server.go:131] duration metric: took 3.873068874s to wait for apiserver health ...
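The healthz wait above amounts to polling the apiserver's /healthz path until it returns 200. Two equivalent manual checks, the second assuming anonymous access to /healthz (allowed by default via the system:public-info-viewer binding):

    sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz
    curl -sk https://192.168.72.225:8443/healthz    # assumes anonymous /healthz access; expected body: ok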
	I0717 02:00:55.181955   71146 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 02:00:55.181975   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:55.182017   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:55.218028   71146 cri.go:89] found id: "ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:55.218059   71146 cri.go:89] found id: ""
	I0717 02:00:55.218068   71146 logs.go:276] 1 containers: [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509]
	I0717 02:00:55.218125   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.222841   71146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:55.222911   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:55.265613   71146 cri.go:89] found id: "b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:55.265638   71146 cri.go:89] found id: ""
	I0717 02:00:55.265647   71146 logs.go:276] 1 containers: [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787]
	I0717 02:00:55.265699   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.269866   71146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:55.269923   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:55.306363   71146 cri.go:89] found id: "110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:55.306390   71146 cri.go:89] found id: ""
	I0717 02:00:55.306400   71146 logs.go:276] 1 containers: [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783]
	I0717 02:00:55.306457   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.310843   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:55.310901   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:55.354417   71146 cri.go:89] found id: "211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:55.354439   71146 cri.go:89] found id: ""
	I0717 02:00:55.354449   71146 logs.go:276] 1 containers: [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745]
	I0717 02:00:55.354503   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.358988   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:55.359038   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:55.396457   71146 cri.go:89] found id: "0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:55.396480   71146 cri.go:89] found id: ""
	I0717 02:00:55.396488   71146 logs.go:276] 1 containers: [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de]
	I0717 02:00:55.396532   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.401185   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:55.401244   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:55.438249   71146 cri.go:89] found id: "5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:55.438276   71146 cri.go:89] found id: ""
	I0717 02:00:55.438286   71146 logs.go:276] 1 containers: [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060]
	I0717 02:00:55.438344   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.442967   71146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:55.443048   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:55.484173   71146 cri.go:89] found id: ""
	I0717 02:00:55.484197   71146 logs.go:276] 0 containers: []
	W0717 02:00:55.484205   71146 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:55.484210   71146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 02:00:55.484288   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 02:00:55.525757   71146 cri.go:89] found id: "7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:55.525780   71146 cri.go:89] found id: "51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:55.525784   71146 cri.go:89] found id: ""
	I0717 02:00:55.525790   71146 logs.go:276] 2 containers: [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20]
	I0717 02:00:55.525842   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.530253   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.534253   71146 logs.go:123] Gathering logs for etcd [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787] ...
	I0717 02:00:55.534275   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:55.578993   71146 logs.go:123] Gathering logs for kube-proxy [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de] ...
	I0717 02:00:55.579018   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:55.622746   71146 logs.go:123] Gathering logs for storage-provisioner [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465] ...
	I0717 02:00:55.622771   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:55.660900   71146 logs.go:123] Gathering logs for storage-provisioner [51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20] ...
	I0717 02:00:55.660931   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:55.709803   71146 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:55.709833   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:56.092339   71146 logs.go:123] Gathering logs for kube-scheduler [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745] ...
	I0717 02:00:56.092390   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:56.130951   71146 logs.go:123] Gathering logs for kube-controller-manager [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060] ...
	I0717 02:00:56.130976   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:56.186113   71146 logs.go:123] Gathering logs for container status ...
	I0717 02:00:56.186152   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:56.229794   71146 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:56.229839   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:56.285798   71146 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:56.285845   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:56.300391   71146 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:56.300421   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 02:00:56.425621   71146 logs.go:123] Gathering logs for kube-apiserver [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509] ...
	I0717 02:00:56.425653   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:56.478853   71146 logs.go:123] Gathering logs for coredns [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783] ...
	I0717 02:00:56.478882   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:59.026000   71146 system_pods.go:59] 8 kube-system pods found
	I0717 02:00:59.026028   71146 system_pods.go:61] "coredns-7db6d8ff4d-wcw97" [0dd50538-f54d-43f1-bd8a-b9d3131c13f7] Running
	I0717 02:00:59.026033   71146 system_pods.go:61] "etcd-embed-certs-940222" [80a0a87b-4b27-4940-b86b-6fda4a9c5168] Running
	I0717 02:00:59.026036   71146 system_pods.go:61] "kube-apiserver-embed-certs-940222" [566417b4-efef-46b2-8826-e15b8559e35f] Running
	I0717 02:00:59.026039   71146 system_pods.go:61] "kube-controller-manager-embed-certs-940222" [0ec7f574-5ec3-401c-85a9-b9a9f6b7b979] Running
	I0717 02:00:59.026042   71146 system_pods.go:61] "kube-proxy-l58xk" [feae4e89-4900-4399-bd06-7d179280667d] Running
	I0717 02:00:59.026045   71146 system_pods.go:61] "kube-scheduler-embed-certs-940222" [d0e73061-6d3b-41fc-b2ed-e8c45e204d6a] Running
	I0717 02:00:59.026051   71146 system_pods.go:61] "metrics-server-569cc877fc-rhp7b" [07ffb1fa-240e-4c40-9ce4-93a1b51e179b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 02:00:59.026054   71146 system_pods.go:61] "storage-provisioner" [35aab5a5-6e1b-4572-aabe-a73fb1632252] Running
	I0717 02:00:59.026062   71146 system_pods.go:74] duration metric: took 3.844102201s to wait for pod list to return data ...
	I0717 02:00:59.026069   71146 default_sa.go:34] waiting for default service account to be created ...
	I0717 02:00:59.028810   71146 default_sa.go:45] found service account: "default"
	I0717 02:00:59.028831   71146 default_sa.go:55] duration metric: took 2.756364ms for default service account to be created ...
	I0717 02:00:59.028838   71146 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 02:00:59.036427   71146 system_pods.go:86] 8 kube-system pods found
	I0717 02:00:59.036457   71146 system_pods.go:89] "coredns-7db6d8ff4d-wcw97" [0dd50538-f54d-43f1-bd8a-b9d3131c13f7] Running
	I0717 02:00:59.036466   71146 system_pods.go:89] "etcd-embed-certs-940222" [80a0a87b-4b27-4940-b86b-6fda4a9c5168] Running
	I0717 02:00:59.036474   71146 system_pods.go:89] "kube-apiserver-embed-certs-940222" [566417b4-efef-46b2-8826-e15b8559e35f] Running
	I0717 02:00:59.036482   71146 system_pods.go:89] "kube-controller-manager-embed-certs-940222" [0ec7f574-5ec3-401c-85a9-b9a9f6b7b979] Running
	I0717 02:00:59.036489   71146 system_pods.go:89] "kube-proxy-l58xk" [feae4e89-4900-4399-bd06-7d179280667d] Running
	I0717 02:00:59.036499   71146 system_pods.go:89] "kube-scheduler-embed-certs-940222" [d0e73061-6d3b-41fc-b2ed-e8c45e204d6a] Running
	I0717 02:00:59.036509   71146 system_pods.go:89] "metrics-server-569cc877fc-rhp7b" [07ffb1fa-240e-4c40-9ce4-93a1b51e179b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 02:00:59.036519   71146 system_pods.go:89] "storage-provisioner" [35aab5a5-6e1b-4572-aabe-a73fb1632252] Running
	I0717 02:00:59.036532   71146 system_pods.go:126] duration metric: took 7.688074ms to wait for k8s-apps to be running ...
	I0717 02:00:59.036542   71146 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 02:00:59.036594   71146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:00:59.052023   71146 system_svc.go:56] duration metric: took 15.474441ms WaitForService to wait for kubelet
	I0717 02:00:59.052049   71146 kubeadm.go:582] duration metric: took 4m26.017816269s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 02:00:59.052073   71146 node_conditions.go:102] verifying NodePressure condition ...
	I0717 02:00:59.054763   71146 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 02:00:59.054784   71146 node_conditions.go:123] node cpu capacity is 2
	I0717 02:00:59.054795   71146 node_conditions.go:105] duration metric: took 2.714349ms to run NodePressure ...
	I0717 02:00:59.054805   71146 start.go:241] waiting for startup goroutines ...
	I0717 02:00:59.054811   71146 start.go:246] waiting for cluster config update ...
	I0717 02:00:59.054824   71146 start.go:255] writing updated cluster config ...
	I0717 02:00:59.055069   71146 ssh_runner.go:195] Run: rm -f paused
	I0717 02:00:59.101243   71146 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 02:00:59.103341   71146 out.go:177] * Done! kubectl is now configured to use "embed-certs-940222" cluster and "default" namespace by default
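Once "Done!" is printed the kubeconfig context carries the profile name, so a quick sanity check from the host is just (context name assumed to equal the profile, which is how minikube writes it):

    kubectl --context embed-certs-940222 get nodes                     # context name assumed = profile name
    kubectl --context embed-certs-940222 -n kube-system get pods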
	I0717 02:00:56.432853   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:58.433589   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:00.932978   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:02.933289   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:05.433003   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:07.433470   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:09.433795   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:11.933112   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:14.433274   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:16.932102   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:18.932904   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:20.933023   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:23.433153   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:24.926132   71603 pod_ready.go:81] duration metric: took 4m0.000155151s for pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace to be "Ready" ...
	E0717 02:01:24.926165   71603 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 02:01:24.926185   71603 pod_ready.go:38] duration metric: took 4m39.916322674s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 02:01:24.926214   71603 kubeadm.go:597] duration metric: took 5m27.432375382s to restartPrimaryControlPlane
	W0717 02:01:24.926303   71603 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
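The reset path is triggered by the 4m0s wait above: metrics-server-78fcd8795b-g9x96 never reached Ready. A first-pass look at why a pod stays NotReady, using the pod name already printed in this run (the label selector assumes the usual k8s-app=metrics-server label from the addon manifests):

    kubectl -n kube-system get pods -l k8s-app=metrics-server          # label assumed from the standard addon manifests
    kubectl -n kube-system describe pod metrics-server-78fcd8795b-g9x96
    kubectl -n kube-system logs deploy/metrics-server --all-containers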
	I0717 02:01:24.926339   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 02:01:51.790820   71603 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.86445583s)
	I0717 02:01:51.790901   71603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:01:51.812968   71603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 02:01:51.835689   71603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 02:01:51.848832   71603 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 02:01:51.848859   71603 kubeadm.go:157] found existing configuration files:
	
	I0717 02:01:51.848911   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 02:01:51.876554   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 02:01:51.876620   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 02:01:51.891580   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 02:01:51.901279   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 02:01:51.901328   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 02:01:51.910994   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 02:01:51.920959   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 02:01:51.921020   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 02:01:51.931039   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 02:01:51.940496   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 02:01:51.940549   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
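The four grep/rm pairs above are a stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm init. Condensed, the same logic is roughly:

    for f in admin kubelet controller-manager scheduler; do
      c=/etc/kubernetes/$f.conf
      sudo grep -q 'https://control-plane.minikube.internal:8443' "$c" || sudo rm -f "$c"
    done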
	I0717 02:01:51.950455   71603 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 02:01:51.999712   71603 kubeadm.go:310] W0717 02:01:51.966911    3034 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 02:01:52.000573   71603 kubeadm.go:310] W0717 02:01:51.967749    3034 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 02:01:52.132406   71603 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 02:02:01.065590   71603 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0717 02:02:01.065670   71603 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 02:02:01.065761   71603 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 02:02:01.065909   71603 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 02:02:01.066049   71603 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0717 02:02:01.066124   71603 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 02:02:01.067867   71603 out.go:204]   - Generating certificates and keys ...
	I0717 02:02:01.067966   71603 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 02:02:01.068043   71603 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 02:02:01.068139   71603 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 02:02:01.068210   71603 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 02:02:01.068310   71603 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 02:02:01.068391   71603 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 02:02:01.068471   71603 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 02:02:01.068523   71603 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 02:02:01.068585   71603 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 02:02:01.068650   71603 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 02:02:01.068683   71603 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 02:02:01.068752   71603 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 02:02:01.068822   71603 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 02:02:01.068906   71603 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 02:02:01.068970   71603 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 02:02:01.069057   71603 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 02:02:01.069157   71603 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 02:02:01.069271   71603 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 02:02:01.069369   71603 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 02:02:01.070772   71603 out.go:204]   - Booting up control plane ...
	I0717 02:02:01.070883   71603 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 02:02:01.070981   71603 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 02:02:01.071088   71603 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 02:02:01.071206   71603 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 02:02:01.071311   71603 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 02:02:01.071365   71603 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 02:02:01.071497   71603 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 02:02:01.071557   71603 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 02:02:01.071608   71603 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.044041ms
	I0717 02:02:01.071663   71603 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 02:02:01.071725   71603 kubeadm.go:310] [api-check] The API server is healthy after 5.501034024s
	I0717 02:02:01.071823   71603 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 02:02:01.071926   71603 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 02:02:01.071975   71603 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 02:02:01.072168   71603 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-391501 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 02:02:01.072238   71603 kubeadm.go:310] [bootstrap-token] Using token: jhnlja.0tmcz1ce1lkie6op
	I0717 02:02:01.073965   71603 out.go:204]   - Configuring RBAC rules ...
	I0717 02:02:01.074091   71603 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 02:02:01.074223   71603 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 02:02:01.074390   71603 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 02:02:01.074597   71603 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 02:02:01.074766   71603 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 02:02:01.074887   71603 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 02:02:01.075058   71603 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 02:02:01.075126   71603 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 02:02:01.075195   71603 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 02:02:01.075204   71603 kubeadm.go:310] 
	I0717 02:02:01.075255   71603 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 02:02:01.075262   71603 kubeadm.go:310] 
	I0717 02:02:01.075372   71603 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 02:02:01.075386   71603 kubeadm.go:310] 
	I0717 02:02:01.075419   71603 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 02:02:01.075498   71603 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 02:02:01.075582   71603 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 02:02:01.075604   71603 kubeadm.go:310] 
	I0717 02:02:01.075687   71603 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 02:02:01.075697   71603 kubeadm.go:310] 
	I0717 02:02:01.075759   71603 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 02:02:01.075771   71603 kubeadm.go:310] 
	I0717 02:02:01.075834   71603 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 02:02:01.075936   71603 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 02:02:01.076034   71603 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 02:02:01.076043   71603 kubeadm.go:310] 
	I0717 02:02:01.076142   71603 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 02:02:01.076248   71603 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 02:02:01.076256   71603 kubeadm.go:310] 
	I0717 02:02:01.076379   71603 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jhnlja.0tmcz1ce1lkie6op \
	I0717 02:02:01.076541   71603 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c106fa53fe39228066a4d6d0a3d1523262a277fcc4b4de2aef480ed92843f134 \
	I0717 02:02:01.076582   71603 kubeadm.go:310] 	--control-plane 
	I0717 02:02:01.076600   71603 kubeadm.go:310] 
	I0717 02:02:01.076708   71603 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 02:02:01.076719   71603 kubeadm.go:310] 
	I0717 02:02:01.076819   71603 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jhnlja.0tmcz1ce1lkie6op \
	I0717 02:02:01.076955   71603 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c106fa53fe39228066a4d6d0a3d1523262a277fcc4b4de2aef480ed92843f134 
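The printed join command carries a bootstrap token (jhnlja.0tmcz1ce1lkie6op) and the CA cert hash, both specific to this run. If the token has expired by the time another node joins, a fresh join command can be generated on the control plane with standard kubeadm (not a minikube-specific step):

    sudo kubeadm token create --print-join-command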
	I0717 02:02:01.076972   71603 cni.go:84] Creating CNI manager for ""
	I0717 02:02:01.076981   71603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 02:02:01.078801   71603 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 02:02:01.080151   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 02:02:01.093210   71603 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
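The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not echoed in the log; it is the bridge-plus-portmap conflist that CRI-O picks up from /etc/cni/net.d. A sketch of that format with generic values, not the file from this run:

    sudo cat /etc/cni/net.d/1-k8s.conflist   # inspect the file this step actually wrote
    # illustrative conflist shape only - not the 496-byte file above
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16",
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }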
	I0717 02:02:01.116656   71603 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 02:02:01.116712   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:01.116756   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-391501 minikube.k8s.io/updated_at=2024_07_17T02_02_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185 minikube.k8s.io/name=no-preload-391501 minikube.k8s.io/primary=true
	I0717 02:02:01.314407   71603 ops.go:34] apiserver oom_adj: -16
	I0717 02:02:01.314467   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:01.814693   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:02.315439   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:02.814676   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:03.314734   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:03.814702   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:04.315450   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:04.815112   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:05.315144   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:05.814712   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:05.921356   71603 kubeadm.go:1113] duration metric: took 4.80469441s to wait for elevateKubeSystemPrivileges
	I0717 02:02:05.921398   71603 kubeadm.go:394] duration metric: took 6m8.48278775s to StartCluster
	I0717 02:02:05.921420   71603 settings.go:142] acquiring lock: {Name:mk284e98550e98148329d8d8958a44beebf5635c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 02:02:05.921508   71603 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 02:02:05.923844   71603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 02:02:05.924156   71603 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 02:02:05.924254   71603 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 02:02:05.924328   71603 addons.go:69] Setting storage-provisioner=true in profile "no-preload-391501"
	I0717 02:02:05.924357   71603 addons.go:234] Setting addon storage-provisioner=true in "no-preload-391501"
	I0717 02:02:05.924355   71603 addons.go:69] Setting default-storageclass=true in profile "no-preload-391501"
	I0717 02:02:05.924364   71603 addons.go:69] Setting metrics-server=true in profile "no-preload-391501"
	I0717 02:02:05.924391   71603 config.go:182] Loaded profile config "no-preload-391501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 02:02:05.924398   71603 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-391501"
	I0717 02:02:05.924404   71603 addons.go:234] Setting addon metrics-server=true in "no-preload-391501"
	W0717 02:02:05.924414   71603 addons.go:243] addon metrics-server should already be in state true
	W0717 02:02:05.924368   71603 addons.go:243] addon storage-provisioner should already be in state true
	I0717 02:02:05.924447   71603 host.go:66] Checking if "no-preload-391501" exists ...
	I0717 02:02:05.924460   71603 host.go:66] Checking if "no-preload-391501" exists ...
	I0717 02:02:05.924801   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.924827   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.924834   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.924850   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.924874   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.924912   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.926034   71603 out.go:177] * Verifying Kubernetes components...
	I0717 02:02:05.927316   71603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 02:02:05.941502   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43181
	I0717 02:02:05.941716   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37257
	I0717 02:02:05.941969   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.942299   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.942492   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.942509   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.942873   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.942902   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.942933   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.943175   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 02:02:05.943250   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.943555   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43291
	I0717 02:02:05.943829   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.943862   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.943996   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.944648   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.944672   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.945037   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.945577   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.945613   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.947058   71603 addons.go:234] Setting addon default-storageclass=true in "no-preload-391501"
	W0717 02:02:05.947076   71603 addons.go:243] addon default-storageclass should already be in state true
	I0717 02:02:05.947103   71603 host.go:66] Checking if "no-preload-391501" exists ...
	I0717 02:02:05.947419   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.947447   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.960183   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44589
	I0717 02:02:05.960662   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.961220   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.961249   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.961532   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.961777   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 02:02:05.962531   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40785
	I0717 02:02:05.963063   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.964115   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 02:02:05.964120   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.964146   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.965195   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.965777   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42381
	I0717 02:02:05.965802   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.965845   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.966114   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.966615   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.966635   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.966706   71603 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 02:02:05.967037   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.967228   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 02:02:05.968069   71603 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 02:02:05.968101   71603 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 02:02:05.968121   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 02:02:05.969421   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 02:02:05.971055   71603 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 02:02:05.972019   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.972494   71603 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 02:02:05.972515   71603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 02:02:05.972533   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 02:02:05.972622   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 02:02:05.972646   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.973122   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 02:02:05.973289   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 02:02:05.973415   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 02:02:05.973638   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 02:02:05.975702   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.976091   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 02:02:05.976110   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.976376   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 02:02:05.976553   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 02:02:05.976717   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 02:02:05.976866   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 02:02:05.983061   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44967
	I0717 02:02:05.983397   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.983851   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.983867   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.984150   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.984319   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 02:02:05.985757   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 02:02:05.985973   71603 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 02:02:05.985985   71603 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 02:02:05.986000   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 02:02:05.989238   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.989627   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 02:02:05.989647   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.989890   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 02:02:05.990056   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 02:02:05.990212   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 02:02:05.990412   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 02:02:06.238449   71603 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 02:02:06.272217   71603 node_ready.go:35] waiting up to 6m0s for node "no-preload-391501" to be "Ready" ...
	I0717 02:02:06.281012   71603 node_ready.go:49] node "no-preload-391501" has status "Ready":"True"
	I0717 02:02:06.281031   71603 node_ready.go:38] duration metric: took 8.787329ms for node "no-preload-391501" to be "Ready" ...
	I0717 02:02:06.281040   71603 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 02:02:06.297250   71603 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace to be "Ready" ...
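Node and pod readiness here are plain condition polls. From the host the same checks look like this (context name assumed to match the profile, as minikube sets it; k8s-app=kube-dns is the selector listed in the pod_ready labels above):

    kubectl --context no-preload-391501 get nodes                      # context name assumed = profile name
    kubectl --context no-preload-391501 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m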
	I0717 02:02:06.386971   71603 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 02:02:06.386995   71603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 02:02:06.439822   71603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 02:02:06.460362   71603 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 02:02:06.460391   71603 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 02:02:06.468640   71603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 02:02:06.551454   71603 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 02:02:06.551482   71603 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 02:02:06.727518   71603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 02:02:07.338701   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.338778   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.338874   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.338900   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.339119   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.339217   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.339230   71603 main.go:141] libmachine: (no-preload-391501) DBG | Closing plugin on server side
	I0717 02:02:07.339273   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.339291   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.339301   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.339314   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.339240   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.339386   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.339575   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.339592   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.339648   71603 main.go:141] libmachine: (no-preload-391501) DBG | Closing plugin on server side
	I0717 02:02:07.339711   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.339736   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.357948   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.357966   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.358197   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.358212   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.694612   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.694690   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.695028   71603 main.go:141] libmachine: (no-preload-391501) DBG | Closing plugin on server side
	I0717 02:02:07.695109   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.695122   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.695148   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.695160   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.695404   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.695421   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.695432   71603 addons.go:475] Verifying addon metrics-server=true in "no-preload-391501"
	I0717 02:02:07.698298   71603 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
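A quick check that the three enabled addons actually landed, assuming the object names the stock addon manifests create (the storage-provisioner pod and metrics-server objects appear under those names elsewhere in this report):

    kubectl --context no-preload-391501 get storageclass               # default-storageclass
    kubectl --context no-preload-391501 -n kube-system get deploy metrics-server
    kubectl --context no-preload-391501 -n kube-system get pod storage-provisioner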
	I0717 02:02:08.622411   71929 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 02:02:08.622531   71929 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 02:02:08.624111   71929 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 02:02:08.624168   71929 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 02:02:08.624265   71929 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 02:02:08.624391   71929 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 02:02:08.624526   71929 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 02:02:08.624604   71929 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 02:02:08.626394   71929 out.go:204]   - Generating certificates and keys ...
	I0717 02:02:08.626478   71929 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 02:02:08.626574   71929 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 02:02:08.626657   71929 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 02:02:08.626735   71929 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 02:02:08.626830   71929 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 02:02:08.626909   71929 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 02:02:08.627001   71929 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 02:02:08.627095   71929 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 02:02:08.627203   71929 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 02:02:08.627325   71929 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 02:02:08.627392   71929 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 02:02:08.627469   71929 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 02:02:08.627573   71929 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 02:02:08.627663   71929 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 02:02:08.627753   71929 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 02:02:08.627836   71929 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 02:02:08.627997   71929 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 02:02:08.628107   71929 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 02:02:08.628179   71929 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 02:02:08.628272   71929 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 02:02:08.630262   71929 out.go:204]   - Booting up control plane ...
	I0717 02:02:08.630372   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 02:02:08.630477   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 02:02:08.630594   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 02:02:08.630729   71929 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 02:02:08.630960   71929 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 02:02:08.631020   71929 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 02:02:08.631099   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.631293   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.631394   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.631648   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.631748   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.631925   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.632050   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.632253   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.632327   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.632528   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.632546   71929 kubeadm.go:310] 
	I0717 02:02:08.632611   71929 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 02:02:08.632671   71929 kubeadm.go:310] 		timed out waiting for the condition
	I0717 02:02:08.632689   71929 kubeadm.go:310] 
	I0717 02:02:08.632729   71929 kubeadm.go:310] 	This error is likely caused by:
	I0717 02:02:08.632772   71929 kubeadm.go:310] 		- The kubelet is not running
	I0717 02:02:08.632902   71929 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 02:02:08.632914   71929 kubeadm.go:310] 
	I0717 02:02:08.633001   71929 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 02:02:08.633030   71929 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 02:02:08.633075   71929 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 02:02:08.633092   71929 kubeadm.go:310] 
	I0717 02:02:08.633204   71929 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 02:02:08.633281   71929 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 02:02:08.633306   71929 kubeadm.go:310] 
	I0717 02:02:08.633450   71929 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 02:02:08.633535   71929 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 02:02:08.633597   71929 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 02:02:08.633668   71929 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 02:02:08.633697   71929 kubeadm.go:310] 
	W0717 02:02:08.633780   71929 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0717 02:02:08.633821   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 02:02:09.101394   71929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:02:09.119918   71929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 02:02:09.130974   71929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 02:02:09.131002   71929 kubeadm.go:157] found existing configuration files:
	
	I0717 02:02:09.131046   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 02:02:09.142720   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 02:02:09.142790   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 02:02:09.154990   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 02:02:09.166317   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 02:02:09.166379   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 02:02:09.176756   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 02:02:09.186639   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 02:02:09.186697   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 02:02:09.196778   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 02:02:09.206420   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 02:02:09.206469   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 02:02:09.216325   71929 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 02:02:09.293311   71929 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 02:02:09.293457   71929 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 02:02:09.442386   71929 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 02:02:09.442594   71929 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 02:02:09.442736   71929 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 02:02:09.618387   71929 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 02:02:07.699645   71603 addons.go:510] duration metric: took 1.775390854s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 02:02:08.305410   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace has status "Ready":"False"
	I0717 02:02:09.620394   71929 out.go:204]   - Generating certificates and keys ...
	I0717 02:02:09.620496   71929 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 02:02:09.620593   71929 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 02:02:09.620691   71929 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 02:02:09.620791   71929 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 02:02:09.620909   71929 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 02:02:09.621004   71929 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 02:02:09.621117   71929 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 02:02:09.621364   71929 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 02:02:09.621778   71929 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 02:02:09.622072   71929 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 02:02:09.622135   71929 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 02:02:09.622225   71929 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 02:02:09.990964   71929 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 02:02:10.434990   71929 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 02:02:10.579785   71929 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 02:02:10.723319   71929 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 02:02:10.746923   71929 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 02:02:10.748370   71929 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 02:02:10.748460   71929 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 02:02:10.888855   71929 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 02:02:10.890727   71929 out.go:204]   - Booting up control plane ...
	I0717 02:02:10.890860   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 02:02:10.893530   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 02:02:10.894934   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 02:02:10.896825   71929 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 02:02:10.899127   71929 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 02:02:10.806868   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace has status "Ready":"False"
	I0717 02:02:12.804727   71603 pod_ready.go:92] pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:12.804754   71603 pod_ready.go:81] duration metric: took 6.507471417s for pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:12.804763   71603 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-tn5jv" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:12.812383   71603 pod_ready.go:92] pod "coredns-5cfdc65f69-tn5jv" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:12.812408   71603 pod_ready.go:81] duration metric: took 7.638012ms for pod "coredns-5cfdc65f69-tn5jv" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:12.812420   71603 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.320241   71603 pod_ready.go:92] pod "etcd-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:13.320263   71603 pod_ready.go:81] duration metric: took 507.836128ms for pod "etcd-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.320285   71603 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.326308   71603 pod_ready.go:92] pod "kube-apiserver-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:13.326332   71603 pod_ready.go:81] duration metric: took 6.041207ms for pod "kube-apiserver-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.326341   71603 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.331310   71603 pod_ready.go:92] pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:13.331338   71603 pod_ready.go:81] duration metric: took 4.988207ms for pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.331360   71603 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gl7th" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.602634   71603 pod_ready.go:92] pod "kube-proxy-gl7th" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:13.602677   71603 pod_ready.go:81] duration metric: took 271.310877ms for pod "kube-proxy-gl7th" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.602687   71603 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:14.002256   71603 pod_ready.go:92] pod "kube-scheduler-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:14.002282   71603 pod_ready.go:81] duration metric: took 399.588324ms for pod "kube-scheduler-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:14.002290   71603 pod_ready.go:38] duration metric: took 7.721240931s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 02:02:14.002306   71603 api_server.go:52] waiting for apiserver process to appear ...
	I0717 02:02:14.002355   71603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:02:14.016981   71603 api_server.go:72] duration metric: took 8.092789001s to wait for apiserver process to appear ...
	I0717 02:02:14.017007   71603 api_server.go:88] waiting for apiserver healthz status ...
	I0717 02:02:14.017026   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 02:02:14.022008   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 200:
	ok
	I0717 02:02:14.022992   71603 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 02:02:14.023010   71603 api_server.go:131] duration metric: took 5.997297ms to wait for apiserver health ...
	I0717 02:02:14.023016   71603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 02:02:14.204777   71603 system_pods.go:59] 9 kube-system pods found
	I0717 02:02:14.204806   71603 system_pods.go:61] "coredns-5cfdc65f69-5lstd" [71b74210-7395-4a48-8e1b-b49fb2faea43] Running
	I0717 02:02:14.204811   71603 system_pods.go:61] "coredns-5cfdc65f69-tn5jv" [482276d3-bfe2-4538-9dfe-a2a87a02182c] Running
	I0717 02:02:14.204816   71603 system_pods.go:61] "etcd-no-preload-391501" [c13d6752-3152-45e7-b2b9-a5275a4b42c5] Running
	I0717 02:02:14.204819   71603 system_pods.go:61] "kube-apiserver-no-preload-391501" [ba1d9920-dcaa-48d2-887b-f476d874d9ea] Running
	I0717 02:02:14.204823   71603 system_pods.go:61] "kube-controller-manager-no-preload-391501" [5e1e6aec-31b9-4b7c-a59b-f39a73b2e9a3] Running
	I0717 02:02:14.204826   71603 system_pods.go:61] "kube-proxy-gl7th" [320d9fae-f5b8-47bd-afc0-88e07e23157a] Running
	I0717 02:02:14.204829   71603 system_pods.go:61] "kube-scheduler-no-preload-391501" [a091b866-df88-4b9b-8893-bc6022704680] Running
	I0717 02:02:14.204836   71603 system_pods.go:61] "metrics-server-78fcd8795b-tnrht" [af70d47e-8e45-4e5d-bceb-e01a6c1851ff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 02:02:14.204839   71603 system_pods.go:61] "storage-provisioner" [742baa9b-d48e-4be9-8c33-64d42e1ff169] Running
	I0717 02:02:14.204847   71603 system_pods.go:74] duration metric: took 181.825073ms to wait for pod list to return data ...
	I0717 02:02:14.204854   71603 default_sa.go:34] waiting for default service account to be created ...
	I0717 02:02:14.402964   71603 default_sa.go:45] found service account: "default"
	I0717 02:02:14.402992   71603 default_sa.go:55] duration metric: took 198.131224ms for default service account to be created ...
	I0717 02:02:14.403005   71603 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 02:02:14.606371   71603 system_pods.go:86] 9 kube-system pods found
	I0717 02:02:14.606408   71603 system_pods.go:89] "coredns-5cfdc65f69-5lstd" [71b74210-7395-4a48-8e1b-b49fb2faea43] Running
	I0717 02:02:14.606418   71603 system_pods.go:89] "coredns-5cfdc65f69-tn5jv" [482276d3-bfe2-4538-9dfe-a2a87a02182c] Running
	I0717 02:02:14.606424   71603 system_pods.go:89] "etcd-no-preload-391501" [c13d6752-3152-45e7-b2b9-a5275a4b42c5] Running
	I0717 02:02:14.606430   71603 system_pods.go:89] "kube-apiserver-no-preload-391501" [ba1d9920-dcaa-48d2-887b-f476d874d9ea] Running
	I0717 02:02:14.606438   71603 system_pods.go:89] "kube-controller-manager-no-preload-391501" [5e1e6aec-31b9-4b7c-a59b-f39a73b2e9a3] Running
	I0717 02:02:14.606444   71603 system_pods.go:89] "kube-proxy-gl7th" [320d9fae-f5b8-47bd-afc0-88e07e23157a] Running
	I0717 02:02:14.606450   71603 system_pods.go:89] "kube-scheduler-no-preload-391501" [a091b866-df88-4b9b-8893-bc6022704680] Running
	I0717 02:02:14.606461   71603 system_pods.go:89] "metrics-server-78fcd8795b-tnrht" [af70d47e-8e45-4e5d-bceb-e01a6c1851ff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 02:02:14.606474   71603 system_pods.go:89] "storage-provisioner" [742baa9b-d48e-4be9-8c33-64d42e1ff169] Running
	I0717 02:02:14.606486   71603 system_pods.go:126] duration metric: took 203.473728ms to wait for k8s-apps to be running ...
	I0717 02:02:14.606497   71603 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 02:02:14.606568   71603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:02:14.622178   71603 system_svc.go:56] duration metric: took 15.671962ms WaitForService to wait for kubelet
	I0717 02:02:14.622211   71603 kubeadm.go:582] duration metric: took 8.698021688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 02:02:14.622234   71603 node_conditions.go:102] verifying NodePressure condition ...
	I0717 02:02:14.802282   71603 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 02:02:14.802309   71603 node_conditions.go:123] node cpu capacity is 2
	I0717 02:02:14.802319   71603 node_conditions.go:105] duration metric: took 180.080727ms to run NodePressure ...
	I0717 02:02:14.802330   71603 start.go:241] waiting for startup goroutines ...
	I0717 02:02:14.802337   71603 start.go:246] waiting for cluster config update ...
	I0717 02:02:14.802345   71603 start.go:255] writing updated cluster config ...
	I0717 02:02:14.802613   71603 ssh_runner.go:195] Run: rm -f paused
	I0717 02:02:14.848725   71603 start.go:600] kubectl: 1.30.2, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0717 02:02:14.850965   71603 out.go:177] * Done! kubectl is now configured to use "no-preload-391501" cluster and "default" namespace by default
	I0717 02:02:50.900829   71929 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 02:02:50.901350   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:50.901626   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:55.902558   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:55.902805   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:03:05.903753   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:03:05.904033   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:03:25.905383   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:03:25.905597   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:04:05.906576   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:04:05.906960   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:04:05.906992   71929 kubeadm.go:310] 
	I0717 02:04:05.907049   71929 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 02:04:05.907133   71929 kubeadm.go:310] 		timed out waiting for the condition
	I0717 02:04:05.907182   71929 kubeadm.go:310] 
	I0717 02:04:05.907252   71929 kubeadm.go:310] 	This error is likely caused by:
	I0717 02:04:05.907339   71929 kubeadm.go:310] 		- The kubelet is not running
	I0717 02:04:05.907516   71929 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 02:04:05.907529   71929 kubeadm.go:310] 
	I0717 02:04:05.907661   71929 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 02:04:05.907699   71929 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 02:04:05.907743   71929 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 02:04:05.907751   71929 kubeadm.go:310] 
	I0717 02:04:05.907907   71929 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 02:04:05.908043   71929 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 02:04:05.908053   71929 kubeadm.go:310] 
	I0717 02:04:05.908221   71929 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 02:04:05.908435   71929 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 02:04:05.908619   71929 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 02:04:05.908738   71929 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 02:04:05.908788   71929 kubeadm.go:310] 
	I0717 02:04:05.909079   71929 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 02:04:05.909286   71929 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 02:04:05.909452   71929 kubeadm.go:394] duration metric: took 7m58.01930975s to StartCluster
	I0717 02:04:05.909455   71929 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 02:04:05.909494   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:04:05.909552   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:04:05.952911   71929 cri.go:89] found id: ""
	I0717 02:04:05.952937   71929 logs.go:276] 0 containers: []
	W0717 02:04:05.952949   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 02:04:05.952957   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:04:05.953026   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:04:05.988490   71929 cri.go:89] found id: ""
	I0717 02:04:05.988518   71929 logs.go:276] 0 containers: []
	W0717 02:04:05.988529   71929 logs.go:278] No container was found matching "etcd"
	I0717 02:04:05.988537   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:04:05.988593   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:04:06.025228   71929 cri.go:89] found id: ""
	I0717 02:04:06.025259   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.025269   71929 logs.go:278] No container was found matching "coredns"
	I0717 02:04:06.025277   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:04:06.025342   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:04:06.060563   71929 cri.go:89] found id: ""
	I0717 02:04:06.060589   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.060599   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 02:04:06.060604   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:04:06.060660   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:04:06.095051   71929 cri.go:89] found id: ""
	I0717 02:04:06.095079   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.095091   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 02:04:06.095099   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:04:06.095150   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:04:06.131892   71929 cri.go:89] found id: ""
	I0717 02:04:06.131914   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.131921   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 02:04:06.131927   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:04:06.131973   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:04:06.168893   71929 cri.go:89] found id: ""
	I0717 02:04:06.168919   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.168930   71929 logs.go:278] No container was found matching "kindnet"
	I0717 02:04:06.168937   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 02:04:06.168995   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 02:04:06.206635   71929 cri.go:89] found id: ""
	I0717 02:04:06.206658   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.206668   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 02:04:06.206679   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:04:06.206693   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 02:04:06.308601   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:04:06.308624   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:04:06.308637   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:04:06.422081   71929 logs.go:123] Gathering logs for container status ...
	I0717 02:04:06.422116   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:04:06.467466   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 02:04:06.467496   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:04:06.521420   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 02:04:06.521457   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0717 02:04:06.535167   71929 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 02:04:06.535211   71929 out.go:239] * 
	W0717 02:04:06.535263   71929 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 02:04:06.535292   71929 out.go:239] * 
	W0717 02:04:06.536098   71929 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 02:04:06.539314   71929 out.go:177] 
	W0717 02:04:06.540504   71929 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 02:04:06.540557   71929 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 02:04:06.540579   71929 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 02:04:06.541888   71929 out.go:177] 
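	A minimal follow-up sketch based on the suggestions printed above (the profile name, CRI-O socket, and Kubernetes version are taken from this log; the exact start flags used by the test are an assumption, so treat this as illustrative rather than the test's own invocation):
	
	  # inspect the kubelet on the failing node
	  minikube ssh -p old-k8s-version-901761 "sudo systemctl status kubelet"
	  minikube ssh -p old-k8s-version-901761 "sudo journalctl -xeu kubelet | tail -n 100"
	  # list any control-plane containers the runtime did start
	  minikube ssh -p old-k8s-version-901761 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	  # retry with the cgroup-driver override suggested by minikube
	  minikube start -p old-k8s-version-901761 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd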
	
	
	==> CRI-O <==
	Jul 17 02:13:11 old-k8s-version-901761 crio[644]: time="2024-07-17 02:13:11.761393809Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182391761364562,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eb49d948-1763-4fa3-9390-711df9089a82 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:13:11 old-k8s-version-901761 crio[644]: time="2024-07-17 02:13:11.762011594Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=38e621e0-86a4-4a9e-a585-d5dc7d2f5860 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:13:11 old-k8s-version-901761 crio[644]: time="2024-07-17 02:13:11.762067736Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=38e621e0-86a4-4a9e-a585-d5dc7d2f5860 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:13:11 old-k8s-version-901761 crio[644]: time="2024-07-17 02:13:11.762106672Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=38e621e0-86a4-4a9e-a585-d5dc7d2f5860 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:13:11 old-k8s-version-901761 crio[644]: time="2024-07-17 02:13:11.793939143Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=71cb494d-5d1c-42ae-8d11-6c612f4293ed name=/runtime.v1.RuntimeService/Version
	Jul 17 02:13:11 old-k8s-version-901761 crio[644]: time="2024-07-17 02:13:11.794023125Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=71cb494d-5d1c-42ae-8d11-6c612f4293ed name=/runtime.v1.RuntimeService/Version
	Jul 17 02:13:11 old-k8s-version-901761 crio[644]: time="2024-07-17 02:13:11.795218909Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=300c1984-a7e7-44ec-8edc-ec58459a2088 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:13:11 old-k8s-version-901761 crio[644]: time="2024-07-17 02:13:11.795644945Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182391795626328,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=300c1984-a7e7-44ec-8edc-ec58459a2088 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:13:11 old-k8s-version-901761 crio[644]: time="2024-07-17 02:13:11.796143387Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=517e0ec3-7bb8-4719-bd46-20c78fccec92 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:13:11 old-k8s-version-901761 crio[644]: time="2024-07-17 02:13:11.796189173Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=517e0ec3-7bb8-4719-bd46-20c78fccec92 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:13:11 old-k8s-version-901761 crio[644]: time="2024-07-17 02:13:11.796225166Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=517e0ec3-7bb8-4719-bd46-20c78fccec92 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:13:11 old-k8s-version-901761 crio[644]: time="2024-07-17 02:13:11.826800212Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6c815bae-80a9-4230-9041-cd70e4d5a9d6 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:13:11 old-k8s-version-901761 crio[644]: time="2024-07-17 02:13:11.826902125Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6c815bae-80a9-4230-9041-cd70e4d5a9d6 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:13:11 old-k8s-version-901761 crio[644]: time="2024-07-17 02:13:11.827690439Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=654cbba0-4639-4892-a574-be76b03798c6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:13:11 old-k8s-version-901761 crio[644]: time="2024-07-17 02:13:11.828069703Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182391828050770,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=654cbba0-4639-4892-a574-be76b03798c6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:13:11 old-k8s-version-901761 crio[644]: time="2024-07-17 02:13:11.828754653Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0ecf7c0-8a6a-4909-af6a-23808bb57cab name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:13:11 old-k8s-version-901761 crio[644]: time="2024-07-17 02:13:11.828825692Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0ecf7c0-8a6a-4909-af6a-23808bb57cab name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:13:11 old-k8s-version-901761 crio[644]: time="2024-07-17 02:13:11.828864471Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c0ecf7c0-8a6a-4909-af6a-23808bb57cab name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:13:11 old-k8s-version-901761 crio[644]: time="2024-07-17 02:13:11.861018609Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=93d1a619-ede6-495f-8e66-7cbe51c73c8b name=/runtime.v1.RuntimeService/Version
	Jul 17 02:13:11 old-k8s-version-901761 crio[644]: time="2024-07-17 02:13:11.861110751Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=93d1a619-ede6-495f-8e66-7cbe51c73c8b name=/runtime.v1.RuntimeService/Version
	Jul 17 02:13:11 old-k8s-version-901761 crio[644]: time="2024-07-17 02:13:11.862235932Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a5771f31-ea72-45b8-9ef4-7f62cd76ea53 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:13:11 old-k8s-version-901761 crio[644]: time="2024-07-17 02:13:11.862724481Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182391862700268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5771f31-ea72-45b8-9ef4-7f62cd76ea53 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:13:11 old-k8s-version-901761 crio[644]: time="2024-07-17 02:13:11.863220836Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c6a85571-5fab-40cc-b7a9-17424c390247 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:13:11 old-k8s-version-901761 crio[644]: time="2024-07-17 02:13:11.863341713Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c6a85571-5fab-40cc-b7a9-17424c390247 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:13:11 old-k8s-version-901761 crio[644]: time="2024-07-17 02:13:11.863377342Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c6a85571-5fab-40cc-b7a9-17424c390247 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul17 01:55] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.060095] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.053379] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.696762] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.451232] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.600989] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.496189] systemd-fstab-generator[565]: Ignoring "noauto" option for root device
	[  +0.063928] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058024] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.198095] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.159661] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.276256] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[Jul17 01:56] systemd-fstab-generator[830]: Ignoring "noauto" option for root device
	[  +0.060021] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.876226] systemd-fstab-generator[954]: Ignoring "noauto" option for root device
	[ +12.568707] kauditd_printk_skb: 46 callbacks suppressed
	[Jul17 02:00] systemd-fstab-generator[5018]: Ignoring "noauto" option for root device
	[Jul17 02:02] systemd-fstab-generator[5295]: Ignoring "noauto" option for root device
	[  +0.065589] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 02:13:12 up 17 min,  0 users,  load average: 0.05, 0.03, 0.00
	Linux old-k8s-version-901761 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 17 02:13:06 old-k8s-version-901761 kubelet[6479]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jul 17 02:13:06 old-k8s-version-901761 kubelet[6479]: goroutine 151 [chan receive]:
	Jul 17 02:13:06 old-k8s-version-901761 kubelet[6479]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc000bd0120)
	Jul 17 02:13:06 old-k8s-version-901761 kubelet[6479]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Jul 17 02:13:06 old-k8s-version-901761 kubelet[6479]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Jul 17 02:13:06 old-k8s-version-901761 kubelet[6479]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Jul 17 02:13:06 old-k8s-version-901761 kubelet[6479]: goroutine 152 [select]:
	Jul 17 02:13:06 old-k8s-version-901761 kubelet[6479]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b63ef0, 0x4f0ac20, 0xc000a938b0, 0x1, 0xc0001000c0)
	Jul 17 02:13:06 old-k8s-version-901761 kubelet[6479]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Jul 17 02:13:06 old-k8s-version-901761 kubelet[6479]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000d9180, 0xc0001000c0)
	Jul 17 02:13:06 old-k8s-version-901761 kubelet[6479]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jul 17 02:13:06 old-k8s-version-901761 kubelet[6479]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jul 17 02:13:06 old-k8s-version-901761 kubelet[6479]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jul 17 02:13:06 old-k8s-version-901761 kubelet[6479]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000bc83d0, 0xc000bb2c20)
	Jul 17 02:13:06 old-k8s-version-901761 kubelet[6479]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jul 17 02:13:06 old-k8s-version-901761 kubelet[6479]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jul 17 02:13:06 old-k8s-version-901761 kubelet[6479]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jul 17 02:13:07 old-k8s-version-901761 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Jul 17 02:13:07 old-k8s-version-901761 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 17 02:13:07 old-k8s-version-901761 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 17 02:13:07 old-k8s-version-901761 kubelet[6487]: I0717 02:13:07.576833    6487 server.go:416] Version: v1.20.0
	Jul 17 02:13:07 old-k8s-version-901761 kubelet[6487]: I0717 02:13:07.577140    6487 server.go:837] Client rotation is on, will bootstrap in background
	Jul 17 02:13:07 old-k8s-version-901761 kubelet[6487]: I0717 02:13:07.579215    6487 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 17 02:13:07 old-k8s-version-901761 kubelet[6487]: W0717 02:13:07.580106    6487 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 17 02:13:07 old-k8s-version-901761 kubelet[6487]: I0717 02:13:07.580487    6487 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-901761 -n old-k8s-version-901761
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-901761 -n old-k8s-version-901761: exit status 2 (231.611624ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-901761" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.38s)
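
The kubeadm "wait-control-plane" timeout captured in the log above is a kubelet start failure, and the log's own suggestions are to inspect the kubelet unit, check CRI-O for crashed control-plane containers, and retry with an explicit cgroup driver. A minimal troubleshooting sketch along those lines, assuming shell access to the old-k8s-version-901761 VM; these commands were not run as part of this test, and the final retry would also need the remaining start flags from the original invocation:

	# Is the kubelet unit running, and why did it last exit?
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 50
	# List control-plane containers CRI-O started, then read the logs of a failing one
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	# Retry the profile with the kubelet cgroup driver pinned to systemd, as the log suggests
	out/minikube-linux-amd64 start -p old-k8s-version-901761 --extra-config=kubelet.cgroup-driver=systemd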

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (507.52s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-738184 -n default-k8s-diff-port-738184
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-17 02:17:25.626607415 +0000 UTC m=+6944.674488534
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-738184 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-738184 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.621µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-738184 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-738184 -n default-k8s-diff-port-738184
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-738184 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-738184 logs -n 25: (1.17143123s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p                                                     | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC |                     |
	|         | default-k8s-diff-port-738184                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-391501             | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC | 17 Jul 24 01:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-391501                                   | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-940222                 | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-901761        | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-940222                                  | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC | 17 Jul 24 02:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-738184       | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-391501                  | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:59 UTC |
	|         | default-k8s-diff-port-738184                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p no-preload-391501 --memory=2200                     | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 02:02 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-901761                              | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-901761             | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-901761                              | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-901761                              | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 02:15 UTC | 17 Jul 24 02:15 UTC |
	| start   | -p newest-cni-386113 --memory=2200 --alsologtostderr   | newest-cni-386113            | jenkins | v1.33.1 | 17 Jul 24 02:15 UTC | 17 Jul 24 02:16 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p no-preload-391501                                   | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 02:15 UTC | 17 Jul 24 02:15 UTC |
	| addons  | enable metrics-server -p newest-cni-386113             | newest-cni-386113            | jenkins | v1.33.1 | 17 Jul 24 02:16 UTC | 17 Jul 24 02:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-386113                                   | newest-cni-386113            | jenkins | v1.33.1 | 17 Jul 24 02:16 UTC | 17 Jul 24 02:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-386113                  | newest-cni-386113            | jenkins | v1.33.1 | 17 Jul 24 02:16 UTC | 17 Jul 24 02:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-386113 --memory=2200 --alsologtostderr   | newest-cni-386113            | jenkins | v1.33.1 | 17 Jul 24 02:16 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p embed-certs-940222                                  | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 02:16 UTC | 17 Jul 24 02:16 UTC |
	| image   | newest-cni-386113 image list                           | newest-cni-386113            | jenkins | v1.33.1 | 17 Jul 24 02:16 UTC | 17 Jul 24 02:17 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-386113                                   | newest-cni-386113            | jenkins | v1.33.1 | 17 Jul 24 02:17 UTC |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-386113                                   | newest-cni-386113            | jenkins | v1.33.1 | 17 Jul 24 02:17 UTC | 17 Jul 24 02:17 UTC |
	| delete  | -p newest-cni-386113                                   | newest-cni-386113            | jenkins | v1.33.1 | 17 Jul 24 02:17 UTC | 17 Jul 24 02:17 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 02:16:33
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 02:16:33.664357   78861 out.go:291] Setting OutFile to fd 1 ...
	I0717 02:16:33.664449   78861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 02:16:33.664456   78861 out.go:304] Setting ErrFile to fd 2...
	I0717 02:16:33.664460   78861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 02:16:33.664627   78861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 02:16:33.665135   78861 out.go:298] Setting JSON to false
	I0717 02:16:33.665986   78861 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7136,"bootTime":1721175458,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 02:16:33.666038   78861 start.go:139] virtualization: kvm guest
	I0717 02:16:33.668138   78861 out.go:177] * [newest-cni-386113] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 02:16:33.669586   78861 notify.go:220] Checking for updates...
	I0717 02:16:33.669608   78861 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 02:16:33.671025   78861 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 02:16:33.672727   78861 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 02:16:33.674166   78861 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 02:16:33.675622   78861 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 02:16:33.677043   78861 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 02:16:33.678758   78861 config.go:182] Loaded profile config "newest-cni-386113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 02:16:33.679232   78861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:16:33.679275   78861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:16:33.694847   78861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33475
	I0717 02:16:33.695238   78861 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:16:33.695845   78861 main.go:141] libmachine: Using API Version  1
	I0717 02:16:33.695867   78861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:16:33.696161   78861 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:16:33.696356   78861 main.go:141] libmachine: (newest-cni-386113) Calling .DriverName
	I0717 02:16:33.696601   78861 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 02:16:33.696880   78861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:16:33.696919   78861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:16:33.711749   78861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36289
	I0717 02:16:33.712173   78861 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:16:33.712717   78861 main.go:141] libmachine: Using API Version  1
	I0717 02:16:33.712735   78861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:16:33.713205   78861 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:16:33.713446   78861 main.go:141] libmachine: (newest-cni-386113) Calling .DriverName
	I0717 02:16:33.749065   78861 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 02:16:33.750444   78861 start.go:297] selected driver: kvm2
	I0717 02:16:33.750456   78861 start.go:901] validating driver "kvm2" against &{Name:newest-cni-386113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0-beta.0 ClusterName:newest-cni-386113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.112 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system
_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 02:16:33.750577   78861 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 02:16:33.751254   78861 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 02:16:33.751314   78861 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19264-3908/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 02:16:33.766259   78861 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 02:16:33.766639   78861 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0717 02:16:33.766666   78861 cni.go:84] Creating CNI manager for ""
	I0717 02:16:33.766673   78861 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 02:16:33.766710   78861 start.go:340] cluster config:
	{Name:newest-cni-386113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-386113 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.112 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAd
dress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 02:16:33.766806   78861 iso.go:125] acquiring lock: {Name:mk1e382ea32906eee794c9b90164432d2562b456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 02:16:33.768749   78861 out.go:177] * Starting "newest-cni-386113" primary control-plane node in "newest-cni-386113" cluster
	I0717 02:16:33.769983   78861 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 02:16:33.770010   78861 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0717 02:16:33.770017   78861 cache.go:56] Caching tarball of preloaded images
	I0717 02:16:33.770097   78861 preload.go:172] Found /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 02:16:33.770111   78861 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0717 02:16:33.770204   78861 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/newest-cni-386113/config.json ...
	I0717 02:16:33.770367   78861 start.go:360] acquireMachinesLock for newest-cni-386113: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 02:16:33.770407   78861 start.go:364] duration metric: took 22.027µs to acquireMachinesLock for "newest-cni-386113"
	I0717 02:16:33.770425   78861 start.go:96] Skipping create...Using existing machine configuration
	I0717 02:16:33.770433   78861 fix.go:54] fixHost starting: 
	I0717 02:16:33.770726   78861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:16:33.770771   78861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:16:33.787241   78861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34273
	I0717 02:16:33.787726   78861 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:16:33.788321   78861 main.go:141] libmachine: Using API Version  1
	I0717 02:16:33.788341   78861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:16:33.788689   78861 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:16:33.788891   78861 main.go:141] libmachine: (newest-cni-386113) Calling .DriverName
	I0717 02:16:33.789067   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetState
	I0717 02:16:33.790614   78861 fix.go:112] recreateIfNeeded on newest-cni-386113: state=Stopped err=<nil>
	I0717 02:16:33.790649   78861 main.go:141] libmachine: (newest-cni-386113) Calling .DriverName
	W0717 02:16:33.790810   78861 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 02:16:33.793055   78861 out.go:177] * Restarting existing kvm2 VM for "newest-cni-386113" ...
	I0717 02:16:33.794666   78861 main.go:141] libmachine: (newest-cni-386113) Calling .Start
	I0717 02:16:33.794840   78861 main.go:141] libmachine: (newest-cni-386113) Ensuring networks are active...
	I0717 02:16:33.795550   78861 main.go:141] libmachine: (newest-cni-386113) Ensuring network default is active
	I0717 02:16:33.795910   78861 main.go:141] libmachine: (newest-cni-386113) Ensuring network mk-newest-cni-386113 is active
	I0717 02:16:33.796307   78861 main.go:141] libmachine: (newest-cni-386113) Getting domain xml...
	I0717 02:16:33.796893   78861 main.go:141] libmachine: (newest-cni-386113) Creating domain...
	I0717 02:16:35.003495   78861 main.go:141] libmachine: (newest-cni-386113) Waiting to get IP...
	I0717 02:16:35.004405   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:35.004811   78861 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:16:35.004888   78861 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:16:35.004821   78895 retry.go:31] will retry after 246.296142ms: waiting for machine to come up
	I0717 02:16:35.252230   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:35.252787   78861 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:16:35.252828   78861 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:16:35.252747   78895 retry.go:31] will retry after 319.046324ms: waiting for machine to come up
	I0717 02:16:35.573136   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:35.573533   78861 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:16:35.573556   78861 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:16:35.573490   78895 retry.go:31] will retry after 352.340084ms: waiting for machine to come up
	I0717 02:16:35.926908   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:35.927427   78861 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:16:35.927456   78861 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:16:35.927383   78895 retry.go:31] will retry after 420.053145ms: waiting for machine to come up
	I0717 02:16:36.349018   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:36.349474   78861 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:16:36.349505   78861 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:16:36.349418   78895 retry.go:31] will retry after 474.535661ms: waiting for machine to come up
	I0717 02:16:36.825920   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:36.826521   78861 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:16:36.826544   78861 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:16:36.826463   78895 retry.go:31] will retry after 862.224729ms: waiting for machine to come up
	I0717 02:16:37.690326   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:37.690972   78861 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:16:37.690998   78861 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:16:37.690810   78895 retry.go:31] will retry after 1.119857631s: waiting for machine to come up
	I0717 02:16:38.812589   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:38.814233   78861 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:16:38.814264   78861 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:16:38.814190   78895 retry.go:31] will retry after 1.132154413s: waiting for machine to come up
	I0717 02:16:39.947906   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:39.948356   78861 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:16:39.948382   78861 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:16:39.948317   78895 retry.go:31] will retry after 1.85893584s: waiting for machine to come up
	I0717 02:16:41.809508   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:41.810006   78861 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:16:41.810034   78861 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:16:41.809950   78895 retry.go:31] will retry after 1.472485012s: waiting for machine to come up
	I0717 02:16:43.284693   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:43.285226   78861 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:16:43.285254   78861 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:16:43.285185   78895 retry.go:31] will retry after 1.846125187s: waiting for machine to come up
	I0717 02:16:45.133096   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:45.133545   78861 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:16:45.133574   78861 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:16:45.133489   78895 retry.go:31] will retry after 2.958242893s: waiting for machine to come up
	I0717 02:16:48.092988   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:48.093437   78861 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:16:48.093477   78861 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:16:48.093392   78895 retry.go:31] will retry after 4.488434068s: waiting for machine to come up
	I0717 02:16:52.583095   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:52.583530   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has current primary IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:52.583548   78861 main.go:141] libmachine: (newest-cni-386113) Found IP for machine: 192.168.50.112
	I0717 02:16:52.583562   78861 main.go:141] libmachine: (newest-cni-386113) Reserving static IP address...
	I0717 02:16:52.584040   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "newest-cni-386113", mac: "52:54:00:b3:8c:c1", ip: "192.168.50.112"} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:52.584059   78861 main.go:141] libmachine: (newest-cni-386113) Reserved static IP address: 192.168.50.112
	I0717 02:16:52.584071   78861 main.go:141] libmachine: (newest-cni-386113) DBG | skip adding static IP to network mk-newest-cni-386113 - found existing host DHCP lease matching {name: "newest-cni-386113", mac: "52:54:00:b3:8c:c1", ip: "192.168.50.112"}
	I0717 02:16:52.584081   78861 main.go:141] libmachine: (newest-cni-386113) DBG | Getting to WaitForSSH function...
	I0717 02:16:52.584090   78861 main.go:141] libmachine: (newest-cni-386113) Waiting for SSH to be available...
	I0717 02:16:52.586246   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:52.586535   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:52.586586   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:52.586669   78861 main.go:141] libmachine: (newest-cni-386113) DBG | Using SSH client type: external
	I0717 02:16:52.586695   78861 main.go:141] libmachine: (newest-cni-386113) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/newest-cni-386113/id_rsa (-rw-------)
	I0717 02:16:52.586728   78861 main.go:141] libmachine: (newest-cni-386113) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/newest-cni-386113/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 02:16:52.586740   78861 main.go:141] libmachine: (newest-cni-386113) DBG | About to run SSH command:
	I0717 02:16:52.586748   78861 main.go:141] libmachine: (newest-cni-386113) DBG | exit 0
	I0717 02:16:52.714731   78861 main.go:141] libmachine: (newest-cni-386113) DBG | SSH cmd err, output: <nil>: 
	I0717 02:16:52.715163   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetConfigRaw
	I0717 02:16:52.715809   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetIP
	I0717 02:16:52.718723   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:52.719123   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:52.719157   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:52.719415   78861 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/newest-cni-386113/config.json ...
	I0717 02:16:52.719618   78861 machine.go:94] provisionDockerMachine start ...
	I0717 02:16:52.719636   78861 main.go:141] libmachine: (newest-cni-386113) Calling .DriverName
	I0717 02:16:52.719850   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHHostname
	I0717 02:16:52.722376   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:52.722710   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:52.722737   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:52.722880   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHPort
	I0717 02:16:52.723032   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:52.723207   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:52.723312   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHUsername
	I0717 02:16:52.723509   78861 main.go:141] libmachine: Using SSH client type: native
	I0717 02:16:52.723798   78861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.112 22 <nil> <nil>}
	I0717 02:16:52.723816   78861 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 02:16:52.834849   78861 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 02:16:52.834874   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetMachineName
	I0717 02:16:52.835120   78861 buildroot.go:166] provisioning hostname "newest-cni-386113"
	I0717 02:16:52.835148   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetMachineName
	I0717 02:16:52.835338   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHHostname
	I0717 02:16:52.837964   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:52.838286   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:52.838320   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:52.838412   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHPort
	I0717 02:16:52.838602   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:52.838751   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:52.838915   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHUsername
	I0717 02:16:52.839087   78861 main.go:141] libmachine: Using SSH client type: native
	I0717 02:16:52.839321   78861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.112 22 <nil> <nil>}
	I0717 02:16:52.839340   78861 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-386113 && echo "newest-cni-386113" | sudo tee /etc/hostname
	I0717 02:16:52.965401   78861 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-386113
	
	I0717 02:16:52.965428   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHHostname
	I0717 02:16:52.968158   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:52.968461   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:52.968494   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:52.968636   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHPort
	I0717 02:16:52.968841   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:52.969057   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:52.969245   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHUsername
	I0717 02:16:52.969409   78861 main.go:141] libmachine: Using SSH client type: native
	I0717 02:16:52.969578   78861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.112 22 <nil> <nil>}
	I0717 02:16:52.969593   78861 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-386113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-386113/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-386113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 02:16:53.092635   78861 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 02:16:53.092661   78861 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 02:16:53.092716   78861 buildroot.go:174] setting up certificates
	I0717 02:16:53.092728   78861 provision.go:84] configureAuth start
	I0717 02:16:53.092738   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetMachineName
	I0717 02:16:53.093075   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetIP
	I0717 02:16:53.095810   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:53.096182   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:53.096207   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:53.096300   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHHostname
	I0717 02:16:53.098626   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:53.098980   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:53.099009   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:53.099217   78861 provision.go:143] copyHostCerts
	I0717 02:16:53.099310   78861 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 02:16:53.099326   78861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 02:16:53.099399   78861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 02:16:53.099528   78861 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 02:16:53.099540   78861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 02:16:53.099586   78861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 02:16:53.099699   78861 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 02:16:53.099708   78861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 02:16:53.099740   78861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 02:16:53.099822   78861 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.newest-cni-386113 san=[127.0.0.1 192.168.50.112 localhost minikube newest-cni-386113]
	I0717 02:16:53.185051   78861 provision.go:177] copyRemoteCerts
	I0717 02:16:53.185125   78861 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 02:16:53.185159   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHHostname
	I0717 02:16:53.188300   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:53.188693   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:53.188726   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:53.188840   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHPort
	I0717 02:16:53.189035   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:53.189244   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHUsername
	I0717 02:16:53.189409   78861 sshutil.go:53] new ssh client: &{IP:192.168.50.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/newest-cni-386113/id_rsa Username:docker}
	I0717 02:16:53.277553   78861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 02:16:53.303495   78861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 02:16:53.330799   78861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 02:16:53.355782   78861 provision.go:87] duration metric: took 263.042459ms to configureAuth
	I0717 02:16:53.355810   78861 buildroot.go:189] setting minikube options for container-runtime
	I0717 02:16:53.356082   78861 config.go:182] Loaded profile config "newest-cni-386113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 02:16:53.356163   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHHostname
	I0717 02:16:53.358608   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:53.358958   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:53.358987   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:53.359134   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHPort
	I0717 02:16:53.359315   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:53.359486   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:53.359618   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHUsername
	I0717 02:16:53.359777   78861 main.go:141] libmachine: Using SSH client type: native
	I0717 02:16:53.359926   78861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.112 22 <nil> <nil>}
	I0717 02:16:53.359940   78861 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 02:16:53.540166   78861 main.go:141] libmachine: SSH cmd err, output: Process exited with status 1: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0717 02:16:53.540199   78861 buildroot.go:191] Error setting container-runtime options during provisioning ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	I0717 02:16:53.540211   78861 machine.go:97] duration metric: took 820.581159ms to provisionDockerMachine
	I0717 02:16:53.540238   78861 fix.go:56] duration metric: took 19.769804127s for fixHost
	I0717 02:16:53.540246   78861 start.go:83] releasing machines lock for "newest-cni-386113", held for 19.769827497s
	W0717 02:16:53.540271   78861 start.go:714] error starting host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	W0717 02:16:53.540400   78861 out.go:239] ! StartHost failed, but will try again: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0717 02:16:53.540418   78861 start.go:729] Will try again in 5 seconds ...
	I0717 02:16:58.544971   78861 start.go:360] acquireMachinesLock for newest-cni-386113: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 02:16:58.545077   78861 start.go:364] duration metric: took 51.633µs to acquireMachinesLock for "newest-cni-386113"
	I0717 02:16:58.545096   78861 start.go:96] Skipping create...Using existing machine configuration
	I0717 02:16:58.545104   78861 fix.go:54] fixHost starting: 
	I0717 02:16:58.545398   78861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:16:58.545429   78861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:16:58.560097   78861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33837
	I0717 02:16:58.560605   78861 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:16:58.561105   78861 main.go:141] libmachine: Using API Version  1
	I0717 02:16:58.561121   78861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:16:58.561427   78861 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:16:58.561629   78861 main.go:141] libmachine: (newest-cni-386113) Calling .DriverName
	I0717 02:16:58.561783   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetState
	I0717 02:16:58.563502   78861 fix.go:112] recreateIfNeeded on newest-cni-386113: state=Running err=<nil>
	W0717 02:16:58.563520   78861 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 02:16:58.565319   78861 out.go:177] * Updating the running kvm2 "newest-cni-386113" VM ...
	I0717 02:16:58.566697   78861 machine.go:94] provisionDockerMachine start ...
	I0717 02:16:58.566720   78861 main.go:141] libmachine: (newest-cni-386113) Calling .DriverName
	I0717 02:16:58.566902   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHHostname
	I0717 02:16:58.569308   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:58.569691   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:58.569718   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:58.569817   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHPort
	I0717 02:16:58.569965   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:58.570117   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:58.570227   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHUsername
	I0717 02:16:58.570400   78861 main.go:141] libmachine: Using SSH client type: native
	I0717 02:16:58.570567   78861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.112 22 <nil> <nil>}
	I0717 02:16:58.570581   78861 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 02:16:58.687141   78861 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-386113
	
	I0717 02:16:58.687172   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetMachineName
	I0717 02:16:58.687397   78861 buildroot.go:166] provisioning hostname "newest-cni-386113"
	I0717 02:16:58.687421   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetMachineName
	I0717 02:16:58.687600   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHHostname
	I0717 02:16:58.689806   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:58.690088   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:58.690114   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:58.690287   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHPort
	I0717 02:16:58.690454   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:58.690586   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:58.690722   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHUsername
	I0717 02:16:58.690888   78861 main.go:141] libmachine: Using SSH client type: native
	I0717 02:16:58.691060   78861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.112 22 <nil> <nil>}
	I0717 02:16:58.691076   78861 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-386113 && echo "newest-cni-386113" | sudo tee /etc/hostname
	I0717 02:16:58.817937   78861 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-386113
	
	I0717 02:16:58.817988   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHHostname
	I0717 02:16:58.820660   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:58.820992   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:58.821015   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:58.821204   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHPort
	I0717 02:16:58.821406   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:58.821555   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:58.821666   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHUsername
	I0717 02:16:58.821804   78861 main.go:141] libmachine: Using SSH client type: native
	I0717 02:16:58.821971   78861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.112 22 <nil> <nil>}
	I0717 02:16:58.821986   78861 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-386113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-386113/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-386113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 02:16:58.939320   78861 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 02:16:58.939348   78861 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 02:16:58.939369   78861 buildroot.go:174] setting up certificates
	I0717 02:16:58.939380   78861 provision.go:84] configureAuth start
	I0717 02:16:58.939391   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetMachineName
	I0717 02:16:58.939618   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetIP
	I0717 02:16:58.942050   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:58.942346   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:58.942373   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:58.942503   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHHostname
	I0717 02:16:58.944829   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:58.945147   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:58.945173   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:58.945296   78861 provision.go:143] copyHostCerts
	I0717 02:16:58.945352   78861 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 02:16:58.945361   78861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 02:16:58.945413   78861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 02:16:58.945491   78861 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 02:16:58.945498   78861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 02:16:58.945517   78861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 02:16:58.945563   78861 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 02:16:58.945569   78861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 02:16:58.945586   78861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 02:16:58.945628   78861 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.newest-cni-386113 san=[127.0.0.1 192.168.50.112 localhost minikube newest-cni-386113]
	I0717 02:16:59.295441   78861 provision.go:177] copyRemoteCerts
	I0717 02:16:59.295500   78861 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 02:16:59.295538   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHHostname
	I0717 02:16:59.298262   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:59.298616   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:59.298650   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:59.298814   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHPort
	I0717 02:16:59.299041   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:59.299242   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHUsername
	I0717 02:16:59.299388   78861 sshutil.go:53] new ssh client: &{IP:192.168.50.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/newest-cni-386113/id_rsa Username:docker}
	I0717 02:16:59.385173   78861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 02:16:59.409457   78861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 02:16:59.432649   78861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 02:16:59.456653   78861 provision.go:87] duration metric: took 517.259084ms to configureAuth
	I0717 02:16:59.456687   78861 buildroot.go:189] setting minikube options for container-runtime
	I0717 02:16:59.456873   78861 config.go:182] Loaded profile config "newest-cni-386113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 02:16:59.456951   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHHostname
	I0717 02:16:59.459938   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:59.460310   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:59.460340   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:59.460545   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHPort
	I0717 02:16:59.460743   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:59.460975   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:59.461135   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHUsername
	I0717 02:16:59.461322   78861 main.go:141] libmachine: Using SSH client type: native
	I0717 02:16:59.461487   78861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.112 22 <nil> <nil>}
	I0717 02:16:59.461501   78861 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 02:16:59.645202   78861 main.go:141] libmachine: SSH cmd err, output: Process exited with status 1: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0717 02:16:59.645233   78861 buildroot.go:191] Error setting container-runtime options during provisioning ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	I0717 02:16:59.645243   78861 machine.go:97] duration metric: took 1.07853223s to provisionDockerMachine
	I0717 02:16:59.645273   78861 fix.go:56] duration metric: took 1.100163436s for fixHost
	I0717 02:16:59.645279   78861 start.go:83] releasing machines lock for "newest-cni-386113", held for 1.100194017s
	W0717 02:16:59.645358   78861 out.go:239] * Failed to start kvm2 VM. Running "minikube delete -p newest-cni-386113" may fix it: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0717 02:16:59.647778   78861 out.go:177] 
	W0717 02:16:59.649249   78861 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	W0717 02:16:59.649265   78861 out.go:239] * 
	W0717 02:16:59.650065   78861 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 02:16:59.652033   78861 out.go:177] 
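The GUEST_PROVISION exit above stems from the same command that failed on both provisioning attempts: `sudo systemctl restart crio` returning "A dependency job for crio.service failed." A minimal follow-up sketch, assuming SSH access to the guest (for example `minikube ssh -p newest-cni-386113`); the steps below are generic systemd inspection commands, not output taken from this run:

	$ sudo journalctl -xe -u crio            # the hint systemd prints in the error above
	$ sudo systemctl status crio             # current state of the crio service itself
	$ sudo systemctl list-dependencies crio  # walk the dependency tree to find the failed unit
	$ cat /etc/sysconfig/crio.minikube       # the drop-in written by the provisioner just before the restart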
	
	
	==> CRI-O <==
	Jul 17 02:17:26 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:17:26.217038457Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182646217012407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=12a20947-6778-4e12-b2c1-3cf2d7b91a9f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:17:26 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:17:26.217629560Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83702b74-0be0-4301-af58-8691a8f6f856 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:17:26 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:17:26.217679751Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83702b74-0be0-4301-af58-8691a8f6f856 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:17:26 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:17:26.218028746Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77,PodSandboxId:0c52e1c863ab787323d368c446319f3da163e86b52560c7ff6fa52e5afb4a4d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721181362216896995,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36904ec-ef3f-4aee-9276-fe1285e10876,},Annotations:map[string]string{io.kubernetes.container.hash: d368cf8d,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0be2dc67deee2847d02f59a5746918c97701700b6b27134a02a269cac1586bbf,PodSandboxId:d06c9b928557d4f3a4ca039be6b890e245c00b01b020554341b42c8239092606,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721181344267252116,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 593e2c6d-7dfd-4341-8cd6-a6555c12c9bb,},Annotations:map[string]string{io.kubernetes.container.hash: d8259a70,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7,PodSandboxId:59bd5ed033be981dc6c17d90ad14d07ac1cc3e31305111865bf170d7fe9a8ca0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181339228693585,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9w26c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 530f4d52-5fdc-47c4-8919-44430bf71e05,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd2b4ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013,PodSandboxId:6136ee902a2ec034a8aa4e7d8a8de84dfd8d0b1a2028d50f0efa468da89169c9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181339152224261,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-js7sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe3951c5
-d98d-4221-b71c-fc4f548b31d8,},Annotations:map[string]string{io.kubernetes.container.hash: d1a00847,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a,PodSandboxId:e32a832bea1dbd830204601544d55df47444e723a004aff84dddf1a3c6d36bb2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:172118
1331378855399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c4n94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97eee4e8-4f36-412f-9064-57515ab6e932,},Annotations:map[string]string{io.kubernetes.container.hash: bfdc42ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299,PodSandboxId:0c52e1c863ab787323d368c446319f3da163e86b52560c7ff6fa52e5afb4a4d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721181331344560383,Labels:map[
string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36904ec-ef3f-4aee-9276-fe1285e10876,},Annotations:map[string]string{io.kubernetes.container.hash: d368cf8d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3,PodSandboxId:428c5ac8a796afe703f57aa8f82b79783581de24e6b7d887b059fe7a9f899b4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721181327691830390,Labels:map[string]s
tring{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39962f541570a252c45496cd3715709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9,PodSandboxId:81b79012396cb1c0be9793763c4d7e2ed7856b09af28b1973e3a50079138b7e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721181327626095989,Labels:map[st
ring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3649afd50bb96c296085b2238c924507,},Annotations:map[string]string{io.kubernetes.container.hash: 912f1836,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82,PodSandboxId:8f45848e9ebeec988c932147cd63dd1dd530ad5cb1b5124794a956075fac8995,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721181327660667355,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 932f794983dfeb2dd7ccb21ae9543905,},Annotations:map[string]string{io.kubernetes.container.hash: ac850e24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8,PodSandboxId:a4cd441196dd3b633680466f7e7129bc15786c47db97d1de90a11a73f0582b8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721181327637093645,Labels:map[string]string{io.kubernete
s.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc195369d4468cfffebee038dc12bf0e,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=83702b74-0be0-4301-af58-8691a8f6f856 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:17:26 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:17:26.253909041Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=476826a4-7c81-4188-96c3-8a3ee33077e7 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:17:26 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:17:26.253983447Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=476826a4-7c81-4188-96c3-8a3ee33077e7 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:17:26 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:17:26.255022752Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1b033a1d-bdd8-40af-90e2-1d9bbbe641a6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:17:26 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:17:26.255620982Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182646255592155,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b033a1d-bdd8-40af-90e2-1d9bbbe641a6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:17:26 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:17:26.256101159Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1dbe995a-4c6f-45c5-bf03-422be4cc00a8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:17:26 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:17:26.256151472Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1dbe995a-4c6f-45c5-bf03-422be4cc00a8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:17:26 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:17:26.256355225Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77,PodSandboxId:0c52e1c863ab787323d368c446319f3da163e86b52560c7ff6fa52e5afb4a4d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721181362216896995,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36904ec-ef3f-4aee-9276-fe1285e10876,},Annotations:map[string]string{io.kubernetes.container.hash: d368cf8d,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0be2dc67deee2847d02f59a5746918c97701700b6b27134a02a269cac1586bbf,PodSandboxId:d06c9b928557d4f3a4ca039be6b890e245c00b01b020554341b42c8239092606,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721181344267252116,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 593e2c6d-7dfd-4341-8cd6-a6555c12c9bb,},Annotations:map[string]string{io.kubernetes.container.hash: d8259a70,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7,PodSandboxId:59bd5ed033be981dc6c17d90ad14d07ac1cc3e31305111865bf170d7fe9a8ca0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181339228693585,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9w26c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 530f4d52-5fdc-47c4-8919-44430bf71e05,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd2b4ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013,PodSandboxId:6136ee902a2ec034a8aa4e7d8a8de84dfd8d0b1a2028d50f0efa468da89169c9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181339152224261,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-js7sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe3951c5
-d98d-4221-b71c-fc4f548b31d8,},Annotations:map[string]string{io.kubernetes.container.hash: d1a00847,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a,PodSandboxId:e32a832bea1dbd830204601544d55df47444e723a004aff84dddf1a3c6d36bb2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:172118
1331378855399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c4n94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97eee4e8-4f36-412f-9064-57515ab6e932,},Annotations:map[string]string{io.kubernetes.container.hash: bfdc42ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299,PodSandboxId:0c52e1c863ab787323d368c446319f3da163e86b52560c7ff6fa52e5afb4a4d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721181331344560383,Labels:map[
string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36904ec-ef3f-4aee-9276-fe1285e10876,},Annotations:map[string]string{io.kubernetes.container.hash: d368cf8d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3,PodSandboxId:428c5ac8a796afe703f57aa8f82b79783581de24e6b7d887b059fe7a9f899b4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721181327691830390,Labels:map[string]s
tring{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39962f541570a252c45496cd3715709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9,PodSandboxId:81b79012396cb1c0be9793763c4d7e2ed7856b09af28b1973e3a50079138b7e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721181327626095989,Labels:map[st
ring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3649afd50bb96c296085b2238c924507,},Annotations:map[string]string{io.kubernetes.container.hash: 912f1836,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82,PodSandboxId:8f45848e9ebeec988c932147cd63dd1dd530ad5cb1b5124794a956075fac8995,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721181327660667355,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 932f794983dfeb2dd7ccb21ae9543905,},Annotations:map[string]string{io.kubernetes.container.hash: ac850e24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8,PodSandboxId:a4cd441196dd3b633680466f7e7129bc15786c47db97d1de90a11a73f0582b8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721181327637093645,Labels:map[string]string{io.kubernete
s.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc195369d4468cfffebee038dc12bf0e,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1dbe995a-4c6f-45c5-bf03-422be4cc00a8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:17:26 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:17:26.301358840Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5d73cb06-45bd-4442-8a14-2b7f4b354dfa name=/runtime.v1.RuntimeService/Version
	Jul 17 02:17:26 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:17:26.301516328Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5d73cb06-45bd-4442-8a14-2b7f4b354dfa name=/runtime.v1.RuntimeService/Version
	Jul 17 02:17:26 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:17:26.302821089Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0f7c146f-8836-40da-a8d2-2b4fe9781583 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:17:26 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:17:26.303198869Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182646303178394,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0f7c146f-8836-40da-a8d2-2b4fe9781583 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:17:26 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:17:26.303761135Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=01beaf54-2b46-4847-a7c1-f1dd2e0028ce name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:17:26 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:17:26.303815153Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=01beaf54-2b46-4847-a7c1-f1dd2e0028ce name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:17:26 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:17:26.304019781Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77,PodSandboxId:0c52e1c863ab787323d368c446319f3da163e86b52560c7ff6fa52e5afb4a4d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721181362216896995,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36904ec-ef3f-4aee-9276-fe1285e10876,},Annotations:map[string]string{io.kubernetes.container.hash: d368cf8d,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0be2dc67deee2847d02f59a5746918c97701700b6b27134a02a269cac1586bbf,PodSandboxId:d06c9b928557d4f3a4ca039be6b890e245c00b01b020554341b42c8239092606,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721181344267252116,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 593e2c6d-7dfd-4341-8cd6-a6555c12c9bb,},Annotations:map[string]string{io.kubernetes.container.hash: d8259a70,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7,PodSandboxId:59bd5ed033be981dc6c17d90ad14d07ac1cc3e31305111865bf170d7fe9a8ca0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181339228693585,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9w26c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 530f4d52-5fdc-47c4-8919-44430bf71e05,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd2b4ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013,PodSandboxId:6136ee902a2ec034a8aa4e7d8a8de84dfd8d0b1a2028d50f0efa468da89169c9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181339152224261,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-js7sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe3951c5
-d98d-4221-b71c-fc4f548b31d8,},Annotations:map[string]string{io.kubernetes.container.hash: d1a00847,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a,PodSandboxId:e32a832bea1dbd830204601544d55df47444e723a004aff84dddf1a3c6d36bb2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:172118
1331378855399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c4n94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97eee4e8-4f36-412f-9064-57515ab6e932,},Annotations:map[string]string{io.kubernetes.container.hash: bfdc42ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299,PodSandboxId:0c52e1c863ab787323d368c446319f3da163e86b52560c7ff6fa52e5afb4a4d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721181331344560383,Labels:map[
string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36904ec-ef3f-4aee-9276-fe1285e10876,},Annotations:map[string]string{io.kubernetes.container.hash: d368cf8d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3,PodSandboxId:428c5ac8a796afe703f57aa8f82b79783581de24e6b7d887b059fe7a9f899b4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721181327691830390,Labels:map[string]s
tring{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39962f541570a252c45496cd3715709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9,PodSandboxId:81b79012396cb1c0be9793763c4d7e2ed7856b09af28b1973e3a50079138b7e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721181327626095989,Labels:map[st
ring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3649afd50bb96c296085b2238c924507,},Annotations:map[string]string{io.kubernetes.container.hash: 912f1836,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82,PodSandboxId:8f45848e9ebeec988c932147cd63dd1dd530ad5cb1b5124794a956075fac8995,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721181327660667355,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 932f794983dfeb2dd7ccb21ae9543905,},Annotations:map[string]string{io.kubernetes.container.hash: ac850e24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8,PodSandboxId:a4cd441196dd3b633680466f7e7129bc15786c47db97d1de90a11a73f0582b8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721181327637093645,Labels:map[string]string{io.kubernete
s.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc195369d4468cfffebee038dc12bf0e,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=01beaf54-2b46-4847-a7c1-f1dd2e0028ce name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:17:26 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:17:26.338703516Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=02c25367-3544-48e4-a02a-0d10cafc9beb name=/runtime.v1.RuntimeService/Version
	Jul 17 02:17:26 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:17:26.338777015Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=02c25367-3544-48e4-a02a-0d10cafc9beb name=/runtime.v1.RuntimeService/Version
	Jul 17 02:17:26 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:17:26.340016131Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=44e82f15-ff67-412a-8c93-0b8a31b8af32 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:17:26 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:17:26.340732650Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182646340705497,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=44e82f15-ff67-412a-8c93-0b8a31b8af32 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:17:26 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:17:26.341519719Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eeee61fb-75f6-4589-b5b1-4c23d0764abf name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:17:26 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:17:26.341577272Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eeee61fb-75f6-4589-b5b1-4c23d0764abf name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:17:26 default-k8s-diff-port-738184 crio[748]: time="2024-07-17 02:17:26.341919973Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77,PodSandboxId:0c52e1c863ab787323d368c446319f3da163e86b52560c7ff6fa52e5afb4a4d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721181362216896995,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36904ec-ef3f-4aee-9276-fe1285e10876,},Annotations:map[string]string{io.kubernetes.container.hash: d368cf8d,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0be2dc67deee2847d02f59a5746918c97701700b6b27134a02a269cac1586bbf,PodSandboxId:d06c9b928557d4f3a4ca039be6b890e245c00b01b020554341b42c8239092606,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721181344267252116,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 593e2c6d-7dfd-4341-8cd6-a6555c12c9bb,},Annotations:map[string]string{io.kubernetes.container.hash: d8259a70,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7,PodSandboxId:59bd5ed033be981dc6c17d90ad14d07ac1cc3e31305111865bf170d7fe9a8ca0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181339228693585,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9w26c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 530f4d52-5fdc-47c4-8919-44430bf71e05,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd2b4ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013,PodSandboxId:6136ee902a2ec034a8aa4e7d8a8de84dfd8d0b1a2028d50f0efa468da89169c9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181339152224261,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-js7sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe3951c5
-d98d-4221-b71c-fc4f548b31d8,},Annotations:map[string]string{io.kubernetes.container.hash: d1a00847,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a,PodSandboxId:e32a832bea1dbd830204601544d55df47444e723a004aff84dddf1a3c6d36bb2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:172118
1331378855399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c4n94,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97eee4e8-4f36-412f-9064-57515ab6e932,},Annotations:map[string]string{io.kubernetes.container.hash: bfdc42ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299,PodSandboxId:0c52e1c863ab787323d368c446319f3da163e86b52560c7ff6fa52e5afb4a4d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721181331344560383,Labels:map[
string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36904ec-ef3f-4aee-9276-fe1285e10876,},Annotations:map[string]string{io.kubernetes.container.hash: d368cf8d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3,PodSandboxId:428c5ac8a796afe703f57aa8f82b79783581de24e6b7d887b059fe7a9f899b4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721181327691830390,Labels:map[string]s
tring{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39962f541570a252c45496cd3715709a,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9,PodSandboxId:81b79012396cb1c0be9793763c4d7e2ed7856b09af28b1973e3a50079138b7e1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721181327626095989,Labels:map[st
ring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3649afd50bb96c296085b2238c924507,},Annotations:map[string]string{io.kubernetes.container.hash: 912f1836,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82,PodSandboxId:8f45848e9ebeec988c932147cd63dd1dd530ad5cb1b5124794a956075fac8995,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721181327660667355,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 932f794983dfeb2dd7ccb21ae9543905,},Annotations:map[string]string{io.kubernetes.container.hash: ac850e24,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8,PodSandboxId:a4cd441196dd3b633680466f7e7129bc15786c47db97d1de90a11a73f0582b8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721181327637093645,Labels:map[string]string{io.kubernete
s.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-738184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc195369d4468cfffebee038dc12bf0e,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eeee61fb-75f6-4589-b5b1-4c23d0764abf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e7c80efcec351       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Running             storage-provisioner       2                   0c52e1c863ab7       storage-provisioner
	0be2dc67deee2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   d06c9b928557d       busybox
	92644b17d028a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      21 minutes ago      Running             coredns                   1                   59bd5ed033be9       coredns-7db6d8ff4d-9w26c
	4d44ae996265f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      21 minutes ago      Running             coredns                   1                   6136ee902a2ec       coredns-7db6d8ff4d-js7sn
	6945ab02cbf2a       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      21 minutes ago      Running             kube-proxy                1                   e32a832bea1db       kube-proxy-c4n94
	abd3156233dd7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   0c52e1c863ab7       storage-provisioner
	e6b826ba73561       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      21 minutes ago      Running             kube-controller-manager   1                   428c5ac8a796a       kube-controller-manager-default-k8s-diff-port-738184
	3d43ec5825cbc       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      21 minutes ago      Running             kube-apiserver            1                   8f45848e9ebee       kube-apiserver-default-k8s-diff-port-738184
	1a749b1143a7a       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      21 minutes ago      Running             kube-scheduler            1                   a4cd441196dd3       kube-scheduler-default-k8s-diff-port-738184
	5430044adf294       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      21 minutes ago      Running             etcd                      1                   81b79012396cb       etcd-default-k8s-diff-port-738184
	
	
	==> coredns [4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:39032 - 9353 "HINFO IN 4281169462580780465.3513493968747018561. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.007734339s
	
	
	==> coredns [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:53035 - 43803 "HINFO IN 5403295143789589699.7859562178537526355. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009121686s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-738184
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-738184
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=default-k8s-diff-port-738184
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T01_48_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:48:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-738184
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 02:17:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 02:16:26 +0000   Wed, 17 Jul 2024 01:47:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 02:16:26 +0000   Wed, 17 Jul 2024 01:47:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 02:16:26 +0000   Wed, 17 Jul 2024 01:47:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 02:16:26 +0000   Wed, 17 Jul 2024 01:55:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.170
	  Hostname:    default-k8s-diff-port-738184
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 55e69da725794cb286fe7c1138b473a3
	  System UUID:                55e69da7-2579-4cb2-86fe-7c1138b473a3
	  Boot ID:                    2a8dc260-2c7c-4ff1-bdbd-266033bdf9b5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 coredns-7db6d8ff4d-9w26c                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 coredns-7db6d8ff4d-js7sn                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-738184                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-738184             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-738184    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-c4n94                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-738184             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-569cc877fc-gcjkt                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-738184 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-738184 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-738184 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-738184 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-738184 event: Registered Node default-k8s-diff-port-738184 in Controller
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m (x8 over 22m)  kubelet          Node default-k8s-diff-port-738184 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 22m)  kubelet          Node default-k8s-diff-port-738184 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 22m)  kubelet          Node default-k8s-diff-port-738184 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-738184 event: Registered Node default-k8s-diff-port-738184 in Controller
	
	
	==> dmesg <==
	[Jul17 01:55] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051153] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039974] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.514788] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.344055] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.571382] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.378452] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.061314] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063869] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.185618] systemd-fstab-generator[691]: Ignoring "noauto" option for root device
	[  +0.148120] systemd-fstab-generator[703]: Ignoring "noauto" option for root device
	[  +0.323272] systemd-fstab-generator[733]: Ignoring "noauto" option for root device
	[  +4.440004] systemd-fstab-generator[832]: Ignoring "noauto" option for root device
	[  +0.058020] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.021000] systemd-fstab-generator[954]: Ignoring "noauto" option for root device
	[  +4.580152] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.841397] systemd-fstab-generator[1586]: Ignoring "noauto" option for root device
	[  +2.886818] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.888100] kauditd_printk_skb: 55 callbacks suppressed
	
	
	==> etcd [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9] <==
	{"level":"warn","ts":"2024-07-17T01:56:26.716347Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"695.396771ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8305641285244617179 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-569cc877fc-gcjkt.17e2dd4ec9fc3e86\" mod_revision:574 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-gcjkt.17e2dd4ec9fc3e86\" value_size:738 lease:8305641285244616606 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-gcjkt.17e2dd4ec9fc3e86\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-17T01:56:26.716501Z","caller":"traceutil/trace.go:171","msg":"trace[2086914611] linearizableReadLoop","detail":"{readStateIndex:651; appliedIndex:650; }","duration":"1.00942398s","start":"2024-07-17T01:56:25.707065Z","end":"2024-07-17T01:56:26.716489Z","steps":["trace[2086914611] 'read index received'  (duration: 313.824381ms)","trace[2086914611] 'applied index is now lower than readState.Index'  (duration: 695.598513ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T01:56:26.716556Z","caller":"traceutil/trace.go:171","msg":"trace[1610276483] transaction","detail":"{read_only:false; response_revision:606; number_of_response:1; }","duration":"1.010031762s","start":"2024-07-17T01:56:25.706518Z","end":"2024-07-17T01:56:26.71655Z","steps":["trace[1610276483] 'process raft request'  (duration: 314.360949ms)","trace[1610276483] 'compare'  (duration: 695.214667ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T01:56:26.716602Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T01:56:25.706497Z","time spent":"1.01007453s","remote":"127.0.0.1:51456","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":833,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-569cc877fc-gcjkt.17e2dd4ec9fc3e86\" mod_revision:574 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-gcjkt.17e2dd4ec9fc3e86\" value_size:738 lease:8305641285244616606 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-gcjkt.17e2dd4ec9fc3e86\" > >"}
	{"level":"warn","ts":"2024-07-17T01:56:26.718005Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.010922243s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-738184\" ","response":"range_response_count:1 size:5802"}
	{"level":"info","ts":"2024-07-17T01:56:26.718109Z","caller":"traceutil/trace.go:171","msg":"trace[1211094792] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-738184; range_end:; response_count:1; response_revision:606; }","duration":"1.011060221s","start":"2024-07-17T01:56:25.70704Z","end":"2024-07-17T01:56:26.7181Z","steps":["trace[1211094792] 'agreement among raft nodes before linearized reading'  (duration: 1.009694735s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T01:56:26.718449Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"308.254554ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T01:56:26.71857Z","caller":"traceutil/trace.go:171","msg":"trace[1313638685] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:607; }","duration":"308.399448ms","start":"2024-07-17T01:56:26.410161Z","end":"2024-07-17T01:56:26.71856Z","steps":["trace[1313638685] 'agreement among raft nodes before linearized reading'  (duration: 308.174688ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T01:56:26.71866Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T01:56:26.410134Z","time spent":"308.516568ms","remote":"127.0.0.1:51396","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-07-17T01:56:26.718853Z","caller":"traceutil/trace.go:171","msg":"trace[707378877] transaction","detail":"{read_only:false; response_revision:607; number_of_response:1; }","duration":"1.007635692s","start":"2024-07-17T01:56:25.71121Z","end":"2024-07-17T01:56:26.718846Z","steps":["trace[707378877] 'process raft request'  (duration: 1.005585335s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T01:56:26.71894Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T01:56:25.711198Z","time spent":"1.007708404s","remote":"127.0.0.1:51546","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4278,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-569cc877fc-gcjkt\" mod_revision:594 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-569cc877fc-gcjkt\" value_size:4212 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-569cc877fc-gcjkt\" > >"}
	{"level":"warn","ts":"2024-07-17T01:56:26.72003Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T01:56:25.707028Z","time spent":"1.012988699s","remote":"127.0.0.1:51532","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":5824,"request content":"key:\"/registry/minions/default-k8s-diff-port-738184\" "}
	{"level":"info","ts":"2024-07-17T02:05:29.394435Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":815}
	{"level":"info","ts":"2024-07-17T02:05:29.404844Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":815,"took":"10.047148ms","hash":3591137703,"current-db-size-bytes":2297856,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2297856,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-07-17T02:05:29.404937Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3591137703,"revision":815,"compact-revision":-1}
	{"level":"info","ts":"2024-07-17T02:10:29.401766Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1057}
	{"level":"info","ts":"2024-07-17T02:10:29.406207Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1057,"took":"3.963032ms","hash":307683839,"current-db-size-bytes":2297856,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1216512,"current-db-size-in-use":"1.2 MB"}
	{"level":"info","ts":"2024-07-17T02:10:29.406283Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":307683839,"revision":1057,"compact-revision":815}
	{"level":"info","ts":"2024-07-17T02:15:29.410819Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1301}
	{"level":"info","ts":"2024-07-17T02:15:29.416411Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1301,"took":"4.385909ms","hash":1485998052,"current-db-size-bytes":2297856,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1200128,"current-db-size-in-use":"1.2 MB"}
	{"level":"info","ts":"2024-07-17T02:15:29.416591Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1485998052,"revision":1301,"compact-revision":1057}
	{"level":"warn","ts":"2024-07-17T02:16:07.35878Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.112793ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8305641285244623573 > lease_revoke:<id:734390be66d31a86>","response":"size:27"}
	{"level":"info","ts":"2024-07-17T02:16:07.359145Z","caller":"traceutil/trace.go:171","msg":"trace[292046950] linearizableReadLoop","detail":"{readStateIndex:1865; appliedIndex:1864; }","duration":"142.921999ms","start":"2024-07-17T02:16:07.216186Z","end":"2024-07-17T02:16:07.359108Z","steps":["trace[292046950] 'read index received'  (duration: 16.341516ms)","trace[292046950] 'applied index is now lower than readState.Index'  (duration: 126.579466ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T02:16:07.359307Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.085064ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:621"}
	{"level":"info","ts":"2024-07-17T02:16:07.359452Z","caller":"traceutil/trace.go:171","msg":"trace[1697031941] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1575; }","duration":"143.277159ms","start":"2024-07-17T02:16:07.216163Z","end":"2024-07-17T02:16:07.35944Z","steps":["trace[1697031941] 'agreement among raft nodes before linearized reading'  (duration: 143.070725ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:17:26 up 22 min,  0 users,  load average: 0.56, 0.33, 0.17
	Linux default-k8s-diff-port-738184 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82] <==
	I0717 02:11:31.830226       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:13:31.828971       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 02:13:31.829052       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 02:13:31.829064       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:13:31.831433       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 02:13:31.831594       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 02:13:31.831627       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:15:30.833602       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 02:15:30.833711       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0717 02:15:31.833877       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 02:15:31.834024       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 02:15:31.834053       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:15:31.834114       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 02:15:31.834189       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 02:15:31.835340       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:16:31.834194       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 02:16:31.834353       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 02:16:31.834500       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:16:31.836696       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 02:16:31.836799       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 02:16:31.836974       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3] <==
	I0717 02:12:11.000589       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="217.076µs"
	E0717 02:12:14.129102       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:12:14.612807       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:12:44.134236       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:12:44.620796       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:13:14.140990       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:13:14.629245       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:13:44.147126       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:13:44.636357       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:14:14.154167       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:14:14.644301       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:14:44.159906       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:14:44.651797       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:15:14.165587       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:15:14.660226       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:15:44.172886       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:15:44.667652       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:16:14.178086       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:16:14.675359       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:16:44.185484       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:16:44.682309       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 02:17:09.005683       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="536.214µs"
	E0717 02:17:14.190924       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:17:14.690140       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 02:17:19.997498       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="864.667µs"
	
	
	==> kube-proxy [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a] <==
	I0717 01:55:31.538476       1 server_linux.go:69] "Using iptables proxy"
	I0717 01:55:31.548069       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.170"]
	I0717 01:55:31.583079       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 01:55:31.583110       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 01:55:31.583124       1 server_linux.go:165] "Using iptables Proxier"
	I0717 01:55:31.586434       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 01:55:31.586707       1 server.go:872] "Version info" version="v1.30.2"
	I0717 01:55:31.586882       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:55:31.588488       1 config.go:192] "Starting service config controller"
	I0717 01:55:31.588550       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 01:55:31.588604       1 config.go:101] "Starting endpoint slice config controller"
	I0717 01:55:31.588621       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 01:55:31.589974       1 config.go:319] "Starting node config controller"
	I0717 01:55:31.590018       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 01:55:31.689480       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 01:55:31.689592       1 shared_informer.go:320] Caches are synced for service config
	I0717 01:55:31.690146       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8] <==
	I0717 01:55:28.549346       1 serving.go:380] Generated self-signed cert in-memory
	W0717 01:55:30.859934       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 01:55:30.860043       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 01:55:30.860057       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 01:55:30.860063       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 01:55:30.881177       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0717 01:55:30.881219       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:55:30.884126       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 01:55:30.884198       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 01:55:30.884807       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0717 01:55:30.885155       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 01:55:30.985541       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 02:15:02 default-k8s-diff-port-738184 kubelet[961]: E0717 02:15:02.983938     961 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gcjkt" podUID="1859140e-a901-43c2-8c04-b4f8eb63e774"
	Jul 17 02:15:14 default-k8s-diff-port-738184 kubelet[961]: E0717 02:15:14.984312     961 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gcjkt" podUID="1859140e-a901-43c2-8c04-b4f8eb63e774"
	Jul 17 02:15:25 default-k8s-diff-port-738184 kubelet[961]: E0717 02:15:25.984651     961 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gcjkt" podUID="1859140e-a901-43c2-8c04-b4f8eb63e774"
	Jul 17 02:15:27 default-k8s-diff-port-738184 kubelet[961]: E0717 02:15:27.003334     961 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:15:27 default-k8s-diff-port-738184 kubelet[961]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:15:27 default-k8s-diff-port-738184 kubelet[961]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:15:27 default-k8s-diff-port-738184 kubelet[961]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:15:27 default-k8s-diff-port-738184 kubelet[961]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:15:38 default-k8s-diff-port-738184 kubelet[961]: E0717 02:15:38.983523     961 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gcjkt" podUID="1859140e-a901-43c2-8c04-b4f8eb63e774"
	Jul 17 02:15:49 default-k8s-diff-port-738184 kubelet[961]: E0717 02:15:49.984068     961 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gcjkt" podUID="1859140e-a901-43c2-8c04-b4f8eb63e774"
	Jul 17 02:16:01 default-k8s-diff-port-738184 kubelet[961]: E0717 02:16:01.983164     961 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gcjkt" podUID="1859140e-a901-43c2-8c04-b4f8eb63e774"
	Jul 17 02:16:14 default-k8s-diff-port-738184 kubelet[961]: E0717 02:16:14.982978     961 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gcjkt" podUID="1859140e-a901-43c2-8c04-b4f8eb63e774"
	Jul 17 02:16:27 default-k8s-diff-port-738184 kubelet[961]: E0717 02:16:27.000623     961 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:16:27 default-k8s-diff-port-738184 kubelet[961]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:16:27 default-k8s-diff-port-738184 kubelet[961]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:16:27 default-k8s-diff-port-738184 kubelet[961]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:16:27 default-k8s-diff-port-738184 kubelet[961]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:16:27 default-k8s-diff-port-738184 kubelet[961]: E0717 02:16:27.983420     961 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gcjkt" podUID="1859140e-a901-43c2-8c04-b4f8eb63e774"
	Jul 17 02:16:39 default-k8s-diff-port-738184 kubelet[961]: E0717 02:16:39.983603     961 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gcjkt" podUID="1859140e-a901-43c2-8c04-b4f8eb63e774"
	Jul 17 02:16:53 default-k8s-diff-port-738184 kubelet[961]: E0717 02:16:53.997581     961 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 17 02:16:53 default-k8s-diff-port-738184 kubelet[961]: E0717 02:16:53.997663     961 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 17 02:16:53 default-k8s-diff-port-738184 kubelet[961]: E0717 02:16:53.997870     961 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-94z56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathEx
pr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,Stdin
Once:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-gcjkt_kube-system(1859140e-a901-43c2-8c04-b4f8eb63e774): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 17 02:16:53 default-k8s-diff-port-738184 kubelet[961]: E0717 02:16:53.997905     961 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-gcjkt" podUID="1859140e-a901-43c2-8c04-b4f8eb63e774"
	Jul 17 02:17:08 default-k8s-diff-port-738184 kubelet[961]: E0717 02:17:08.984079     961 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gcjkt" podUID="1859140e-a901-43c2-8c04-b4f8eb63e774"
	Jul 17 02:17:19 default-k8s-diff-port-738184 kubelet[961]: E0717 02:17:19.983092     961 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gcjkt" podUID="1859140e-a901-43c2-8c04-b4f8eb63e774"
	
	
	==> storage-provisioner [abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299] <==
	I0717 01:55:31.447581       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0717 01:56:01.452583       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77] <==
	I0717 01:56:02.369509       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 01:56:02.384867       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 01:56:02.385109       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 01:56:02.400873       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 01:56:02.401823       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8b79946a-8182-4a23-9abd-d389f8d21444", APIVersion:"v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-738184_59d650a1-827b-4c8b-a09e-040700e3c482 became leader
	I0717 01:56:02.402182       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-738184_59d650a1-827b-4c8b-a09e-040700e3c482!
	I0717 01:56:02.504549       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-738184_59d650a1-827b-4c8b-a09e-040700e3c482!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-738184 -n default-k8s-diff-port-738184
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-738184 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-gcjkt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-738184 describe pod metrics-server-569cc877fc-gcjkt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-738184 describe pod metrics-server-569cc877fc-gcjkt: exit status 1 (60.896182ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-gcjkt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-738184 describe pod metrics-server-569cc877fc-gcjkt: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (507.52s)
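
The apiserver and controller-manager errors above, together with the kubelet ImagePullBackOff messages, all trace back to the same cause: the metrics-server addon was enabled with its image redirected to fake.domain/registry.k8s.io/echoserver:1.4 (the --registries=MetricsServer=fake.domain override is visible in the Audit table further down), so the pod never starts and the aggregated v1beta1.metrics.k8s.io APIService keeps answering 503. A minimal diagnostic sketch along these lines could confirm it; it is not output captured from this run, and it assumes the default-k8s-diff-port-738184 context is still reachable and that the addon uses the stock k8s-app=metrics-server label:

	# Is the aggregated metrics APIService Available? (expected: False while the pod is in ImagePullBackOff)
	kubectl --context default-k8s-diff-port-738184 get apiservice v1beta1.metrics.k8s.io \
	  -o jsonpath='{.status.conditions[?(@.type=="Available")].status}{"\n"}'

	# Which image is the metrics-server pod actually trying to pull, and why is it failing?
	kubectl --context default-k8s-diff-port-738184 -n kube-system get pods -l k8s-app=metrics-server \
	  -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.spec.containers[*].image}{"\n"}{end}'
	kubectl --context default-k8s-diff-port-738184 -n kube-system describe pod -l k8s-app=metrics-server | grep -A10 'Events:'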

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (395.6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-940222 -n embed-certs-940222
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-17 02:16:37.335995961 +0000 UTC m=+6896.383877073
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-940222 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-940222 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.802µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-940222 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
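The assertion at start_stop_delete_test.go:297 expects the dashboard-metrics-scraper deployment in the kubernetes-dashboard namespace to reference registry.k8s.io/echoserver:1.4 (the MetricsScraper image override passed to addons enable dashboard, visible in the Audit table below), but the describe call above already failed with context deadline exceeded, so the deployment info is empty. A manual check might look like the sketch below; these commands are not part of the test run and assume the embed-certs-940222 context is reachable:

	# Did the dashboard addon pods ever get created, and which image does the scraper deployment use?
	kubectl --context embed-certs-940222 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context embed-certs-940222 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}{"\n"}'
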
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-940222 -n embed-certs-940222
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-940222 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-940222 logs -n 25: (1.195196752s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-255698 | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | disable-driver-mounts-255698                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:48 UTC |
	|         | default-k8s-diff-port-738184                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-940222            | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-940222                                  | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-738184  | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC | 17 Jul 24 01:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC |                     |
	|         | default-k8s-diff-port-738184                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-391501             | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC | 17 Jul 24 01:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-391501                                   | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-940222                 | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-901761        | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-940222                                  | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC | 17 Jul 24 02:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-738184       | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-391501                  | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:59 UTC |
	|         | default-k8s-diff-port-738184                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p no-preload-391501 --memory=2200                     | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 02:02 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-901761                              | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-901761             | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-901761                              | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-901761                              | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 02:15 UTC | 17 Jul 24 02:15 UTC |
	| start   | -p newest-cni-386113 --memory=2200 --alsologtostderr   | newest-cni-386113            | jenkins | v1.33.1 | 17 Jul 24 02:15 UTC | 17 Jul 24 02:16 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p no-preload-391501                                   | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 02:15 UTC | 17 Jul 24 02:15 UTC |
	| addons  | enable metrics-server -p newest-cni-386113             | newest-cni-386113            | jenkins | v1.33.1 | 17 Jul 24 02:16 UTC | 17 Jul 24 02:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-386113                                   | newest-cni-386113            | jenkins | v1.33.1 | 17 Jul 24 02:16 UTC | 17 Jul 24 02:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-386113                  | newest-cni-386113            | jenkins | v1.33.1 | 17 Jul 24 02:16 UTC | 17 Jul 24 02:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-386113 --memory=2200 --alsologtostderr   | newest-cni-386113            | jenkins | v1.33.1 | 17 Jul 24 02:16 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 02:16:33
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 02:16:33.664357   78861 out.go:291] Setting OutFile to fd 1 ...
	I0717 02:16:33.664449   78861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 02:16:33.664456   78861 out.go:304] Setting ErrFile to fd 2...
	I0717 02:16:33.664460   78861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 02:16:33.664627   78861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 02:16:33.665135   78861 out.go:298] Setting JSON to false
	I0717 02:16:33.665986   78861 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7136,"bootTime":1721175458,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 02:16:33.666038   78861 start.go:139] virtualization: kvm guest
	I0717 02:16:33.668138   78861 out.go:177] * [newest-cni-386113] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 02:16:33.669586   78861 notify.go:220] Checking for updates...
	I0717 02:16:33.669608   78861 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 02:16:33.671025   78861 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 02:16:33.672727   78861 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 02:16:33.674166   78861 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 02:16:33.675622   78861 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 02:16:33.677043   78861 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 02:16:33.678758   78861 config.go:182] Loaded profile config "newest-cni-386113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 02:16:33.679232   78861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:16:33.679275   78861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:16:33.694847   78861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33475
	I0717 02:16:33.695238   78861 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:16:33.695845   78861 main.go:141] libmachine: Using API Version  1
	I0717 02:16:33.695867   78861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:16:33.696161   78861 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:16:33.696356   78861 main.go:141] libmachine: (newest-cni-386113) Calling .DriverName
	I0717 02:16:33.696601   78861 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 02:16:33.696880   78861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:16:33.696919   78861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:16:33.711749   78861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36289
	I0717 02:16:33.712173   78861 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:16:33.712717   78861 main.go:141] libmachine: Using API Version  1
	I0717 02:16:33.712735   78861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:16:33.713205   78861 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:16:33.713446   78861 main.go:141] libmachine: (newest-cni-386113) Calling .DriverName
	I0717 02:16:33.749065   78861 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 02:16:33.750444   78861 start.go:297] selected driver: kvm2
	I0717 02:16:33.750456   78861 start.go:901] validating driver "kvm2" against &{Name:newest-cni-386113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0-beta.0 ClusterName:newest-cni-386113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.112 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system
_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 02:16:33.750577   78861 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 02:16:33.751254   78861 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 02:16:33.751314   78861 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19264-3908/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 02:16:33.766259   78861 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 02:16:33.766639   78861 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0717 02:16:33.766666   78861 cni.go:84] Creating CNI manager for ""
	I0717 02:16:33.766673   78861 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 02:16:33.766710   78861 start.go:340] cluster config:
	{Name:newest-cni-386113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-386113 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.112 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAd
dress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 02:16:33.766806   78861 iso.go:125] acquiring lock: {Name:mk1e382ea32906eee794c9b90164432d2562b456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 02:16:33.768749   78861 out.go:177] * Starting "newest-cni-386113" primary control-plane node in "newest-cni-386113" cluster
	I0717 02:16:33.769983   78861 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 02:16:33.770010   78861 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0717 02:16:33.770017   78861 cache.go:56] Caching tarball of preloaded images
	I0717 02:16:33.770097   78861 preload.go:172] Found /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 02:16:33.770111   78861 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0717 02:16:33.770204   78861 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/newest-cni-386113/config.json ...
	I0717 02:16:33.770367   78861 start.go:360] acquireMachinesLock for newest-cni-386113: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 02:16:33.770407   78861 start.go:364] duration metric: took 22.027µs to acquireMachinesLock for "newest-cni-386113"
	I0717 02:16:33.770425   78861 start.go:96] Skipping create...Using existing machine configuration
	I0717 02:16:33.770433   78861 fix.go:54] fixHost starting: 
	I0717 02:16:33.770726   78861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:16:33.770771   78861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:16:33.787241   78861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34273
	I0717 02:16:33.787726   78861 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:16:33.788321   78861 main.go:141] libmachine: Using API Version  1
	I0717 02:16:33.788341   78861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:16:33.788689   78861 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:16:33.788891   78861 main.go:141] libmachine: (newest-cni-386113) Calling .DriverName
	I0717 02:16:33.789067   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetState
	I0717 02:16:33.790614   78861 fix.go:112] recreateIfNeeded on newest-cni-386113: state=Stopped err=<nil>
	I0717 02:16:33.790649   78861 main.go:141] libmachine: (newest-cni-386113) Calling .DriverName
	W0717 02:16:33.790810   78861 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 02:16:33.793055   78861 out.go:177] * Restarting existing kvm2 VM for "newest-cni-386113" ...
	
	
	==> CRI-O <==
	Jul 17 02:16:37 embed-certs-940222 crio[720]: time="2024-07-17 02:16:37.971769828Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182597971742667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8e124841-330d-4835-a12b-a243c34c7ccc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:16:37 embed-certs-940222 crio[720]: time="2024-07-17 02:16:37.973125909Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3924dd03-8452-4c4d-bec9-87eabe3eef33 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:16:37 embed-certs-940222 crio[720]: time="2024-07-17 02:16:37.973199125Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3924dd03-8452-4c4d-bec9-87eabe3eef33 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:16:37 embed-certs-940222 crio[720]: time="2024-07-17 02:16:37.973408293Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465,PodSandboxId:230f003f3ea34cc1e41f0ed90dd443ced65b55f24d944222c891d7b7adde9c8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721181421515470942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35aab5a5-6e1b-4572-aabe-a73fb1632252,},Annotations:map[string]string{io.kubernetes.container.hash: 8cd526da,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e47f2fca5b2ae3ae1df486675a0c92025c8f5bbd363b4b542dfdd983c23ed1e6,PodSandboxId:0ff7965ff724480306e7cc70851e146f45e92bfc0b3947d114b46a351625aca4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721181401994677797,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 44f768a2-54fc-4549-a808-df47ce510fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 8a755044,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783,PodSandboxId:00ef9d9de4935f8acfc77b0ab351002c788e80e38e771bcff098b89701e7af25,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181398357407476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wcw97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dd50538-f54d-43f1-bd8a-b9d3131c13f7,},Annotations:map[string]string{io.kubernetes.container.hash: 498ae3bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de,PodSandboxId:82f16ee888ec1043cefb38b9d7dba8f6bebad894edbeff6633949f7347d576e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721181390656105332,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l58xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feae4e89-4900-4399-b
d06-7d179280667d,},Annotations:map[string]string{io.kubernetes.container.hash: 25e04d7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20,PodSandboxId:230f003f3ea34cc1e41f0ed90dd443ced65b55f24d944222c891d7b7adde9c8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721181390629751214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35aab5a5-6e1b-4572-aabe-a73fb1632
252,},Annotations:map[string]string{io.kubernetes.container.hash: 8cd526da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745,PodSandboxId:2be3a62518d5db650a31c3c74384f6ad931d2d976853b9d4105e8b75fae7e552,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721181387004809748,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e63e752d96a0d9c33d7fe914b821e640,},Annota
tions:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787,PodSandboxId:cef3e414381ec6445255c6921dd61d89d4d21591fcb63bc40b5d2d1cf0943fed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721181386994139574,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09041104cff8f86494c416db9d9c095a,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: e6fa5874,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060,PodSandboxId:ee225f609c7b0c28ffa1c5964b757e4195da42bac112e280589ba49de8834321,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721181386925376800,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae0f3175f741e0f48ddc242abc89638f,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509,PodSandboxId:3fdeb024796c574088a93f07258e8555fc75747b27851cf483e87e5c7798d920,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721181386917321866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b052781dfa44cef0464609720ccead54,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 62b42a8e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3924dd03-8452-4c4d-bec9-87eabe3eef33 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:16:38 embed-certs-940222 crio[720]: time="2024-07-17 02:16:38.023065463Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a2d7b931-d76d-4a0f-b9bc-ae9363700336 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:16:38 embed-certs-940222 crio[720]: time="2024-07-17 02:16:38.023265778Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a2d7b931-d76d-4a0f-b9bc-ae9363700336 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:16:38 embed-certs-940222 crio[720]: time="2024-07-17 02:16:38.026004666Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5e717fee-ad9a-4281-ba7b-228f4947d313 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:16:38 embed-certs-940222 crio[720]: time="2024-07-17 02:16:38.026474334Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182598026449110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5e717fee-ad9a-4281-ba7b-228f4947d313 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:16:38 embed-certs-940222 crio[720]: time="2024-07-17 02:16:38.027251001Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=80a4e2cf-0cb5-4767-9158-1950bd4f38b8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:16:38 embed-certs-940222 crio[720]: time="2024-07-17 02:16:38.027344164Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=80a4e2cf-0cb5-4767-9158-1950bd4f38b8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:16:38 embed-certs-940222 crio[720]: time="2024-07-17 02:16:38.027539692Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465,PodSandboxId:230f003f3ea34cc1e41f0ed90dd443ced65b55f24d944222c891d7b7adde9c8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721181421515470942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35aab5a5-6e1b-4572-aabe-a73fb1632252,},Annotations:map[string]string{io.kubernetes.container.hash: 8cd526da,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e47f2fca5b2ae3ae1df486675a0c92025c8f5bbd363b4b542dfdd983c23ed1e6,PodSandboxId:0ff7965ff724480306e7cc70851e146f45e92bfc0b3947d114b46a351625aca4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721181401994677797,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 44f768a2-54fc-4549-a808-df47ce510fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 8a755044,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783,PodSandboxId:00ef9d9de4935f8acfc77b0ab351002c788e80e38e771bcff098b89701e7af25,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181398357407476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wcw97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dd50538-f54d-43f1-bd8a-b9d3131c13f7,},Annotations:map[string]string{io.kubernetes.container.hash: 498ae3bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de,PodSandboxId:82f16ee888ec1043cefb38b9d7dba8f6bebad894edbeff6633949f7347d576e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721181390656105332,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l58xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feae4e89-4900-4399-b
d06-7d179280667d,},Annotations:map[string]string{io.kubernetes.container.hash: 25e04d7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20,PodSandboxId:230f003f3ea34cc1e41f0ed90dd443ced65b55f24d944222c891d7b7adde9c8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721181390629751214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35aab5a5-6e1b-4572-aabe-a73fb1632
252,},Annotations:map[string]string{io.kubernetes.container.hash: 8cd526da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745,PodSandboxId:2be3a62518d5db650a31c3c74384f6ad931d2d976853b9d4105e8b75fae7e552,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721181387004809748,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e63e752d96a0d9c33d7fe914b821e640,},Annota
tions:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787,PodSandboxId:cef3e414381ec6445255c6921dd61d89d4d21591fcb63bc40b5d2d1cf0943fed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721181386994139574,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09041104cff8f86494c416db9d9c095a,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: e6fa5874,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060,PodSandboxId:ee225f609c7b0c28ffa1c5964b757e4195da42bac112e280589ba49de8834321,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721181386925376800,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae0f3175f741e0f48ddc242abc89638f,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509,PodSandboxId:3fdeb024796c574088a93f07258e8555fc75747b27851cf483e87e5c7798d920,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721181386917321866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b052781dfa44cef0464609720ccead54,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 62b42a8e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=80a4e2cf-0cb5-4767-9158-1950bd4f38b8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:16:38 embed-certs-940222 crio[720]: time="2024-07-17 02:16:38.070257344Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e649ebd5-6aa9-4d94-872a-fcb7102c5785 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:16:38 embed-certs-940222 crio[720]: time="2024-07-17 02:16:38.070405763Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e649ebd5-6aa9-4d94-872a-fcb7102c5785 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:16:38 embed-certs-940222 crio[720]: time="2024-07-17 02:16:38.071692436Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c0b29406-535e-4d81-b7bc-f5bfa325460a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:16:38 embed-certs-940222 crio[720]: time="2024-07-17 02:16:38.072506136Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182598072468577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c0b29406-535e-4d81-b7bc-f5bfa325460a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:16:38 embed-certs-940222 crio[720]: time="2024-07-17 02:16:38.073237823Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=85040129-9aec-4a01-9abd-8a62b5a84b42 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:16:38 embed-certs-940222 crio[720]: time="2024-07-17 02:16:38.073289086Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=85040129-9aec-4a01-9abd-8a62b5a84b42 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:16:38 embed-certs-940222 crio[720]: time="2024-07-17 02:16:38.073524252Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465,PodSandboxId:230f003f3ea34cc1e41f0ed90dd443ced65b55f24d944222c891d7b7adde9c8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721181421515470942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35aab5a5-6e1b-4572-aabe-a73fb1632252,},Annotations:map[string]string{io.kubernetes.container.hash: 8cd526da,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e47f2fca5b2ae3ae1df486675a0c92025c8f5bbd363b4b542dfdd983c23ed1e6,PodSandboxId:0ff7965ff724480306e7cc70851e146f45e92bfc0b3947d114b46a351625aca4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721181401994677797,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 44f768a2-54fc-4549-a808-df47ce510fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 8a755044,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783,PodSandboxId:00ef9d9de4935f8acfc77b0ab351002c788e80e38e771bcff098b89701e7af25,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181398357407476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wcw97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dd50538-f54d-43f1-bd8a-b9d3131c13f7,},Annotations:map[string]string{io.kubernetes.container.hash: 498ae3bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de,PodSandboxId:82f16ee888ec1043cefb38b9d7dba8f6bebad894edbeff6633949f7347d576e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721181390656105332,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l58xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feae4e89-4900-4399-b
d06-7d179280667d,},Annotations:map[string]string{io.kubernetes.container.hash: 25e04d7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20,PodSandboxId:230f003f3ea34cc1e41f0ed90dd443ced65b55f24d944222c891d7b7adde9c8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721181390629751214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35aab5a5-6e1b-4572-aabe-a73fb1632
252,},Annotations:map[string]string{io.kubernetes.container.hash: 8cd526da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745,PodSandboxId:2be3a62518d5db650a31c3c74384f6ad931d2d976853b9d4105e8b75fae7e552,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721181387004809748,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e63e752d96a0d9c33d7fe914b821e640,},Annota
tions:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787,PodSandboxId:cef3e414381ec6445255c6921dd61d89d4d21591fcb63bc40b5d2d1cf0943fed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721181386994139574,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09041104cff8f86494c416db9d9c095a,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: e6fa5874,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060,PodSandboxId:ee225f609c7b0c28ffa1c5964b757e4195da42bac112e280589ba49de8834321,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721181386925376800,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae0f3175f741e0f48ddc242abc89638f,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509,PodSandboxId:3fdeb024796c574088a93f07258e8555fc75747b27851cf483e87e5c7798d920,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721181386917321866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b052781dfa44cef0464609720ccead54,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 62b42a8e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=85040129-9aec-4a01-9abd-8a62b5a84b42 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:16:38 embed-certs-940222 crio[720]: time="2024-07-17 02:16:38.114510175Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f7130e96-b051-4d20-8d68-47687f12a5c0 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:16:38 embed-certs-940222 crio[720]: time="2024-07-17 02:16:38.114581323Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f7130e96-b051-4d20-8d68-47687f12a5c0 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:16:38 embed-certs-940222 crio[720]: time="2024-07-17 02:16:38.116078881Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=41fd3231-ddfa-4ad2-a2b7-6bcb3000d9c3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:16:38 embed-certs-940222 crio[720]: time="2024-07-17 02:16:38.116512291Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182598116490107,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=41fd3231-ddfa-4ad2-a2b7-6bcb3000d9c3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:16:38 embed-certs-940222 crio[720]: time="2024-07-17 02:16:38.117127612Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3ced0cc-fe27-49b2-a362-e2f96d578932 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:16:38 embed-certs-940222 crio[720]: time="2024-07-17 02:16:38.117180241Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3ced0cc-fe27-49b2-a362-e2f96d578932 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:16:38 embed-certs-940222 crio[720]: time="2024-07-17 02:16:38.117388167Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465,PodSandboxId:230f003f3ea34cc1e41f0ed90dd443ced65b55f24d944222c891d7b7adde9c8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721181421515470942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35aab5a5-6e1b-4572-aabe-a73fb1632252,},Annotations:map[string]string{io.kubernetes.container.hash: 8cd526da,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e47f2fca5b2ae3ae1df486675a0c92025c8f5bbd363b4b542dfdd983c23ed1e6,PodSandboxId:0ff7965ff724480306e7cc70851e146f45e92bfc0b3947d114b46a351625aca4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721181401994677797,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 44f768a2-54fc-4549-a808-df47ce510fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 8a755044,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783,PodSandboxId:00ef9d9de4935f8acfc77b0ab351002c788e80e38e771bcff098b89701e7af25,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181398357407476,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wcw97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dd50538-f54d-43f1-bd8a-b9d3131c13f7,},Annotations:map[string]string{io.kubernetes.container.hash: 498ae3bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de,PodSandboxId:82f16ee888ec1043cefb38b9d7dba8f6bebad894edbeff6633949f7347d576e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721181390656105332,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l58xk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: feae4e89-4900-4399-b
d06-7d179280667d,},Annotations:map[string]string{io.kubernetes.container.hash: 25e04d7c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20,PodSandboxId:230f003f3ea34cc1e41f0ed90dd443ced65b55f24d944222c891d7b7adde9c8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721181390629751214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35aab5a5-6e1b-4572-aabe-a73fb1632
252,},Annotations:map[string]string{io.kubernetes.container.hash: 8cd526da,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745,PodSandboxId:2be3a62518d5db650a31c3c74384f6ad931d2d976853b9d4105e8b75fae7e552,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721181387004809748,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e63e752d96a0d9c33d7fe914b821e640,},Annota
tions:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787,PodSandboxId:cef3e414381ec6445255c6921dd61d89d4d21591fcb63bc40b5d2d1cf0943fed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721181386994139574,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09041104cff8f86494c416db9d9c095a,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: e6fa5874,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060,PodSandboxId:ee225f609c7b0c28ffa1c5964b757e4195da42bac112e280589ba49de8834321,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721181386925376800,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae0f3175f741e0f48ddc242abc89638f,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509,PodSandboxId:3fdeb024796c574088a93f07258e8555fc75747b27851cf483e87e5c7798d920,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721181386917321866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-940222,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b052781dfa44cef0464609720ccead54,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 62b42a8e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e3ced0cc-fe27-49b2-a362-e2f96d578932 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7fac56f23fdf5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   230f003f3ea34       storage-provisioner
	e47f2fca5b2ae       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   0ff7965ff7244       busybox
	110368a2f3e57       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago      Running             coredns                   1                   00ef9d9de4935       coredns-7db6d8ff4d-wcw97
	0012a63297ec6       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      20 minutes ago      Running             kube-proxy                1                   82f16ee888ec1       kube-proxy-l58xk
	51a6cb79762ec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   230f003f3ea34       storage-provisioner
	211063fd97af0       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      20 minutes ago      Running             kube-scheduler            1                   2be3a62518d5d       kube-scheduler-embed-certs-940222
	b1af0adb58a0b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      20 minutes ago      Running             etcd                      1                   cef3e414381ec       etcd-embed-certs-940222
	5e124648f9a37       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      20 minutes ago      Running             kube-controller-manager   1                   ee225f609c7b0       kube-controller-manager-embed-certs-940222
	ffa398702fb31       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      20 minutes ago      Running             kube-apiserver            1                   3fdeb024796c5       kube-apiserver-embed-certs-940222
	
	
	==> coredns [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50006 - 17386 "HINFO IN 3415240562246251088.4894184447526837990. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010682562s
	
	
	==> describe nodes <==
	Name:               embed-certs-940222
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-940222
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=embed-certs-940222
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T01_47_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 01:47:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-940222
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 02:16:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 02:12:18 +0000   Wed, 17 Jul 2024 01:47:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 02:12:18 +0000   Wed, 17 Jul 2024 01:47:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 02:12:18 +0000   Wed, 17 Jul 2024 01:47:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 02:12:18 +0000   Wed, 17 Jul 2024 01:56:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.225
	  Hostname:    embed-certs-940222
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a278df33fdef4860a3e7518e7f996e0f
	  System UUID:                a278df33-fdef-4860-a3e7-518e7f996e0f
	  Boot ID:                    87f69d3f-fb13-496f-b419-ce5b68d79a00
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7db6d8ff4d-wcw97                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-embed-certs-940222                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-940222             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-940222    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-l58xk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-embed-certs-940222             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-569cc877fc-rhp7b               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-940222 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-940222 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-940222 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node embed-certs-940222 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node embed-certs-940222 event: Registered Node embed-certs-940222 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node embed-certs-940222 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node embed-certs-940222 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node embed-certs-940222 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node embed-certs-940222 event: Registered Node embed-certs-940222 in Controller
	
	
	==> dmesg <==
	[Jul17 01:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.063983] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.054912] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.812591] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.502206] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.585517] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.398692] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.061606] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.078084] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.165398] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.149231] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.278520] systemd-fstab-generator[705]: Ignoring "noauto" option for root device
	[  +4.425711] systemd-fstab-generator[803]: Ignoring "noauto" option for root device
	[  +0.063933] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.142913] systemd-fstab-generator[926]: Ignoring "noauto" option for root device
	[  +4.562234] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.545851] systemd-fstab-generator[1547]: Ignoring "noauto" option for root device
	[  +4.196995] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.467916] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787] <==
	{"level":"info","ts":"2024-07-17T01:56:27.451925Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.225:2380"}
	{"level":"info","ts":"2024-07-17T01:56:27.452042Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.225:2380"}
	{"level":"info","ts":"2024-07-17T01:56:28.504499Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7978524bf3afee6b is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T01:56:28.504619Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7978524bf3afee6b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T01:56:28.504695Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7978524bf3afee6b received MsgPreVoteResp from 7978524bf3afee6b at term 2"}
	{"level":"info","ts":"2024-07-17T01:56:28.504744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7978524bf3afee6b became candidate at term 3"}
	{"level":"info","ts":"2024-07-17T01:56:28.504769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7978524bf3afee6b received MsgVoteResp from 7978524bf3afee6b at term 3"}
	{"level":"info","ts":"2024-07-17T01:56:28.504796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7978524bf3afee6b became leader at term 3"}
	{"level":"info","ts":"2024-07-17T01:56:28.504822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7978524bf3afee6b elected leader 7978524bf3afee6b at term 3"}
	{"level":"info","ts":"2024-07-17T01:56:28.506489Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7978524bf3afee6b","local-member-attributes":"{Name:embed-certs-940222 ClientURLs:[https://192.168.72.225:2379]}","request-path":"/0/members/7978524bf3afee6b/attributes","cluster-id":"471aba4d800e3f5d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T01:56:28.506724Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:56:28.509507Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T01:56:28.531332Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T01:56:28.531652Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T01:56:28.531686Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T01:56:28.533154Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.225:2379"}
	{"level":"info","ts":"2024-07-17T02:06:28.546275Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":798}
	{"level":"info","ts":"2024-07-17T02:06:28.556153Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":798,"took":"9.520869ms","hash":148558858,"current-db-size-bytes":2138112,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2138112,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-07-17T02:06:28.556224Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":148558858,"revision":798,"compact-revision":-1}
	{"level":"info","ts":"2024-07-17T02:11:28.553105Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1041}
	{"level":"info","ts":"2024-07-17T02:11:28.562322Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1041,"took":"8.887035ms","hash":2685213272,"current-db-size-bytes":2138112,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1171456,"current-db-size-in-use":"1.2 MB"}
	{"level":"info","ts":"2024-07-17T02:11:28.562379Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2685213272,"revision":1041,"compact-revision":798}
	{"level":"info","ts":"2024-07-17T02:16:28.564435Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1285}
	{"level":"info","ts":"2024-07-17T02:16:28.567476Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1285,"took":"2.750166ms","hash":1114468341,"current-db-size-bytes":2138112,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1122304,"current-db-size-in-use":"1.1 MB"}
	{"level":"info","ts":"2024-07-17T02:16:28.567522Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1114468341,"revision":1285,"compact-revision":1041}
	
	
	==> kernel <==
	 02:16:38 up 20 min,  0 users,  load average: 0.22, 0.19, 0.16
	Linux embed-certs-940222 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509] <==
	I0717 02:11:30.928431       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:12:30.928107       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 02:12:30.928297       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 02:12:30.928343       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:12:30.929220       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 02:12:30.929295       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 02:12:30.930448       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:14:30.929270       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 02:14:30.929350       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 02:14:30.929366       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:14:30.930553       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 02:14:30.930627       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 02:14:30.930634       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:16:29.930590       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 02:16:29.930714       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0717 02:16:30.931944       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 02:16:30.931998       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 02:16:30.932007       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:16:30.932062       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 02:16:30.932184       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 02:16:30.933470       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060] <==
	I0717 02:10:43.396786       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:11:12.913258       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:11:13.404169       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:11:42.919145       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:11:43.413768       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:12:12.924451       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:12:13.421647       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:12:42.929163       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:12:43.429964       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 02:12:46.315831       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="254.211µs"
	I0717 02:12:58.315727       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="55.231µs"
	E0717 02:13:12.935354       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:13:13.440511       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:13:42.942345       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:13:43.449589       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:14:12.950134       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:14:13.457729       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:14:42.955003       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:14:43.464746       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:15:12.960730       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:15:13.475980       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:15:42.967090       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:15:43.483770       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:16:12.972338       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 02:16:13.495835       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de] <==
	I0717 01:56:30.875744       1 server_linux.go:69] "Using iptables proxy"
	I0717 01:56:30.907135       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.225"]
	I0717 01:56:30.977148       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 01:56:30.977274       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 01:56:30.977348       1 server_linux.go:165] "Using iptables Proxier"
	I0717 01:56:30.980199       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 01:56:30.980523       1 server.go:872] "Version info" version="v1.30.2"
	I0717 01:56:30.980624       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:56:30.982386       1 config.go:192] "Starting service config controller"
	I0717 01:56:30.982496       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 01:56:30.982547       1 config.go:101] "Starting endpoint slice config controller"
	I0717 01:56:30.982565       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 01:56:30.984635       1 config.go:319] "Starting node config controller"
	I0717 01:56:30.984667       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 01:56:31.082664       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 01:56:31.082743       1 shared_informer.go:320] Caches are synced for service config
	I0717 01:56:31.084839       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745] <==
	I0717 01:56:27.681269       1 serving.go:380] Generated self-signed cert in-memory
	W0717 01:56:29.886758       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 01:56:29.887055       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 01:56:29.887089       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 01:56:29.887158       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 01:56:29.927020       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0717 01:56:29.927128       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 01:56:29.932103       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0717 01:56:29.932217       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 01:56:29.932241       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 01:56:29.932256       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 01:56:30.032393       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 02:14:26 embed-certs-940222 kubelet[933]: E0717 02:14:26.323510     933 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:14:26 embed-certs-940222 kubelet[933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:14:26 embed-certs-940222 kubelet[933]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:14:26 embed-certs-940222 kubelet[933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:14:26 embed-certs-940222 kubelet[933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:14:32 embed-certs-940222 kubelet[933]: E0717 02:14:32.299614     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rhp7b" podUID="07ffb1fa-240e-4c40-9ce4-93a1b51e179b"
	Jul 17 02:14:43 embed-certs-940222 kubelet[933]: E0717 02:14:43.298366     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rhp7b" podUID="07ffb1fa-240e-4c40-9ce4-93a1b51e179b"
	Jul 17 02:14:57 embed-certs-940222 kubelet[933]: E0717 02:14:57.299099     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rhp7b" podUID="07ffb1fa-240e-4c40-9ce4-93a1b51e179b"
	Jul 17 02:15:11 embed-certs-940222 kubelet[933]: E0717 02:15:11.299213     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rhp7b" podUID="07ffb1fa-240e-4c40-9ce4-93a1b51e179b"
	Jul 17 02:15:23 embed-certs-940222 kubelet[933]: E0717 02:15:23.299245     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rhp7b" podUID="07ffb1fa-240e-4c40-9ce4-93a1b51e179b"
	Jul 17 02:15:26 embed-certs-940222 kubelet[933]: E0717 02:15:26.326963     933 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:15:26 embed-certs-940222 kubelet[933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:15:26 embed-certs-940222 kubelet[933]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:15:26 embed-certs-940222 kubelet[933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:15:26 embed-certs-940222 kubelet[933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:15:38 embed-certs-940222 kubelet[933]: E0717 02:15:38.301735     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rhp7b" podUID="07ffb1fa-240e-4c40-9ce4-93a1b51e179b"
	Jul 17 02:15:52 embed-certs-940222 kubelet[933]: E0717 02:15:52.298350     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rhp7b" podUID="07ffb1fa-240e-4c40-9ce4-93a1b51e179b"
	Jul 17 02:16:04 embed-certs-940222 kubelet[933]: E0717 02:16:04.298470     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rhp7b" podUID="07ffb1fa-240e-4c40-9ce4-93a1b51e179b"
	Jul 17 02:16:17 embed-certs-940222 kubelet[933]: E0717 02:16:17.298796     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rhp7b" podUID="07ffb1fa-240e-4c40-9ce4-93a1b51e179b"
	Jul 17 02:16:26 embed-certs-940222 kubelet[933]: E0717 02:16:26.327159     933 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:16:26 embed-certs-940222 kubelet[933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:16:26 embed-certs-940222 kubelet[933]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:16:26 embed-certs-940222 kubelet[933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:16:26 embed-certs-940222 kubelet[933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:16:31 embed-certs-940222 kubelet[933]: E0717 02:16:31.298552     933 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rhp7b" podUID="07ffb1fa-240e-4c40-9ce4-93a1b51e179b"
	
	
	==> storage-provisioner [51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20] <==
	I0717 01:56:30.769370       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0717 01:57:00.773605       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465] <==
	I0717 01:57:01.617089       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 01:57:01.625960       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 01:57:01.626056       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 01:57:01.640791       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 01:57:01.640991       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-940222_c3536443-ec29-447d-af1d-f6bbbbd45845!
	I0717 01:57:01.643810       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f05084ec-ac5f-4bf7-b888-599003faf3d0", APIVersion:"v1", ResourceVersion:"563", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-940222_c3536443-ec29-447d-af1d-f6bbbbd45845 became leader
	I0717 01:57:01.742297       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-940222_c3536443-ec29-447d-af1d-f6bbbbd45845!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-940222 -n embed-certs-940222
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-940222 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-rhp7b
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-940222 describe pod metrics-server-569cc877fc-rhp7b
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-940222 describe pod metrics-server-569cc877fc-rhp7b: exit status 1 (60.529429ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-rhp7b" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-940222 describe pod metrics-server-569cc877fc-rhp7b: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (395.60s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (263.59s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-391501 -n no-preload-391501
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-17 02:15:41.027018287 +0000 UTC m=+6840.074899401
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-391501 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-391501 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.338µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-391501 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-391501 -n no-preload-391501
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-391501 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-391501 logs -n 25: (1.28044567s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-894370 sudo                                  | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo                                  | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo find                             | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo crio                             | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-894370                                       | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	| delete  | -p                                                     | disable-driver-mounts-255698 | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | disable-driver-mounts-255698                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:48 UTC |
	|         | default-k8s-diff-port-738184                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-940222            | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-940222                                  | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-738184  | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC | 17 Jul 24 01:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC |                     |
	|         | default-k8s-diff-port-738184                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-391501             | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC | 17 Jul 24 01:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-391501                                   | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-940222                 | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-901761        | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-940222                                  | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC | 17 Jul 24 02:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-738184       | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-391501                  | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:59 UTC |
	|         | default-k8s-diff-port-738184                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p no-preload-391501 --memory=2200                     | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 02:02 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-901761                              | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-901761             | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-901761                              | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-901761                              | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 02:15 UTC | 17 Jul 24 02:15 UTC |
	| start   | -p newest-cni-386113 --memory=2200 --alsologtostderr   | newest-cni-386113            | jenkins | v1.33.1 | 17 Jul 24 02:15 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 02:15:36
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 02:15:36.309210   78137 out.go:291] Setting OutFile to fd 1 ...
	I0717 02:15:36.309323   78137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 02:15:36.309330   78137 out.go:304] Setting ErrFile to fd 2...
	I0717 02:15:36.309336   78137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 02:15:36.309597   78137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 02:15:36.310242   78137 out.go:298] Setting JSON to false
	I0717 02:15:36.311376   78137 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7078,"bootTime":1721175458,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 02:15:36.311459   78137 start.go:139] virtualization: kvm guest
	I0717 02:15:36.314060   78137 out.go:177] * [newest-cni-386113] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 02:15:36.315986   78137 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 02:15:36.316004   78137 notify.go:220] Checking for updates...
	I0717 02:15:36.318421   78137 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 02:15:36.319681   78137 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 02:15:36.320997   78137 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 02:15:36.322264   78137 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 02:15:36.323582   78137 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 02:15:36.325358   78137 config.go:182] Loaded profile config "default-k8s-diff-port-738184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 02:15:36.325460   78137 config.go:182] Loaded profile config "embed-certs-940222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 02:15:36.325562   78137 config.go:182] Loaded profile config "no-preload-391501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 02:15:36.325655   78137 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 02:15:36.362510   78137 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 02:15:36.363788   78137 start.go:297] selected driver: kvm2
	I0717 02:15:36.363810   78137 start.go:901] validating driver "kvm2" against <nil>
	I0717 02:15:36.363822   78137 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 02:15:36.364477   78137 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 02:15:36.364551   78137 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19264-3908/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 02:15:36.380159   78137 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 02:15:36.380219   78137 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0717 02:15:36.380250   78137 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0717 02:15:36.380457   78137 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0717 02:15:36.380482   78137 cni.go:84] Creating CNI manager for ""
	I0717 02:15:36.380489   78137 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 02:15:36.380495   78137 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 02:15:36.380557   78137 start.go:340] cluster config:
	{Name:newest-cni-386113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-386113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 02:15:36.380649   78137 iso.go:125] acquiring lock: {Name:mk1e382ea32906eee794c9b90164432d2562b456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 02:15:36.382398   78137 out.go:177] * Starting "newest-cni-386113" primary control-plane node in "newest-cni-386113" cluster
	I0717 02:15:36.383829   78137 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 02:15:36.383875   78137 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0717 02:15:36.383886   78137 cache.go:56] Caching tarball of preloaded images
	I0717 02:15:36.384007   78137 preload.go:172] Found /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 02:15:36.384023   78137 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0717 02:15:36.384166   78137 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/newest-cni-386113/config.json ...
	I0717 02:15:36.384195   78137 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/newest-cni-386113/config.json: {Name:mk516b2c794d7d1b7559815c6310f774da27adc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 02:15:36.384348   78137 start.go:360] acquireMachinesLock for newest-cni-386113: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 02:15:36.384378   78137 start.go:364] duration metric: took 17.626µs to acquireMachinesLock for "newest-cni-386113"
	I0717 02:15:36.384395   78137 start.go:93] Provisioning new machine with config: &{Name:newest-cni-386113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-386113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 02:15:36.384466   78137 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 02:15:36.386288   78137 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 02:15:36.386441   78137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:15:36.386501   78137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:15:36.401964   78137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37541
	I0717 02:15:36.402486   78137 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:15:36.403131   78137 main.go:141] libmachine: Using API Version  1
	I0717 02:15:36.403154   78137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:15:36.403495   78137 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:15:36.403740   78137 main.go:141] libmachine: (newest-cni-386113) Calling .GetMachineName
	I0717 02:15:36.403879   78137 main.go:141] libmachine: (newest-cni-386113) Calling .DriverName
	I0717 02:15:36.404034   78137 start.go:159] libmachine.API.Create for "newest-cni-386113" (driver="kvm2")
	I0717 02:15:36.404063   78137 client.go:168] LocalClient.Create starting
	I0717 02:15:36.404097   78137 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem
	I0717 02:15:36.404146   78137 main.go:141] libmachine: Decoding PEM data...
	I0717 02:15:36.404168   78137 main.go:141] libmachine: Parsing certificate...
	I0717 02:15:36.404232   78137 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem
	I0717 02:15:36.404260   78137 main.go:141] libmachine: Decoding PEM data...
	I0717 02:15:36.404278   78137 main.go:141] libmachine: Parsing certificate...
	I0717 02:15:36.404306   78137 main.go:141] libmachine: Running pre-create checks...
	I0717 02:15:36.404319   78137 main.go:141] libmachine: (newest-cni-386113) Calling .PreCreateCheck
	I0717 02:15:36.404681   78137 main.go:141] libmachine: (newest-cni-386113) Calling .GetConfigRaw
	I0717 02:15:36.405081   78137 main.go:141] libmachine: Creating machine...
	I0717 02:15:36.405096   78137 main.go:141] libmachine: (newest-cni-386113) Calling .Create
	I0717 02:15:36.405225   78137 main.go:141] libmachine: (newest-cni-386113) Creating KVM machine...
	I0717 02:15:36.406696   78137 main.go:141] libmachine: (newest-cni-386113) DBG | found existing default KVM network
	I0717 02:15:36.407807   78137 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:15:36.407661   78159 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:6d:c9:19} reservation:<nil>}
	I0717 02:15:36.408862   78137 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:15:36.408752   78159 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000288970}
	I0717 02:15:36.408890   78137 main.go:141] libmachine: (newest-cni-386113) DBG | created network xml: 
	I0717 02:15:36.408902   78137 main.go:141] libmachine: (newest-cni-386113) DBG | <network>
	I0717 02:15:36.408910   78137 main.go:141] libmachine: (newest-cni-386113) DBG |   <name>mk-newest-cni-386113</name>
	I0717 02:15:36.408917   78137 main.go:141] libmachine: (newest-cni-386113) DBG |   <dns enable='no'/>
	I0717 02:15:36.408925   78137 main.go:141] libmachine: (newest-cni-386113) DBG |   
	I0717 02:15:36.408932   78137 main.go:141] libmachine: (newest-cni-386113) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0717 02:15:36.408944   78137 main.go:141] libmachine: (newest-cni-386113) DBG |     <dhcp>
	I0717 02:15:36.408953   78137 main.go:141] libmachine: (newest-cni-386113) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0717 02:15:36.408963   78137 main.go:141] libmachine: (newest-cni-386113) DBG |     </dhcp>
	I0717 02:15:36.408969   78137 main.go:141] libmachine: (newest-cni-386113) DBG |   </ip>
	I0717 02:15:36.408975   78137 main.go:141] libmachine: (newest-cni-386113) DBG |   
	I0717 02:15:36.409002   78137 main.go:141] libmachine: (newest-cni-386113) DBG | </network>
	I0717 02:15:36.409023   78137 main.go:141] libmachine: (newest-cni-386113) DBG | 
	I0717 02:15:36.414522   78137 main.go:141] libmachine: (newest-cni-386113) DBG | trying to create private KVM network mk-newest-cni-386113 192.168.50.0/24...
	I0717 02:15:36.486855   78137 main.go:141] libmachine: (newest-cni-386113) DBG | private KVM network mk-newest-cni-386113 192.168.50.0/24 created
	I0717 02:15:36.486877   78137 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:15:36.486821   78159 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 02:15:36.486890   78137 main.go:141] libmachine: (newest-cni-386113) Setting up store path in /home/jenkins/minikube-integration/19264-3908/.minikube/machines/newest-cni-386113 ...
	I0717 02:15:36.486913   78137 main.go:141] libmachine: (newest-cni-386113) Building disk image from file:///home/jenkins/minikube-integration/19264-3908/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 02:15:36.487030   78137 main.go:141] libmachine: (newest-cni-386113) Downloading /home/jenkins/minikube-integration/19264-3908/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19264-3908/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 02:15:36.720059   78137 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:15:36.719902   78159 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/newest-cni-386113/id_rsa...
	I0717 02:15:36.803102   78137 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:15:36.802991   78159 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/newest-cni-386113/newest-cni-386113.rawdisk...
	I0717 02:15:36.803130   78137 main.go:141] libmachine: (newest-cni-386113) DBG | Writing magic tar header
	I0717 02:15:36.803147   78137 main.go:141] libmachine: (newest-cni-386113) DBG | Writing SSH key tar header
	I0717 02:15:36.803160   78137 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:15:36.803109   78159 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19264-3908/.minikube/machines/newest-cni-386113 ...
	I0717 02:15:36.803204   78137 main.go:141] libmachine: (newest-cni-386113) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/newest-cni-386113
	I0717 02:15:36.803266   78137 main.go:141] libmachine: (newest-cni-386113) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube/machines/newest-cni-386113 (perms=drwx------)
	I0717 02:15:36.803287   78137 main.go:141] libmachine: (newest-cni-386113) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube/machines
	I0717 02:15:36.803298   78137 main.go:141] libmachine: (newest-cni-386113) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube/machines (perms=drwxr-xr-x)
	I0717 02:15:36.803317   78137 main.go:141] libmachine: (newest-cni-386113) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908/.minikube (perms=drwxr-xr-x)
	I0717 02:15:36.803328   78137 main.go:141] libmachine: (newest-cni-386113) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 02:15:36.803335   78137 main.go:141] libmachine: (newest-cni-386113) Setting executable bit set on /home/jenkins/minikube-integration/19264-3908 (perms=drwxrwxr-x)
	I0717 02:15:36.803345   78137 main.go:141] libmachine: (newest-cni-386113) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 02:15:36.803361   78137 main.go:141] libmachine: (newest-cni-386113) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 02:15:36.803371   78137 main.go:141] libmachine: (newest-cni-386113) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19264-3908
	I0717 02:15:36.803376   78137 main.go:141] libmachine: (newest-cni-386113) Creating domain...
	I0717 02:15:36.803391   78137 main.go:141] libmachine: (newest-cni-386113) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 02:15:36.803400   78137 main.go:141] libmachine: (newest-cni-386113) DBG | Checking permissions on dir: /home/jenkins
	I0717 02:15:36.803413   78137 main.go:141] libmachine: (newest-cni-386113) DBG | Checking permissions on dir: /home
	I0717 02:15:36.803423   78137 main.go:141] libmachine: (newest-cni-386113) DBG | Skipping /home - not owner
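
The "Checking permissions on dir" / "Skipping /home - not owner" lines above are a walk up the parent directories, adding the owner-executable bit wherever the current user owns the directory so the machine path stays traversable. A rough standard-library sketch of that walk (Linux-only, since it reads the owner UID from syscall.Stat_t; fixOwnerExec and its arguments are illustrative names, not minikube's):

package main

import (
	"log"
	"os"
	"path/filepath"
	"syscall"
)

// fixOwnerExec walks from dir up to stopAt (inclusive), setting the owner
// executable bit on each directory the current user owns and logging a skip
// for the rest, in the spirit of the log lines above.
func fixOwnerExec(dir, stopAt string) error {
	for {
		info, err := os.Stat(dir)
		if err != nil {
			return err
		}
		if st, ok := info.Sys().(*syscall.Stat_t); ok && int(st.Uid) != os.Getuid() {
			log.Printf("Skipping %s - not owner", dir)
		} else if err := os.Chmod(dir, info.Mode()|0o100); err != nil {
			return err
		}
		if dir == stopAt || dir == filepath.Dir(dir) {
			return nil
		}
		dir = filepath.Dir(dir)
	}
}

func main() {
	// Paths taken from the log above; on another host Stat simply fails.
	if err := fixOwnerExec("/home/jenkins/minikube-integration/19264-3908/.minikube/machines", "/home"); err != nil {
		log.Fatal(err)
	}
}
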
	I0717 02:15:36.804508   78137 main.go:141] libmachine: (newest-cni-386113) define libvirt domain using xml: 
	I0717 02:15:36.804530   78137 main.go:141] libmachine: (newest-cni-386113) <domain type='kvm'>
	I0717 02:15:36.804543   78137 main.go:141] libmachine: (newest-cni-386113)   <name>newest-cni-386113</name>
	I0717 02:15:36.804551   78137 main.go:141] libmachine: (newest-cni-386113)   <memory unit='MiB'>2200</memory>
	I0717 02:15:36.804560   78137 main.go:141] libmachine: (newest-cni-386113)   <vcpu>2</vcpu>
	I0717 02:15:36.804570   78137 main.go:141] libmachine: (newest-cni-386113)   <features>
	I0717 02:15:36.804579   78137 main.go:141] libmachine: (newest-cni-386113)     <acpi/>
	I0717 02:15:36.804585   78137 main.go:141] libmachine: (newest-cni-386113)     <apic/>
	I0717 02:15:36.804607   78137 main.go:141] libmachine: (newest-cni-386113)     <pae/>
	I0717 02:15:36.804621   78137 main.go:141] libmachine: (newest-cni-386113)     
	I0717 02:15:36.804638   78137 main.go:141] libmachine: (newest-cni-386113)   </features>
	I0717 02:15:36.804648   78137 main.go:141] libmachine: (newest-cni-386113)   <cpu mode='host-passthrough'>
	I0717 02:15:36.804657   78137 main.go:141] libmachine: (newest-cni-386113)   
	I0717 02:15:36.804672   78137 main.go:141] libmachine: (newest-cni-386113)   </cpu>
	I0717 02:15:36.804683   78137 main.go:141] libmachine: (newest-cni-386113)   <os>
	I0717 02:15:36.804697   78137 main.go:141] libmachine: (newest-cni-386113)     <type>hvm</type>
	I0717 02:15:36.804710   78137 main.go:141] libmachine: (newest-cni-386113)     <boot dev='cdrom'/>
	I0717 02:15:36.804720   78137 main.go:141] libmachine: (newest-cni-386113)     <boot dev='hd'/>
	I0717 02:15:36.804732   78137 main.go:141] libmachine: (newest-cni-386113)     <bootmenu enable='no'/>
	I0717 02:15:36.804742   78137 main.go:141] libmachine: (newest-cni-386113)   </os>
	I0717 02:15:36.804750   78137 main.go:141] libmachine: (newest-cni-386113)   <devices>
	I0717 02:15:36.804760   78137 main.go:141] libmachine: (newest-cni-386113)     <disk type='file' device='cdrom'>
	I0717 02:15:36.804792   78137 main.go:141] libmachine: (newest-cni-386113)       <source file='/home/jenkins/minikube-integration/19264-3908/.minikube/machines/newest-cni-386113/boot2docker.iso'/>
	I0717 02:15:36.804809   78137 main.go:141] libmachine: (newest-cni-386113)       <target dev='hdc' bus='scsi'/>
	I0717 02:15:36.804815   78137 main.go:141] libmachine: (newest-cni-386113)       <readonly/>
	I0717 02:15:36.804821   78137 main.go:141] libmachine: (newest-cni-386113)     </disk>
	I0717 02:15:36.804826   78137 main.go:141] libmachine: (newest-cni-386113)     <disk type='file' device='disk'>
	I0717 02:15:36.804834   78137 main.go:141] libmachine: (newest-cni-386113)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 02:15:36.804845   78137 main.go:141] libmachine: (newest-cni-386113)       <source file='/home/jenkins/minikube-integration/19264-3908/.minikube/machines/newest-cni-386113/newest-cni-386113.rawdisk'/>
	I0717 02:15:36.804855   78137 main.go:141] libmachine: (newest-cni-386113)       <target dev='hda' bus='virtio'/>
	I0717 02:15:36.804878   78137 main.go:141] libmachine: (newest-cni-386113)     </disk>
	I0717 02:15:36.804907   78137 main.go:141] libmachine: (newest-cni-386113)     <interface type='network'>
	I0717 02:15:36.804917   78137 main.go:141] libmachine: (newest-cni-386113)       <source network='mk-newest-cni-386113'/>
	I0717 02:15:36.804922   78137 main.go:141] libmachine: (newest-cni-386113)       <model type='virtio'/>
	I0717 02:15:36.804928   78137 main.go:141] libmachine: (newest-cni-386113)     </interface>
	I0717 02:15:36.804937   78137 main.go:141] libmachine: (newest-cni-386113)     <interface type='network'>
	I0717 02:15:36.804943   78137 main.go:141] libmachine: (newest-cni-386113)       <source network='default'/>
	I0717 02:15:36.804948   78137 main.go:141] libmachine: (newest-cni-386113)       <model type='virtio'/>
	I0717 02:15:36.804954   78137 main.go:141] libmachine: (newest-cni-386113)     </interface>
	I0717 02:15:36.804959   78137 main.go:141] libmachine: (newest-cni-386113)     <serial type='pty'>
	I0717 02:15:36.804964   78137 main.go:141] libmachine: (newest-cni-386113)       <target port='0'/>
	I0717 02:15:36.804970   78137 main.go:141] libmachine: (newest-cni-386113)     </serial>
	I0717 02:15:36.804976   78137 main.go:141] libmachine: (newest-cni-386113)     <console type='pty'>
	I0717 02:15:36.804986   78137 main.go:141] libmachine: (newest-cni-386113)       <target type='serial' port='0'/>
	I0717 02:15:36.805008   78137 main.go:141] libmachine: (newest-cni-386113)     </console>
	I0717 02:15:36.805015   78137 main.go:141] libmachine: (newest-cni-386113)     <rng model='virtio'>
	I0717 02:15:36.805021   78137 main.go:141] libmachine: (newest-cni-386113)       <backend model='random'>/dev/random</backend>
	I0717 02:15:36.805025   78137 main.go:141] libmachine: (newest-cni-386113)     </rng>
	I0717 02:15:36.805030   78137 main.go:141] libmachine: (newest-cni-386113)     
	I0717 02:15:36.805036   78137 main.go:141] libmachine: (newest-cni-386113)     
	I0717 02:15:36.805041   78137 main.go:141] libmachine: (newest-cni-386113)   </devices>
	I0717 02:15:36.805047   78137 main.go:141] libmachine: (newest-cni-386113) </domain>
	I0717 02:15:36.805054   78137 main.go:141] libmachine: (newest-cni-386113) 
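
With the domain XML above in hand, the remaining step is the libvirt equivalent of `virsh define` followed by `virsh start`. A short continuation of the earlier network sketch (reusing its *libvirt.Connect and additionally importing fmt; defineAndStartDomain is an illustrative name):

// defineAndStartDomain makes the domain persistent from the XML printed
// above, then boots it.
func defineAndStartDomain(conn *libvirt.Connect, domainXML string) error {
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return fmt.Errorf("define domain: %w", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		return fmt.Errorf("start domain: %w", err)
	}
	return nil
}
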
	I0717 02:15:36.809740   78137 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:fb:3b:46 in network default
	I0717 02:15:36.810518   78137 main.go:141] libmachine: (newest-cni-386113) Ensuring networks are active...
	I0717 02:15:36.810546   78137 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:15:36.811178   78137 main.go:141] libmachine: (newest-cni-386113) Ensuring network default is active
	I0717 02:15:36.811494   78137 main.go:141] libmachine: (newest-cni-386113) Ensuring network mk-newest-cni-386113 is active
	I0717 02:15:36.811989   78137 main.go:141] libmachine: (newest-cni-386113) Getting domain xml...
	I0717 02:15:36.812768   78137 main.go:141] libmachine: (newest-cni-386113) Creating domain...
	I0717 02:15:38.052836   78137 main.go:141] libmachine: (newest-cni-386113) Waiting to get IP...
	I0717 02:15:38.053914   78137 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:15:38.054334   78137 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:15:38.054359   78137 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:15:38.054308   78159 retry.go:31] will retry after 259.097072ms: waiting for machine to come up
	I0717 02:15:38.314830   78137 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:15:38.315420   78137 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:15:38.315449   78137 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:15:38.315370   78159 retry.go:31] will retry after 339.912856ms: waiting for machine to come up
	I0717 02:15:38.656733   78137 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:15:38.657148   78137 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:15:38.657184   78137 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:15:38.657117   78159 retry.go:31] will retry after 301.831172ms: waiting for machine to come up
	I0717 02:15:38.960550   78137 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:15:38.961005   78137 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:15:38.961030   78137 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:15:38.960958   78159 retry.go:31] will retry after 494.495078ms: waiting for machine to come up
	I0717 02:15:39.457379   78137 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:15:39.457899   78137 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:15:39.457927   78137 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:15:39.457832   78159 retry.go:31] will retry after 622.121572ms: waiting for machine to come up
	I0717 02:15:40.081017   78137 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:15:40.081443   78137 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:15:40.081467   78137 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:15:40.081406   78159 retry.go:31] will retry after 884.184483ms: waiting for machine to come up
	I0717 02:15:40.966921   78137 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:15:40.967383   78137 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:15:40.967406   78137 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:15:40.967337   78159 retry.go:31] will retry after 742.712658ms: waiting for machine to come up
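
The retry.go lines above are a polling loop with a small, jittered, slowly growing delay while the new domain waits for a DHCP lease. A self-contained sketch of that shape (lookupIP is a hypothetical stand-in for reading the private network's DHCP leases and matching the domain's MAC address):

package main

import (
	"errors"
	"log"
	"math/rand"
	"time"
)

// waitForIP polls lookupIP until it reports an address or the timeout
// expires, sleeping a jittered, slowly growing delay between attempts.
func waitForIP(lookupIP func() (string, bool), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookupIP(); ok {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		log.Printf("will retry after %v: waiting for machine to come up", sleep)
		time.Sleep(sleep)
		if delay < 2*time.Second {
			delay += 250 * time.Millisecond
		}
	}
	return "", errors.New("machine did not get an IP before the timeout")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, bool) {
		attempts++
		if attempts < 4 {
			return "", false // no DHCP lease yet
		}
		return "192.168.50.10", true
	}, 30*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("machine came up with IP %s", ip)
}
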
	
	
	==> CRI-O <==
	Jul 17 02:15:41 no-preload-391501 crio[717]: time="2024-07-17 02:15:41.615362344Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182541615331712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80a38d52-be14-40cc-854c-eb53c86f5313 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:15:41 no-preload-391501 crio[717]: time="2024-07-17 02:15:41.616397018Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e36cd0d0-d547-4ed1-9668-38136776928a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:15:41 no-preload-391501 crio[717]: time="2024-07-17 02:15:41.616473964Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e36cd0d0-d547-4ed1-9668-38136776928a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:15:41 no-preload-391501 crio[717]: time="2024-07-17 02:15:41.616743607Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f18a686c2ded98388af30bee85e8a5f3b6e2446fa37496a4ceea8949072836d,PodSandboxId:3fc741fec0fdb0783cb37a8a0ff71e22d28bd7663b3ea094a37f2e22ce431883,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721181727823644618,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 742baa9b-d48e-4be9-8c33-64d42e1ff169,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86ee513a721cdef30ecd3f51ebea2df9235862fb847cb891640cefb4ac6edecf,PodSandboxId:53336dc0c8a6326312f116c4f3ce9c3647c56a2eb68f3df197215a01f1c6276a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181726704726585,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-5lstd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b74210-7395-4a48-8e1b-b49fb2faea43,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f520e58db1d489bffd43f419e8e6e031d0057ee6a826d4ae6b04dda73b06cf37,PodSandboxId:ee2dbce30242fea4ef282127806402d3f905498ff314804f112782a237637258,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181726590376686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-tn5jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48
2276d3-bfe2-4538-9dfe-a2a87a02182c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dc3b9c490ff36d5b586cf0c5325e00bab05e22fb8939f2a8e55014fe5d917af,PodSandboxId:8c940c103c9a8fc9a970c9dda621791325ab427c7960e38b256fcde03691b213,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721181725861846880,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gl7th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 320d9fae-f5b8-47bd-afc0-88e07e23157a,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24516073158b7e84c967e58397bde021ef567c50d65b73a8726840a54140aa7,PodSandboxId:df3f228f44f4011e0be5e39f35c27aed6c9bfd18f41972a7c2fa95e43150cd8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721181714781824969,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 344acf7ebdaa0c036f41562765095ccb,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7528a2702168862e7a35193d04849f0162f3a3584a0093be8e9062c8f4cfd736,PodSandboxId:f2f2dc234d47b3355cbddf229124d5891d347274c93c501ba7e9e11cb42d2a51,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721181714741409744,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e64be7ee0f24a232f3758d919f454e09,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc815ffb334b863e88dba396a6e7448f3629bcb9fcfaa8f4be8928dc60d1ac0,PodSandboxId:0be2cbf7dc74177df671548dde0e9aae7d85f208977985af47c9902547f1fec9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721181714672241645,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b898b4cfb281d3535c0088e58000445,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618d36b0a982d3b72b5fae6920d7ecb87cf6f321d739d7ca78c42dd4a4807c8c,PodSandboxId:d12d41c04023ba058550ea77bfb064eb0526233f4480de672922d23db0209356,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721181714628410717,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3c9ab6639b2fe2032b78777103debd4,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59e9bc3378bf74596204bfe5c7bd232684b64be74f90b5dbae477205a2b4dea,PodSandboxId:6d62d4047e73fe1c1afd33d64a404d919c4c1d824c8fb46a7ab879ce3186483c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721181381945218515,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b898b4cfb281d3535c0088e58000445,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e36cd0d0-d547-4ed1-9668-38136776928a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:15:41 no-preload-391501 crio[717]: time="2024-07-17 02:15:41.657701109Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7c5f135e-4209-44f5-a475-9e0aa056dd4d name=/runtime.v1.RuntimeService/Version
	Jul 17 02:15:41 no-preload-391501 crio[717]: time="2024-07-17 02:15:41.657830627Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7c5f135e-4209-44f5-a475-9e0aa056dd4d name=/runtime.v1.RuntimeService/Version
	Jul 17 02:15:41 no-preload-391501 crio[717]: time="2024-07-17 02:15:41.659297643Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d9878f7a-3bc5-4fce-a82e-20dbd331455c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:15:41 no-preload-391501 crio[717]: time="2024-07-17 02:15:41.659904034Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182541659877147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9878f7a-3bc5-4fce-a82e-20dbd331455c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:15:41 no-preload-391501 crio[717]: time="2024-07-17 02:15:41.660404293Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=249fac0f-e5e7-4797-8706-8d36a8664188 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:15:41 no-preload-391501 crio[717]: time="2024-07-17 02:15:41.660484896Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=249fac0f-e5e7-4797-8706-8d36a8664188 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:15:41 no-preload-391501 crio[717]: time="2024-07-17 02:15:41.660789966Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f18a686c2ded98388af30bee85e8a5f3b6e2446fa37496a4ceea8949072836d,PodSandboxId:3fc741fec0fdb0783cb37a8a0ff71e22d28bd7663b3ea094a37f2e22ce431883,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721181727823644618,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 742baa9b-d48e-4be9-8c33-64d42e1ff169,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86ee513a721cdef30ecd3f51ebea2df9235862fb847cb891640cefb4ac6edecf,PodSandboxId:53336dc0c8a6326312f116c4f3ce9c3647c56a2eb68f3df197215a01f1c6276a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181726704726585,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-5lstd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b74210-7395-4a48-8e1b-b49fb2faea43,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f520e58db1d489bffd43f419e8e6e031d0057ee6a826d4ae6b04dda73b06cf37,PodSandboxId:ee2dbce30242fea4ef282127806402d3f905498ff314804f112782a237637258,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181726590376686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-tn5jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48
2276d3-bfe2-4538-9dfe-a2a87a02182c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dc3b9c490ff36d5b586cf0c5325e00bab05e22fb8939f2a8e55014fe5d917af,PodSandboxId:8c940c103c9a8fc9a970c9dda621791325ab427c7960e38b256fcde03691b213,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721181725861846880,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gl7th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 320d9fae-f5b8-47bd-afc0-88e07e23157a,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24516073158b7e84c967e58397bde021ef567c50d65b73a8726840a54140aa7,PodSandboxId:df3f228f44f4011e0be5e39f35c27aed6c9bfd18f41972a7c2fa95e43150cd8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721181714781824969,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 344acf7ebdaa0c036f41562765095ccb,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7528a2702168862e7a35193d04849f0162f3a3584a0093be8e9062c8f4cfd736,PodSandboxId:f2f2dc234d47b3355cbddf229124d5891d347274c93c501ba7e9e11cb42d2a51,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721181714741409744,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e64be7ee0f24a232f3758d919f454e09,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc815ffb334b863e88dba396a6e7448f3629bcb9fcfaa8f4be8928dc60d1ac0,PodSandboxId:0be2cbf7dc74177df671548dde0e9aae7d85f208977985af47c9902547f1fec9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721181714672241645,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b898b4cfb281d3535c0088e58000445,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618d36b0a982d3b72b5fae6920d7ecb87cf6f321d739d7ca78c42dd4a4807c8c,PodSandboxId:d12d41c04023ba058550ea77bfb064eb0526233f4480de672922d23db0209356,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721181714628410717,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3c9ab6639b2fe2032b78777103debd4,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59e9bc3378bf74596204bfe5c7bd232684b64be74f90b5dbae477205a2b4dea,PodSandboxId:6d62d4047e73fe1c1afd33d64a404d919c4c1d824c8fb46a7ab879ce3186483c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721181381945218515,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b898b4cfb281d3535c0088e58000445,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=249fac0f-e5e7-4797-8706-8d36a8664188 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:15:41 no-preload-391501 crio[717]: time="2024-07-17 02:15:41.700112186Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=25260f08-a8f7-479d-9a6b-e198a59bb930 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:15:41 no-preload-391501 crio[717]: time="2024-07-17 02:15:41.700201716Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=25260f08-a8f7-479d-9a6b-e198a59bb930 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:15:41 no-preload-391501 crio[717]: time="2024-07-17 02:15:41.701308091Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=88b6c343-09a2-41bf-aae0-e4bcf98487c0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:15:41 no-preload-391501 crio[717]: time="2024-07-17 02:15:41.701884346Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182541701855124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=88b6c343-09a2-41bf-aae0-e4bcf98487c0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:15:41 no-preload-391501 crio[717]: time="2024-07-17 02:15:41.702499556Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f3ad8ce-2c8f-4e97-b74f-bab3a39ad6cd name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:15:41 no-preload-391501 crio[717]: time="2024-07-17 02:15:41.702620473Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f3ad8ce-2c8f-4e97-b74f-bab3a39ad6cd name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:15:41 no-preload-391501 crio[717]: time="2024-07-17 02:15:41.702892552Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f18a686c2ded98388af30bee85e8a5f3b6e2446fa37496a4ceea8949072836d,PodSandboxId:3fc741fec0fdb0783cb37a8a0ff71e22d28bd7663b3ea094a37f2e22ce431883,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721181727823644618,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 742baa9b-d48e-4be9-8c33-64d42e1ff169,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86ee513a721cdef30ecd3f51ebea2df9235862fb847cb891640cefb4ac6edecf,PodSandboxId:53336dc0c8a6326312f116c4f3ce9c3647c56a2eb68f3df197215a01f1c6276a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181726704726585,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-5lstd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b74210-7395-4a48-8e1b-b49fb2faea43,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f520e58db1d489bffd43f419e8e6e031d0057ee6a826d4ae6b04dda73b06cf37,PodSandboxId:ee2dbce30242fea4ef282127806402d3f905498ff314804f112782a237637258,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181726590376686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-tn5jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48
2276d3-bfe2-4538-9dfe-a2a87a02182c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dc3b9c490ff36d5b586cf0c5325e00bab05e22fb8939f2a8e55014fe5d917af,PodSandboxId:8c940c103c9a8fc9a970c9dda621791325ab427c7960e38b256fcde03691b213,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721181725861846880,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gl7th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 320d9fae-f5b8-47bd-afc0-88e07e23157a,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24516073158b7e84c967e58397bde021ef567c50d65b73a8726840a54140aa7,PodSandboxId:df3f228f44f4011e0be5e39f35c27aed6c9bfd18f41972a7c2fa95e43150cd8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721181714781824969,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 344acf7ebdaa0c036f41562765095ccb,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7528a2702168862e7a35193d04849f0162f3a3584a0093be8e9062c8f4cfd736,PodSandboxId:f2f2dc234d47b3355cbddf229124d5891d347274c93c501ba7e9e11cb42d2a51,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721181714741409744,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e64be7ee0f24a232f3758d919f454e09,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc815ffb334b863e88dba396a6e7448f3629bcb9fcfaa8f4be8928dc60d1ac0,PodSandboxId:0be2cbf7dc74177df671548dde0e9aae7d85f208977985af47c9902547f1fec9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721181714672241645,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b898b4cfb281d3535c0088e58000445,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618d36b0a982d3b72b5fae6920d7ecb87cf6f321d739d7ca78c42dd4a4807c8c,PodSandboxId:d12d41c04023ba058550ea77bfb064eb0526233f4480de672922d23db0209356,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721181714628410717,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3c9ab6639b2fe2032b78777103debd4,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59e9bc3378bf74596204bfe5c7bd232684b64be74f90b5dbae477205a2b4dea,PodSandboxId:6d62d4047e73fe1c1afd33d64a404d919c4c1d824c8fb46a7ab879ce3186483c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721181381945218515,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b898b4cfb281d3535c0088e58000445,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f3ad8ce-2c8f-4e97-b74f-bab3a39ad6cd name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:15:41 no-preload-391501 crio[717]: time="2024-07-17 02:15:41.737851188Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=89804659-aed4-453f-a19b-79ebeb53b716 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:15:41 no-preload-391501 crio[717]: time="2024-07-17 02:15:41.737921726Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=89804659-aed4-453f-a19b-79ebeb53b716 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:15:41 no-preload-391501 crio[717]: time="2024-07-17 02:15:41.739084613Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=72c057af-39f2-4a4a-8280-3cecbb44ea0a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:15:41 no-preload-391501 crio[717]: time="2024-07-17 02:15:41.739404264Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182541739382773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=72c057af-39f2-4a4a-8280-3cecbb44ea0a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:15:41 no-preload-391501 crio[717]: time="2024-07-17 02:15:41.739970014Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ac4014e-dd6b-4d70-b9e5-0edab4170547 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:15:41 no-preload-391501 crio[717]: time="2024-07-17 02:15:41.740042320Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ac4014e-dd6b-4d70-b9e5-0edab4170547 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:15:41 no-preload-391501 crio[717]: time="2024-07-17 02:15:41.740246154Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0f18a686c2ded98388af30bee85e8a5f3b6e2446fa37496a4ceea8949072836d,PodSandboxId:3fc741fec0fdb0783cb37a8a0ff71e22d28bd7663b3ea094a37f2e22ce431883,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721181727823644618,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 742baa9b-d48e-4be9-8c33-64d42e1ff169,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86ee513a721cdef30ecd3f51ebea2df9235862fb847cb891640cefb4ac6edecf,PodSandboxId:53336dc0c8a6326312f116c4f3ce9c3647c56a2eb68f3df197215a01f1c6276a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181726704726585,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-5lstd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71b74210-7395-4a48-8e1b-b49fb2faea43,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f520e58db1d489bffd43f419e8e6e031d0057ee6a826d4ae6b04dda73b06cf37,PodSandboxId:ee2dbce30242fea4ef282127806402d3f905498ff314804f112782a237637258,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721181726590376686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-tn5jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48
2276d3-bfe2-4538-9dfe-a2a87a02182c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dc3b9c490ff36d5b586cf0c5325e00bab05e22fb8939f2a8e55014fe5d917af,PodSandboxId:8c940c103c9a8fc9a970c9dda621791325ab427c7960e38b256fcde03691b213,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721181725861846880,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gl7th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 320d9fae-f5b8-47bd-afc0-88e07e23157a,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24516073158b7e84c967e58397bde021ef567c50d65b73a8726840a54140aa7,PodSandboxId:df3f228f44f4011e0be5e39f35c27aed6c9bfd18f41972a7c2fa95e43150cd8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721181714781824969,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 344acf7ebdaa0c036f41562765095ccb,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7528a2702168862e7a35193d04849f0162f3a3584a0093be8e9062c8f4cfd736,PodSandboxId:f2f2dc234d47b3355cbddf229124d5891d347274c93c501ba7e9e11cb42d2a51,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721181714741409744,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e64be7ee0f24a232f3758d919f454e09,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc815ffb334b863e88dba396a6e7448f3629bcb9fcfaa8f4be8928dc60d1ac0,PodSandboxId:0be2cbf7dc74177df671548dde0e9aae7d85f208977985af47c9902547f1fec9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721181714672241645,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b898b4cfb281d3535c0088e58000445,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:618d36b0a982d3b72b5fae6920d7ecb87cf6f321d739d7ca78c42dd4a4807c8c,PodSandboxId:d12d41c04023ba058550ea77bfb064eb0526233f4480de672922d23db0209356,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721181714628410717,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3c9ab6639b2fe2032b78777103debd4,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59e9bc3378bf74596204bfe5c7bd232684b64be74f90b5dbae477205a2b4dea,PodSandboxId:6d62d4047e73fe1c1afd33d64a404d919c4c1d824c8fb46a7ab879ce3186483c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721181381945218515,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-391501,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b898b4cfb281d3535c0088e58000445,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ac4014e-dd6b-4d70-b9e5-0edab4170547 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0f18a686c2ded       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   3fc741fec0fdb       storage-provisioner
	86ee513a721cd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   53336dc0c8a63       coredns-5cfdc65f69-5lstd
	f520e58db1d48       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   ee2dbce30242f       coredns-5cfdc65f69-tn5jv
	5dc3b9c490ff3       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   13 minutes ago      Running             kube-proxy                0                   8c940c103c9a8       kube-proxy-gl7th
	d24516073158b       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   13 minutes ago      Running             kube-scheduler            2                   df3f228f44f40       kube-scheduler-no-preload-391501
	7528a27021688       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   13 minutes ago      Running             etcd                      2                   f2f2dc234d47b       etcd-no-preload-391501
	4bc815ffb334b       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   13 minutes ago      Running             kube-apiserver            3                   0be2cbf7dc741       kube-apiserver-no-preload-391501
	618d36b0a982d       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   13 minutes ago      Running             kube-controller-manager   3                   d12d41c04023b       kube-controller-manager-no-preload-391501
	d59e9bc3378bf       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   19 minutes ago      Exited              kube-apiserver            2                   6d62d4047e73f       kube-apiserver-no-preload-391501
	
	
	==> coredns [86ee513a721cdef30ecd3f51ebea2df9235862fb847cb891640cefb4ac6edecf] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f520e58db1d489bffd43f419e8e6e031d0057ee6a826d4ae6b04dda73b06cf37] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-391501
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-391501
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=no-preload-391501
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T02_02_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 02:01:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-391501
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 02:15:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 02:12:24 +0000   Wed, 17 Jul 2024 02:01:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 02:12:24 +0000   Wed, 17 Jul 2024 02:01:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 02:12:24 +0000   Wed, 17 Jul 2024 02:01:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 02:12:24 +0000   Wed, 17 Jul 2024 02:01:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.174
	  Hostname:    no-preload-391501
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 09e77312cf804798aea80962cc815545
	  System UUID:                09e77312-cf80-4798-aea8-0962cc815545
	  Boot ID:                    78d77276-8a10-44e3-ab68-d9595b634af9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-5lstd                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-5cfdc65f69-tn5jv                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-no-preload-391501                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-no-preload-391501             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-no-preload-391501    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-gl7th                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-no-preload-391501             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-78fcd8795b-tnrht              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-391501 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-391501 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-391501 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node no-preload-391501 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node no-preload-391501 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node no-preload-391501 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-391501 event: Registered Node no-preload-391501 in Controller
	
	
	==> dmesg <==
	[  +0.040220] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.662883] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.271615] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.583388] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.419636] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.054949] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060604] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.186776] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.159208] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.311702] systemd-fstab-generator[702]: Ignoring "noauto" option for root device
	[ +15.427560] systemd-fstab-generator[1170]: Ignoring "noauto" option for root device
	[  +0.065053] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.585070] systemd-fstab-generator[1288]: Ignoring "noauto" option for root device
	[Jul17 01:56] kauditd_printk_skb: 90 callbacks suppressed
	[ +26.359295] kauditd_printk_skb: 85 callbacks suppressed
	[Jul17 02:01] kauditd_printk_skb: 1 callbacks suppressed
	[ +12.831185] systemd-fstab-generator[3061]: Ignoring "noauto" option for root device
	[  +0.064067] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.484799] systemd-fstab-generator[3391]: Ignoring "noauto" option for root device
	[  +0.097050] kauditd_printk_skb: 55 callbacks suppressed
	[Jul17 02:02] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.265336] systemd-fstab-generator[3593]: Ignoring "noauto" option for root device
	[  +6.592664] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [7528a2702168862e7a35193d04849f0162f3a3584a0093be8e9062c8f4cfd736] <==
	{"level":"info","ts":"2024-07-17T02:01:55.11539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2d81e878ac6904a4 switched to configuration voters=(3279157608688714916)"}
	{"level":"info","ts":"2024-07-17T02:01:55.11671Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"98a332d8ef0073ef","local-member-id":"2d81e878ac6904a4","added-peer-id":"2d81e878ac6904a4","added-peer-peer-urls":["https://192.168.61.174:2380"]}
	{"level":"info","ts":"2024-07-17T02:01:56.050642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2d81e878ac6904a4 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-17T02:01:56.050696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2d81e878ac6904a4 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-17T02:01:56.050713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2d81e878ac6904a4 received MsgPreVoteResp from 2d81e878ac6904a4 at term 1"}
	{"level":"info","ts":"2024-07-17T02:01:56.050726Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2d81e878ac6904a4 became candidate at term 2"}
	{"level":"info","ts":"2024-07-17T02:01:56.050732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2d81e878ac6904a4 received MsgVoteResp from 2d81e878ac6904a4 at term 2"}
	{"level":"info","ts":"2024-07-17T02:01:56.050743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2d81e878ac6904a4 became leader at term 2"}
	{"level":"info","ts":"2024-07-17T02:01:56.05075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2d81e878ac6904a4 elected leader 2d81e878ac6904a4 at term 2"}
	{"level":"info","ts":"2024-07-17T02:01:56.053088Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"2d81e878ac6904a4","local-member-attributes":"{Name:no-preload-391501 ClientURLs:[https://192.168.61.174:2379]}","request-path":"/0/members/2d81e878ac6904a4/attributes","cluster-id":"98a332d8ef0073ef","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T02:01:56.053367Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T02:01:56.053654Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T02:01:56.054047Z","caller":"etcdserver/server.go:2628","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T02:01:56.055945Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-17T02:01:56.056585Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T02:01:56.056621Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T02:01:56.057159Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-17T02:01:56.057691Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.174:2379"}
	{"level":"info","ts":"2024-07-17T02:01:56.057894Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T02:01:56.058513Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"98a332d8ef0073ef","local-member-id":"2d81e878ac6904a4","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T02:01:56.058704Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T02:01:56.058767Z","caller":"etcdserver/server.go:2652","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T02:11:56.084165Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":721}
	{"level":"info","ts":"2024-07-17T02:11:56.094177Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":721,"took":"8.992098ms","hash":1734711849,"current-db-size-bytes":2371584,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2371584,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-07-17T02:11:56.094268Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1734711849,"revision":721,"compact-revision":-1}
	
	
	==> kernel <==
	 02:15:42 up 20 min,  0 users,  load average: 0.16, 0.17, 0.15
	Linux no-preload-391501 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4bc815ffb334b863e88dba396a6e7448f3629bcb9fcfaa8f4be8928dc60d1ac0] <==
	E0717 02:11:58.613615       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0717 02:11:58.613685       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0717 02:11:58.614929       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 02:11:58.615003       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:12:58.615419       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 02:12:58.615467       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0717 02:12:58.615421       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 02:12:58.615590       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0717 02:12:58.616872       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 02:12:58.616936       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 02:14:58.617785       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 02:14:58.618172       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0717 02:14:58.617790       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 02:14:58.618282       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0717 02:14:58.620094       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 02:14:58.620099       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [d59e9bc3378bf74596204bfe5c7bd232684b64be74f90b5dbae477205a2b4dea] <==
	W0717 02:01:48.807834       1 logging.go:55] [core] [Channel #39 SubChannel #40]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:48.816361       1 logging.go:55] [core] [Channel #63 SubChannel #64]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:48.852056       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:48.899658       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:48.904981       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:48.927091       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:48.953825       1 logging.go:55] [core] [Channel #60 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:49.083948       1 logging.go:55] [core] [Channel #57 SubChannel #58]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:49.101102       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:49.147039       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:49.150378       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:49.159982       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:49.393949       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:49.404492       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:49.446090       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:49.537252       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:49.584864       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:49.621137       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:50.047346       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:50.098914       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:50.112837       1 logging.go:55] [core] [Channel #24 SubChannel #25]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:50.162222       1 logging.go:55] [core] [Channel #36 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:50.228851       1 logging.go:55] [core] [Channel #21 SubChannel #22]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:50.334419       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 02:01:50.592200       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [618d36b0a982d3b72b5fae6920d7ecb87cf6f321d739d7ca78c42dd4a4807c8c] <==
	E0717 02:10:35.502683       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 02:10:35.565192       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:11:05.512165       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 02:11:05.573253       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:11:35.519848       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 02:11:35.585525       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:12:05.526878       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 02:12:05.593795       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 02:12:24.094663       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-391501"
	E0717 02:12:35.533795       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 02:12:35.605536       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:13:05.543926       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 02:13:05.620353       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 02:13:06.392799       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="165.521µs"
	I0717 02:13:18.393489       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="203.27µs"
	E0717 02:13:35.551458       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 02:13:35.628965       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:14:05.558190       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 02:14:05.637625       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:14:35.564510       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 02:14:35.646349       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:15:05.573526       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 02:15:05.656765       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 02:15:35.580971       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 02:15:35.666318       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [5dc3b9c490ff36d5b586cf0c5325e00bab05e22fb8939f2a8e55014fe5d917af] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0717 02:02:06.508259       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0717 02:02:06.534276       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.61.174"]
	E0717 02:02:06.534379       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0717 02:02:06.626308       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0717 02:02:06.626382       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 02:02:06.626427       1 server_linux.go:170] "Using iptables Proxier"
	I0717 02:02:06.631457       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0717 02:02:06.631960       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0717 02:02:06.631999       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 02:02:06.639252       1 config.go:197] "Starting service config controller"
	I0717 02:02:06.639301       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 02:02:06.639344       1 config.go:104] "Starting endpoint slice config controller"
	I0717 02:02:06.639353       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 02:02:06.646610       1 config.go:326] "Starting node config controller"
	I0717 02:02:06.648378       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 02:02:06.739657       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 02:02:06.739699       1 shared_informer.go:320] Caches are synced for service config
	I0717 02:02:06.758641       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d24516073158b7e84c967e58397bde021ef567c50d65b73a8726840a54140aa7] <==
	E0717 02:01:57.637279       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0717 02:01:57.632394       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0717 02:01:57.639717       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 02:01:57.639815       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0717 02:01:58.463224       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 02:01:58.463274       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0717 02:01:58.471101       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 02:01:58.471299       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0717 02:01:58.475216       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 02:01:58.475392       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0717 02:01:58.484419       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 02:01:58.484501       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0717 02:01:58.607244       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 02:01:58.607376       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0717 02:01:58.667304       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 02:01:58.667355       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0717 02:01:58.757455       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 02:01:58.757508       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0717 02:01:58.803860       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 02:01:58.804064       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0717 02:01:58.845739       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 02:01:58.845796       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0717 02:01:58.877090       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 02:01:58.877144       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0717 02:02:01.114723       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 02:13:00 no-preload-391501 kubelet[3397]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:13:00 no-preload-391501 kubelet[3397]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:13:00 no-preload-391501 kubelet[3397]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:13:06 no-preload-391501 kubelet[3397]: E0717 02:13:06.374525    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-tnrht" podUID="af70d47e-8e45-4e5d-bceb-e01a6c1851ff"
	Jul 17 02:13:18 no-preload-391501 kubelet[3397]: E0717 02:13:18.372350    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-tnrht" podUID="af70d47e-8e45-4e5d-bceb-e01a6c1851ff"
	Jul 17 02:13:32 no-preload-391501 kubelet[3397]: E0717 02:13:32.372522    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-tnrht" podUID="af70d47e-8e45-4e5d-bceb-e01a6c1851ff"
	Jul 17 02:13:44 no-preload-391501 kubelet[3397]: E0717 02:13:44.372211    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-tnrht" podUID="af70d47e-8e45-4e5d-bceb-e01a6c1851ff"
	Jul 17 02:13:57 no-preload-391501 kubelet[3397]: E0717 02:13:57.371912    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-tnrht" podUID="af70d47e-8e45-4e5d-bceb-e01a6c1851ff"
	Jul 17 02:14:00 no-preload-391501 kubelet[3397]: E0717 02:14:00.400731    3397 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:14:00 no-preload-391501 kubelet[3397]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:14:00 no-preload-391501 kubelet[3397]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:14:00 no-preload-391501 kubelet[3397]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:14:00 no-preload-391501 kubelet[3397]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:14:12 no-preload-391501 kubelet[3397]: E0717 02:14:12.373373    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-tnrht" podUID="af70d47e-8e45-4e5d-bceb-e01a6c1851ff"
	Jul 17 02:14:23 no-preload-391501 kubelet[3397]: E0717 02:14:23.371364    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-tnrht" podUID="af70d47e-8e45-4e5d-bceb-e01a6c1851ff"
	Jul 17 02:14:34 no-preload-391501 kubelet[3397]: E0717 02:14:34.371914    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-tnrht" podUID="af70d47e-8e45-4e5d-bceb-e01a6c1851ff"
	Jul 17 02:14:49 no-preload-391501 kubelet[3397]: E0717 02:14:49.371444    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-tnrht" podUID="af70d47e-8e45-4e5d-bceb-e01a6c1851ff"
	Jul 17 02:15:00 no-preload-391501 kubelet[3397]: E0717 02:15:00.401158    3397 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 02:15:00 no-preload-391501 kubelet[3397]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 02:15:00 no-preload-391501 kubelet[3397]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 02:15:00 no-preload-391501 kubelet[3397]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 02:15:00 no-preload-391501 kubelet[3397]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 02:15:04 no-preload-391501 kubelet[3397]: E0717 02:15:04.372125    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-tnrht" podUID="af70d47e-8e45-4e5d-bceb-e01a6c1851ff"
	Jul 17 02:15:17 no-preload-391501 kubelet[3397]: E0717 02:15:17.372287    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-tnrht" podUID="af70d47e-8e45-4e5d-bceb-e01a6c1851ff"
	Jul 17 02:15:29 no-preload-391501 kubelet[3397]: E0717 02:15:29.371462    3397 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-tnrht" podUID="af70d47e-8e45-4e5d-bceb-e01a6c1851ff"
	
	
	==> storage-provisioner [0f18a686c2ded98388af30bee85e8a5f3b6e2446fa37496a4ceea8949072836d] <==
	I0717 02:02:07.928946       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 02:02:07.938070       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 02:02:07.938123       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 02:02:07.958702       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 02:02:07.958867       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-391501_824b21b9-6595-4a43-8430-09b988a3df19!
	I0717 02:02:07.969325       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c12a90b4-fb97-4132-86c3-46a7bab25a56", APIVersion:"v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-391501_824b21b9-6595-4a43-8430-09b988a3df19 became leader
	I0717 02:02:08.059354       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-391501_824b21b9-6595-4a43-8430-09b988a3df19!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-391501 -n no-preload-391501
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-391501 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-tnrht
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-391501 describe pod metrics-server-78fcd8795b-tnrht
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-391501 describe pod metrics-server-78fcd8795b-tnrht: exit status 1 (65.285481ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-tnrht" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-391501 describe pod metrics-server-78fcd8795b-tnrht: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (263.59s)
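
The post-mortem steps above can be rerun by hand against the same profile; a minimal sketch that reuses the exact commands logged by helpers_test.go. The only addition is an explicit -n kube-system on the describe call (an assumption added here): metrics-server-78fcd8795b-tnrht is a kube-system pod per the node description and kubelet logs, and the namespace-less describe above is consistent with the NotFound error, since kubectl defaults to the "default" namespace.

	# Same commands as helpers_test.go:254/261/277, runnable from the test workspace:
	out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-391501 -n no-preload-391501
	kubectl --context no-preload-391501 get po -A -o=jsonpath='{.items[*].metadata.name}' --field-selector=status.phase!=Running
	# -n kube-system is assumed; the test ran describe without it and got NotFound
	kubectl --context no-preload-391501 describe pod metrics-server-78fcd8795b-tnrht -n kube-system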

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (141.87s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
E0717 02:14:39.021814   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/calico-894370/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
E0717 02:15:03.264641   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/custom-flannel-894370/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
E0717 02:15:14.902042   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/enable-default-cni-894370/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
E0717 02:15:17.179461   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.44:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.44:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-901761 -n old-k8s-version-901761
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-901761 -n old-k8s-version-901761: exit status 2 (235.593523ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-901761" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-901761 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-901761 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.813µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-901761 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
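Taken together, the connection-refused polls and the "Stopped" apiserver status above mean the dashboard check at start_stop_delete_test.go:287 can never succeed: the label-selector query has no apiserver to answer it, and the image assertion at :297 then has no deployment info to inspect. A minimal sketch of the equivalent manual checks, assuming the apiserver were reachable (the context name, namespace, label selector, and deployment name are all taken from this log):

	# the same poll the test performs, by label selector
	kubectl --context old-k8s-version-901761 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	# the image check behind start_stop_delete_test.go:297, against the scraper deployment
	kubectl --context old-k8s-version-901761 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'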
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-901761 -n old-k8s-version-901761
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-901761 -n old-k8s-version-901761: exit status 2 (227.74122ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-901761 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-901761 logs -n 25: (1.552671224s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-894370 sudo cat                              | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo                                  | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo                                  | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo                                  | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo find                             | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-894370 sudo crio                             | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-894370                                       | bridge-894370                | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	| delete  | -p                                                     | disable-driver-mounts-255698 | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | disable-driver-mounts-255698                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:48 UTC |
	|         | default-k8s-diff-port-738184                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-940222            | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC | 17 Jul 24 01:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-940222                                  | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:47 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-738184  | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC | 17 Jul 24 01:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC |                     |
	|         | default-k8s-diff-port-738184                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-391501             | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC | 17 Jul 24 01:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-391501                                   | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:48 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-940222                 | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-901761        | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-940222                                  | embed-certs-940222           | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC | 17 Jul 24 02:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-738184       | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:50 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-391501                  | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-738184 | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:59 UTC |
	|         | default-k8s-diff-port-738184                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| start   | -p no-preload-391501 --memory=2200                     | no-preload-391501            | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 02:02 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-901761                              | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-901761             | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC | 17 Jul 24 01:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-901761                              | old-k8s-version-901761       | jenkins | v1.33.1 | 17 Jul 24 01:51 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 01:51:47
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 01:51:47.395737   71929 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:51:47.396000   71929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:51:47.396010   71929 out.go:304] Setting ErrFile to fd 2...
	I0717 01:51:47.396016   71929 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:51:47.396184   71929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 01:51:47.396684   71929 out.go:298] Setting JSON to false
	I0717 01:51:47.397549   71929 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5649,"bootTime":1721175458,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:51:47.397606   71929 start.go:139] virtualization: kvm guest
	I0717 01:51:47.399758   71929 out.go:177] * [old-k8s-version-901761] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:51:47.400960   71929 notify.go:220] Checking for updates...
	I0717 01:51:47.400966   71929 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 01:51:47.402266   71929 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:51:47.403356   71929 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:51:47.404532   71929 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 01:51:47.405524   71929 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:51:47.406572   71929 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:51:47.407935   71929 config.go:182] Loaded profile config "old-k8s-version-901761": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 01:51:47.408358   71929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:51:47.408427   71929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:51:47.422931   71929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46821
	I0717 01:51:47.423315   71929 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:51:47.423809   71929 main.go:141] libmachine: Using API Version  1
	I0717 01:51:47.423831   71929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:51:47.424123   71929 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:51:47.424259   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:51:47.426227   71929 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0717 01:51:47.427500   71929 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:51:47.427770   71929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:51:47.427801   71929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:51:47.442080   71929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36301
	I0717 01:51:47.442438   71929 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:51:47.442901   71929 main.go:141] libmachine: Using API Version  1
	I0717 01:51:47.442924   71929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:51:47.443208   71929 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:51:47.443382   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:51:47.476327   71929 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 01:51:47.477607   71929 start.go:297] selected driver: kvm2
	I0717 01:51:47.477620   71929 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:51:47.477762   71929 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:51:47.478432   71929 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:51:47.478541   71929 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19264-3908/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 01:51:47.493611   71929 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 01:51:47.493967   71929 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:51:47.494039   71929 cni.go:84] Creating CNI manager for ""
	I0717 01:51:47.494056   71929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:51:47.494147   71929 start.go:340] cluster config:
	{Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:51:47.494271   71929 iso.go:125] acquiring lock: {Name:mk1e382ea32906eee794c9b90164432d2562b456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 01:51:47.496056   71929 out.go:177] * Starting "old-k8s-version-901761" primary control-plane node in "old-k8s-version-901761" cluster
	I0717 01:51:45.178864   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:51:47.497229   71929 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 01:51:47.497266   71929 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 01:51:47.497279   71929 cache.go:56] Caching tarball of preloaded images
	I0717 01:51:47.497368   71929 preload.go:172] Found /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 01:51:47.497379   71929 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0717 01:51:47.497484   71929 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/config.json ...
	I0717 01:51:47.497671   71929 start.go:360] acquireMachinesLock for old-k8s-version-901761: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:51:51.258826   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:51:54.330879   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:00.410811   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:03.482811   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:09.562828   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:12.634828   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:18.714910   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:21.786892   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:27.866863   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:30.938805   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:37.022827   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:40.090853   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:46.170839   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:49.242854   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:55.322824   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:52:58.394792   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:04.474811   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:07.546855   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:13.626861   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:16.698832   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:22.778828   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:25.850864   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:31.930814   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:35.002842   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:41.082839   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:44.154796   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:50.234823   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:53.306914   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:53:59.386835   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:02.458751   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:08.538853   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:11.610833   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:17.690816   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:20.762793   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:26.842837   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:29.914866   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:35.994838   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:39.066806   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:45.146846   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:48.218841   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:54.298823   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:54:57.370838   71146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.225:22: connect: no route to host
	I0717 01:55:00.375050   71522 start.go:364] duration metric: took 3m54.700923144s to acquireMachinesLock for "default-k8s-diff-port-738184"
	I0717 01:55:00.375103   71522 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:55:00.375110   71522 fix.go:54] fixHost starting: 
	I0717 01:55:00.375500   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:00.375532   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:00.390583   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39651
	I0717 01:55:00.390957   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:00.391392   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:00.391412   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:00.391704   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:00.391927   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:00.392069   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:00.393467   71522 fix.go:112] recreateIfNeeded on default-k8s-diff-port-738184: state=Stopped err=<nil>
	I0717 01:55:00.393508   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	W0717 01:55:00.393658   71522 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:55:00.395826   71522 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-738184" ...
	I0717 01:55:00.397256   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Start
	I0717 01:55:00.397401   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Ensuring networks are active...
	I0717 01:55:00.398079   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Ensuring network default is active
	I0717 01:55:00.398390   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Ensuring network mk-default-k8s-diff-port-738184 is active
	I0717 01:55:00.398710   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Getting domain xml...
	I0717 01:55:00.399275   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Creating domain...
	I0717 01:55:00.372573   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:55:00.372621   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:55:00.372933   71146 buildroot.go:166] provisioning hostname "embed-certs-940222"
	I0717 01:55:00.372957   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:55:00.373131   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:55:00.374934   71146 machine.go:97] duration metric: took 4m37.428393808s to provisionDockerMachine
	I0717 01:55:00.374969   71146 fix.go:56] duration metric: took 4m37.449104762s for fixHost
	I0717 01:55:00.374974   71146 start.go:83] releasing machines lock for "embed-certs-940222", held for 4m37.449121677s
	W0717 01:55:00.374996   71146 start.go:714] error starting host: provision: host is not running
	W0717 01:55:00.375080   71146 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 01:55:00.375088   71146 start.go:729] Will try again in 5 seconds ...
	I0717 01:55:01.590292   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting to get IP...
	I0717 01:55:01.591187   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:01.591589   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:01.591657   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:01.591578   72583 retry.go:31] will retry after 266.165899ms: waiting for machine to come up
	I0717 01:55:01.859307   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:01.859724   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:01.859751   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:01.859695   72583 retry.go:31] will retry after 282.941451ms: waiting for machine to come up
	I0717 01:55:02.144389   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:02.144756   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:02.144787   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:02.144701   72583 retry.go:31] will retry after 327.203414ms: waiting for machine to come up
	I0717 01:55:02.473217   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:02.473681   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:02.473705   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:02.473606   72583 retry.go:31] will retry after 553.917043ms: waiting for machine to come up
	I0717 01:55:03.029379   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:03.029762   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:03.029783   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:03.029738   72583 retry.go:31] will retry after 617.312209ms: waiting for machine to come up
	I0717 01:55:03.648372   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:03.648701   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:03.648733   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:03.648670   72583 retry.go:31] will retry after 641.28503ms: waiting for machine to come up
	I0717 01:55:04.291493   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:04.291986   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:04.292019   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:04.291870   72583 retry.go:31] will retry after 1.133455116s: waiting for machine to come up
	I0717 01:55:05.426672   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:05.426943   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:05.426972   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:05.426892   72583 retry.go:31] will retry after 1.00384113s: waiting for machine to come up
	I0717 01:55:05.376907   71146 start.go:360] acquireMachinesLock for embed-certs-940222: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 01:55:06.432146   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:06.432502   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:06.432525   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:06.432477   72583 retry.go:31] will retry after 1.472142907s: waiting for machine to come up
	I0717 01:55:07.906974   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:07.907407   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:07.907437   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:07.907336   72583 retry.go:31] will retry after 1.775986179s: waiting for machine to come up
	I0717 01:55:09.685396   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:09.685792   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:09.685822   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:09.685756   72583 retry.go:31] will retry after 2.663700716s: waiting for machine to come up
	I0717 01:55:12.351616   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:12.351985   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:12.352017   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:12.351921   72583 retry.go:31] will retry after 2.409004894s: waiting for machine to come up
	I0717 01:55:14.763493   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:14.763859   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | unable to find current IP address of domain default-k8s-diff-port-738184 in network mk-default-k8s-diff-port-738184
	I0717 01:55:14.763876   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | I0717 01:55:14.763828   72583 retry.go:31] will retry after 3.049843419s: waiting for machine to come up
	I0717 01:55:19.031713   71603 start.go:364] duration metric: took 4m8.751453112s to acquireMachinesLock for "no-preload-391501"
	I0717 01:55:19.031779   71603 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:55:19.031787   71603 fix.go:54] fixHost starting: 
	I0717 01:55:19.032306   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:19.032352   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:19.049376   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41829
	I0717 01:55:19.049877   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:19.050387   71603 main.go:141] libmachine: Using API Version  1
	I0717 01:55:19.050409   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:19.050752   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:19.050935   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:19.051104   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 01:55:19.052805   71603 fix.go:112] recreateIfNeeded on no-preload-391501: state=Stopped err=<nil>
	I0717 01:55:19.052832   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	W0717 01:55:19.052989   71603 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:55:19.056667   71603 out.go:177] * Restarting existing kvm2 VM for "no-preload-391501" ...
	I0717 01:55:19.058078   71603 main.go:141] libmachine: (no-preload-391501) Calling .Start
	I0717 01:55:19.058314   71603 main.go:141] libmachine: (no-preload-391501) Ensuring networks are active...
	I0717 01:55:19.059126   71603 main.go:141] libmachine: (no-preload-391501) Ensuring network default is active
	I0717 01:55:19.059466   71603 main.go:141] libmachine: (no-preload-391501) Ensuring network mk-no-preload-391501 is active
	I0717 01:55:19.059958   71603 main.go:141] libmachine: (no-preload-391501) Getting domain xml...
	I0717 01:55:19.060746   71603 main.go:141] libmachine: (no-preload-391501) Creating domain...
	I0717 01:55:17.816307   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.816746   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Found IP for machine: 192.168.39.170
	I0717 01:55:17.816765   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Reserving static IP address...
	I0717 01:55:17.816776   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has current primary IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.817337   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Reserved static IP address: 192.168.39.170
	I0717 01:55:17.817366   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Waiting for SSH to be available...
	I0717 01:55:17.817389   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-738184", mac: "52:54:00:e6:fe:fe", ip: "192.168.39.170"} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:17.817420   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | skip adding static IP to network mk-default-k8s-diff-port-738184 - found existing host DHCP lease matching {name: "default-k8s-diff-port-738184", mac: "52:54:00:e6:fe:fe", ip: "192.168.39.170"}
	I0717 01:55:17.817443   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Getting to WaitForSSH function...
	I0717 01:55:17.819693   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.820022   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:17.820056   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.820171   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Using SSH client type: external
	I0717 01:55:17.820203   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa (-rw-------)
	I0717 01:55:17.820245   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.170 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:55:17.820259   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | About to run SSH command:
	I0717 01:55:17.820280   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | exit 0
	I0717 01:55:17.942987   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | SSH cmd err, output: <nil>: 
	I0717 01:55:17.943370   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetConfigRaw
	I0717 01:55:17.943945   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetIP
	I0717 01:55:17.946638   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.946993   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:17.947021   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.947268   71522 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/config.json ...
	I0717 01:55:17.947479   71522 machine.go:94] provisionDockerMachine start ...
	I0717 01:55:17.947497   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:17.947732   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:17.950032   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.950367   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:17.950397   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:17.950489   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:17.950664   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:17.950827   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:17.950959   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:17.951108   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:17.951300   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:17.951311   71522 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:55:18.051147   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:55:18.051180   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetMachineName
	I0717 01:55:18.051421   71522 buildroot.go:166] provisioning hostname "default-k8s-diff-port-738184"
	I0717 01:55:18.051456   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetMachineName
	I0717 01:55:18.051655   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.054480   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.055024   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.055053   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.055262   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.055473   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.055643   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.055783   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.055928   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:18.056077   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:18.056089   71522 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-738184 && echo "default-k8s-diff-port-738184" | sudo tee /etc/hostname
	I0717 01:55:18.170268   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-738184
	
	I0717 01:55:18.170299   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.173037   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.173337   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.173369   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.173485   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.173673   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.173851   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.173957   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.174110   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:18.174322   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:18.174349   71522 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-738184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-738184/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-738184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:55:18.279963   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:55:18.279997   71522 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:55:18.280030   71522 buildroot.go:174] setting up certificates
	I0717 01:55:18.280042   71522 provision.go:84] configureAuth start
	I0717 01:55:18.280054   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetMachineName
	I0717 01:55:18.280393   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetIP
	I0717 01:55:18.282887   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.283201   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.283231   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.283370   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.285399   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.285662   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.285691   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.285795   71522 provision.go:143] copyHostCerts
	I0717 01:55:18.285865   71522 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:55:18.285884   71522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:55:18.285971   71522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:55:18.286084   71522 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:55:18.286095   71522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:55:18.286129   71522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:55:18.286205   71522 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:55:18.286214   71522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:55:18.286247   71522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:55:18.286313   71522 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-738184 san=[127.0.0.1 192.168.39.170 default-k8s-diff-port-738184 localhost minikube]
	I0717 01:55:18.386547   71522 provision.go:177] copyRemoteCerts
	I0717 01:55:18.386627   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:55:18.386658   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.388930   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.389292   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.389322   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.389465   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.389662   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.389804   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.389944   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:18.469031   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:55:18.493607   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0717 01:55:18.517024   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 01:55:18.539757   71522 provision.go:87] duration metric: took 259.702663ms to configureAuth
	I0717 01:55:18.539793   71522 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:55:18.540064   71522 config.go:182] Loaded profile config "default-k8s-diff-port-738184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:55:18.540178   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.542831   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.543174   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.543196   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.543388   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.543599   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.543843   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.544011   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.544172   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:18.544343   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:18.544362   71522 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:55:18.804633   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:55:18.804690   71522 machine.go:97] duration metric: took 857.197634ms to provisionDockerMachine
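Every `ssh_runner.go:195] Run:` entry in this log is a command executed over the SSH client that sshutil.go opens against the VM; a minimal sketch of that pattern with golang.org/x/crypto/ssh, reusing the key path, user and address quoted in the "new ssh client" lines above, and assuming host-key verification is skipped for a throwaway test VM:

// sshexec.go - illustrative sketch of the ssh_runner pattern.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a disposable test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.170:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Run one of the commands the log runs remotely.
	out, err := session.CombinedOutput("cat /etc/os-release")
	fmt.Printf("%s err=%v\n", out, err)
}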
	I0717 01:55:18.804706   71522 start.go:293] postStartSetup for "default-k8s-diff-port-738184" (driver="kvm2")
	I0717 01:55:18.804720   71522 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:55:18.804743   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:18.805049   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:55:18.805073   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.807835   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.808127   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.808147   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.808319   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.808497   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.808670   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.808823   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:18.889297   71522 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:55:18.893587   71522 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:55:18.893615   71522 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:55:18.893694   71522 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:55:18.893779   71522 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:55:18.893886   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:55:18.903319   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:18.927700   71522 start.go:296] duration metric: took 122.979492ms for postStartSetup
	I0717 01:55:18.927748   71522 fix.go:56] duration metric: took 18.552636525s for fixHost
	I0717 01:55:18.927775   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:18.930483   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.930768   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:18.930791   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:18.931004   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:18.931192   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.931361   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:18.931511   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:18.931677   71522 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:18.931873   71522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0717 01:55:18.931887   71522 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:55:19.031515   71522 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181319.004563133
	
	I0717 01:55:19.031541   71522 fix.go:216] guest clock: 1721181319.004563133
	I0717 01:55:19.031552   71522 fix.go:229] Guest: 2024-07-17 01:55:19.004563133 +0000 UTC Remote: 2024-07-17 01:55:18.927754613 +0000 UTC m=+253.390645105 (delta=76.80852ms)
	I0717 01:55:19.031611   71522 fix.go:200] guest clock delta is within tolerance: 76.80852ms
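fix.go compares the guest's `date +%s.%N` output against the host clock and accepts the drift when it is small; a sketch of that comparison using the two timestamps from the log (which reproduce the 76.80852ms delta), assuming nine fractional digits from %N and a 2-second tolerance, since the real threshold is not shown here:

// clockdelta.go - illustrative sketch of the guest-clock tolerance check.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts "seconds.nanoseconds" (nine fractional digits) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1721181319.004563133") // guest clock from the log
	if err != nil {
		panic(err)
	}
	host := time.Unix(1721181318, 927754613) // host timestamp from the log
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance, not minikube's exact constant
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
}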
	I0717 01:55:19.031623   71522 start.go:83] releasing machines lock for "default-k8s-diff-port-738184", held for 18.656540342s
	I0717 01:55:19.031661   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:19.031940   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetIP
	I0717 01:55:19.034537   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.034881   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:19.034911   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.035036   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:19.035557   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:19.035750   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:19.035822   71522 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:55:19.035875   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:19.036000   71522 ssh_runner.go:195] Run: cat /version.json
	I0717 01:55:19.036027   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:19.038509   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.038860   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:19.038892   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.038935   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.038982   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:19.039156   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:19.039328   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:19.039361   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:19.039389   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:19.039488   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:19.039537   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:19.039702   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:19.039835   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:19.040047   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:19.140208   71522 ssh_runner.go:195] Run: systemctl --version
	I0717 01:55:19.146454   71522 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:55:19.293584   71522 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:55:19.300750   71522 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:55:19.300817   71522 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:55:19.321596   71522 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:55:19.321621   71522 start.go:495] detecting cgroup driver to use...
	I0717 01:55:19.321684   71522 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:55:19.337664   71522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:55:19.351856   71522 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:55:19.351922   71522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:55:19.366355   71522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:55:19.380735   71522 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:55:19.495916   71522 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:55:19.646426   71522 docker.go:233] disabling docker service ...
	I0717 01:55:19.646501   71522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:55:19.665764   71522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:55:19.683893   71522 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:55:19.814704   71522 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:55:19.958389   71522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:55:19.973223   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:55:19.992869   71522 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 01:55:19.992937   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.003696   71522 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:55:20.003762   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.014415   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.025303   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.036715   71522 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:55:20.047872   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.059666   71522 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.079479   71522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:20.092424   71522 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:55:20.103225   71522 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:55:20.103284   71522 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:55:20.120620   71522 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:55:20.136439   71522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:20.284796   71522 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:55:20.427605   71522 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:55:20.427698   71522 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:55:20.433477   71522 start.go:563] Will wait 60s for crictl version
	I0717 01:55:20.433537   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:55:20.437399   71522 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:55:20.479192   71522 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:55:20.479289   71522 ssh_runner.go:195] Run: crio --version
	I0717 01:55:20.507655   71522 ssh_runner.go:195] Run: crio --version
	I0717 01:55:20.537084   71522 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 01:55:20.538435   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetIP
	I0717 01:55:20.541200   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:20.541493   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:20.541531   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:20.541772   71522 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 01:55:20.546261   71522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:55:20.559802   71522 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-738184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.2 ClusterName:default-k8s-diff-port-738184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:55:20.559946   71522 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:55:20.560001   71522 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:55:20.381503   71603 main.go:141] libmachine: (no-preload-391501) Waiting to get IP...
	I0717 01:55:20.382632   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:20.383105   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:20.383210   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:20.383077   72724 retry.go:31] will retry after 193.198351ms: waiting for machine to come up
	I0717 01:55:20.577611   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:20.578117   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:20.578145   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:20.578067   72724 retry.go:31] will retry after 254.406992ms: waiting for machine to come up
	I0717 01:55:20.834633   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:20.835088   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:20.835116   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:20.835057   72724 retry.go:31] will retry after 459.446617ms: waiting for machine to come up
	I0717 01:55:21.295939   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:21.296384   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:21.296409   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:21.296343   72724 retry.go:31] will retry after 515.654185ms: waiting for machine to come up
	I0717 01:55:21.813613   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:21.814140   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:21.814178   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:21.814104   72724 retry.go:31] will retry after 652.322198ms: waiting for machine to come up
	I0717 01:55:22.468223   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:22.468858   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:22.468897   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:22.468774   72724 retry.go:31] will retry after 767.220835ms: waiting for machine to come up
	I0717 01:55:23.237341   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:23.237685   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:23.237716   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:23.237633   72724 retry.go:31] will retry after 1.083873631s: waiting for machine to come up
	I0717 01:55:24.323463   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:24.323983   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:24.324011   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:24.323934   72724 retry.go:31] will retry after 1.255667305s: waiting for machine to come up
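The `retry.go:31] will retry after …` lines above are a poll loop whose delay grows with jitter while libvirt has no DHCP lease for the domain yet; a sketch of that shape, where the growth factor, the jitter, the lookupIP stub and its placeholder address are all assumptions rather than minikube's actual retry.go:

// retrysketch.go - illustrative grow-and-jitter retry loop.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// lookupIP is a stand-in for querying the libvirt DHCP leases for the domain.
func lookupIP(attempt int) (string, error) {
	if attempt < 8 { // pretend the lease shows up on the 9th poll
		return "", errNoIP
	}
	return "192.0.2.10", nil // placeholder TEST-NET address, not the real lease
}

func main() {
	wait := 200 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Grow the delay and add jitter so parallel waiters do not poll in lockstep.
		sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		wait = wait * 3 / 2
	}
}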
	I0717 01:55:20.597329   71522 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 01:55:20.597409   71522 ssh_runner.go:195] Run: which lz4
	I0717 01:55:20.602100   71522 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 01:55:20.606863   71522 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:55:20.606900   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 01:55:22.053002   71522 crio.go:462] duration metric: took 1.450939378s to copy over tarball
	I0717 01:55:22.053071   71522 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:55:24.356349   71522 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.303245698s)
	I0717 01:55:24.356378   71522 crio.go:469] duration metric: took 2.303353381s to extract the tarball
	I0717 01:55:24.356385   71522 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 01:55:24.402866   71522 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:55:24.446681   71522 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:55:24.446709   71522 cache_images.go:84] Images are preloaded, skipping loading
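The preload step above copies a ~395 MB .tar.lz4 to the VM and unpacks it with `tar -I lz4` so that `crictl images` immediately reports everything as present; a local sketch of reading such an archive with archive/tar plus github.com/pierrec/lz4/v4, using a hypothetical file name:

// preloadextract.go - illustrative reader for a preloaded-images tarball.
package main

import (
	"archive/tar"
	"fmt"
	"io"
	"os"

	"github.com/pierrec/lz4/v4"
)

func main() {
	f, err := os.Open("preloaded.tar.lz4") // hypothetical local copy
	if err != nil {
		panic(err)
	}
	defer f.Close()

	tr := tar.NewReader(lz4.NewReader(f))
	var entries int
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		entries++
		_ = hdr // a real extractor would write hdr.Name under /var, preserving modes and xattrs
	}
	fmt.Println("entries in preload tarball:", entries)
}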
	I0717 01:55:24.446720   71522 kubeadm.go:934] updating node { 192.168.39.170 8444 v1.30.2 crio true true} ...
	I0717 01:55:24.446844   71522 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-738184 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-738184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:55:24.446931   71522 ssh_runner.go:195] Run: crio config
	I0717 01:55:24.499717   71522 cni.go:84] Creating CNI manager for ""
	I0717 01:55:24.499744   71522 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:55:24.499759   71522 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:55:24.499787   71522 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.170 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-738184 NodeName:default-k8s-diff-port-738184 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:55:24.499965   71522 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.170
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-738184"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
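The generated kubeadm.yaml above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); a small sketch that walks such a stream with gopkg.in/yaml.v3 and prints each document's kind, using a hypothetical local path:

// splitkubeadm.go - illustrative multi-document YAML walk.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the generated config
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}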
	
	I0717 01:55:24.500039   71522 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 01:55:24.510488   71522 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:55:24.510568   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:55:24.520830   71522 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0717 01:55:24.538018   71522 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:55:24.556287   71522 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0717 01:55:24.574973   71522 ssh_runner.go:195] Run: grep 192.168.39.170	control-plane.minikube.internal$ /etc/hosts
	I0717 01:55:24.579058   71522 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.170	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
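The bash one-liner above rewrites /etc/hosts by dropping any stale control-plane.minikube.internal line and appending the current mapping; the same edit sketched in Go against a hypothetical local copy of the file rather than the real /etc/hosts:

// hostsentry.go - illustrative hosts-file rewrite.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const entry = "192.168.39.170\t" + host

	data, err := os.ReadFile("hosts") // hypothetical local copy
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " \t"), host) {
			continue // stale mapping, drop it (the log uses grep -v for this)
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	out := strings.Join(kept, "\n") + "\n"
	if err := os.WriteFile("hosts.new", []byte(out), 0644); err != nil {
		panic(err)
	}
	fmt.Print(out)
}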
	I0717 01:55:24.591752   71522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:24.712285   71522 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:55:24.729387   71522 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184 for IP: 192.168.39.170
	I0717 01:55:24.729411   71522 certs.go:194] generating shared ca certs ...
	I0717 01:55:24.729432   71522 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:55:24.729596   71522 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:55:24.729650   71522 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:55:24.729662   71522 certs.go:256] generating profile certs ...
	I0717 01:55:24.729776   71522 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/client.key
	I0717 01:55:24.729847   71522 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/apiserver.key.44902a6f
	I0717 01:55:24.729907   71522 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/proxy-client.key
	I0717 01:55:24.730044   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:55:24.730086   71522 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:55:24.730099   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:55:24.730135   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:55:24.730183   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:55:24.730222   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:55:24.730277   71522 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:24.731142   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:55:24.762240   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:55:24.788746   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:55:24.825379   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:55:24.853821   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 01:55:24.887105   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:55:24.910834   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:55:24.934566   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/default-k8s-diff-port-738184/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 01:55:24.959709   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:55:24.983722   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:55:25.007312   71522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:55:25.031576   71522 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:55:25.049348   71522 ssh_runner.go:195] Run: openssl version
	I0717 01:55:25.055410   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:55:25.066104   71522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:55:25.070616   71522 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:55:25.070675   71522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:55:25.076604   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:55:25.087284   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:55:25.098383   71522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:55:25.103262   71522 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:55:25.103331   71522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:55:25.109170   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:55:25.119940   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:55:25.130829   71522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:25.135659   71522 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:25.135734   71522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:25.141583   71522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:55:25.152770   71522 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:55:25.157395   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:55:25.163543   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:55:25.169580   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:55:25.175754   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:55:25.181771   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:55:25.187935   71522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
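The `openssl x509 -checkend 86400` probes above ask whether each control-plane certificate expires within the next 24 hours; the equivalent check done with crypto/x509, reusing one of the certificate paths from the log:

// checkend.go - illustrative "-checkend 86400" equivalent.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path
// will expire within the given window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	if soon {
		fmt.Println("certificate will expire within 24h - regenerate it")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}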
	I0717 01:55:25.193614   71522 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-738184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.2 ClusterName:default-k8s-diff-port-738184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:55:25.193727   71522 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:55:25.193770   71522 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:55:25.230871   71522 cri.go:89] found id: ""
	I0717 01:55:25.230954   71522 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:55:25.241336   71522 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:55:25.241357   71522 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:55:25.241410   71522 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:55:25.251637   71522 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:55:25.253030   71522 kubeconfig.go:125] found "default-k8s-diff-port-738184" server: "https://192.168.39.170:8444"
	I0717 01:55:25.255926   71522 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:55:25.265878   71522 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.170
	I0717 01:55:25.265915   71522 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:55:25.265927   71522 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:55:25.265982   71522 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:55:25.305929   71522 cri.go:89] found id: ""
	I0717 01:55:25.306015   71522 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:55:25.322581   71522 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:55:25.332334   71522 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:55:25.332356   71522 kubeadm.go:157] found existing configuration files:
	
	I0717 01:55:25.332407   71522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 01:55:25.342132   71522 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:55:25.342193   71522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:55:25.351628   71522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 01:55:25.360765   71522 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:55:25.360833   71522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:55:25.370167   71522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 01:55:25.379057   71522 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:55:25.379124   71522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:55:25.389470   71522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 01:55:25.399142   71522 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:55:25.399210   71522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:55:25.409452   71522 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:55:25.421509   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:25.545698   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:25.580838   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:25.581295   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:25.581322   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:25.581247   72724 retry.go:31] will retry after 1.354947672s: waiting for machine to come up
	I0717 01:55:26.937260   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:26.937746   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:26.937774   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:26.937696   72724 retry.go:31] will retry after 1.818074273s: waiting for machine to come up
	I0717 01:55:28.758015   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:28.758489   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:28.758517   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:28.758449   72724 retry.go:31] will retry after 2.782465023s: waiting for machine to come up
	I0717 01:55:26.599380   71522 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.053644988s)
	I0717 01:55:26.599416   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:26.807765   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:26.878767   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:26.965940   71522 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:55:26.966023   71522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:55:27.466587   71522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:55:27.966138   71522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:55:27.983649   71522 api_server.go:72] duration metric: took 1.017709312s to wait for apiserver process to appear ...
	I0717 01:55:27.983678   71522 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:55:27.983701   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:27.984214   71522 api_server.go:269] stopped: https://192.168.39.170:8444/healthz: Get "https://192.168.39.170:8444/healthz": dial tcp 192.168.39.170:8444: connect: connection refused
	I0717 01:55:28.483780   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:30.862416   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:55:30.862464   71522 api_server.go:103] status: https://192.168.39.170:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:55:30.862479   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:30.869667   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:55:30.869718   71522 api_server.go:103] status: https://192.168.39.170:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:55:30.983899   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:30.988670   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:55:30.988704   71522 api_server.go:103] status: https://192.168.39.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:55:31.484233   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:31.488939   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:55:31.488978   71522 api_server.go:103] status: https://192.168.39.170:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:55:31.984611   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:55:31.988738   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 200:
	ok
	I0717 01:55:31.996182   71522 api_server.go:141] control plane version: v1.30.2
	I0717 01:55:31.996207   71522 api_server.go:131] duration metric: took 4.012523131s to wait for apiserver health ...
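[editor's note] The api_server.go lines above amount to polling the apiserver's /healthz endpoint over HTTPS until it answers 200, tolerating "connection refused", 403 (anonymous user) and 500 (post-start hooks still failing) along the way. A minimal stand-alone sketch of that polling pattern, not minikube's actual code; the URL and rough cadence are taken from the log, the timeout is an assumption:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Endpoint from the log; the apiserver serves a self-signed cert,
	// so certificate verification is skipped for the health probe.
	url := "https://192.168.39.170:8444/healthz"
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(4 * time.Minute) // assumed overall timeout
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// e.g. "connection refused" while the apiserver container restarts.
			fmt.Println("stopped:", err)
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz returned 200: ok")
				return
			}
			// 403 and 500 responses are simply retried, as in the log.
			fmt.Println("healthz returned", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for a healthy apiserver")
}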
	I0717 01:55:31.996216   71522 cni.go:84] Creating CNI manager for ""
	I0717 01:55:31.996222   71522 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:55:31.998122   71522 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:55:31.999536   71522 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:55:32.010501   71522 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
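[editor's note] The 496-byte /etc/cni/net.d/1-k8s.conflist copied above configures the bridge CNI that cni.go recommends for the kvm2 + crio combination. Its exact contents are not shown in the log; the following is a plausible minimal bridge conflist written the way the scp step installs it. The JSON body, the pod CIDR and the file modes are assumptions, not taken from the run:

package main

import "os"

// Illustrative bridge CNI configuration; the subnet below is a guess.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		panic(err)
	}
}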
	I0717 01:55:32.030227   71522 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:55:32.039923   71522 system_pods.go:59] 9 kube-system pods found
	I0717 01:55:32.039954   71522 system_pods.go:61] "coredns-7db6d8ff4d-9w26c" [530f4d52-5fdc-47c4-8919-44430bf71e05] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:55:32.039988   71522 system_pods.go:61] "coredns-7db6d8ff4d-js7sn" [fe3951c5-d98d-4221-b71c-fc4f548b31d8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:55:32.039998   71522 system_pods.go:61] "etcd-default-k8s-diff-port-738184" [a08737dd-a140-4c63-bf0f-9d3527d49de0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 01:55:32.040003   71522 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-738184" [b24a4ca2-48f9-4603-b6e4-e3fb1ca58e40] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 01:55:32.040013   71522 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-738184" [dd62e27b-a3f1-4da6-b968-6be40743a8fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 01:55:32.040020   71522 system_pods.go:61] "kube-proxy-c4n94" [97eee4e8-4f36-412f-9064-57515ab6e932] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 01:55:32.040033   71522 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-738184" [eb87eec9-7533-4704-bd5a-7075d9d8c2f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 01:55:32.040041   71522 system_pods.go:61] "metrics-server-569cc877fc-gcjkt" [1859140e-a901-43c2-8c04-b4f8eb63e774] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:55:32.040046   71522 system_pods.go:61] "storage-provisioner" [b36904ec-ef3f-4aee-9276-fe1285e10876] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 01:55:32.040053   71522 system_pods.go:74] duration metric: took 9.802793ms to wait for pod list to return data ...
	I0717 01:55:32.040060   71522 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:55:32.043233   71522 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:55:32.043259   71522 node_conditions.go:123] node cpu capacity is 2
	I0717 01:55:32.043270   71522 node_conditions.go:105] duration metric: took 3.202451ms to run NodePressure ...
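[editor's note] The NodePressure verification above reads node capacity (ephemeral storage, CPU) from the API before proceeding. A hedged client-go sketch of the same lookup; the kubeconfig path is a placeholder and this is not minikube's node_conditions.go itself:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		// Mirrors the "node storage ephemeral capacity" / "node cpu capacity" lines.
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}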
	I0717 01:55:32.043285   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:32.350948   71522 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 01:55:32.356119   71522 kubeadm.go:739] kubelet initialised
	I0717 01:55:32.356143   71522 kubeadm.go:740] duration metric: took 5.164025ms waiting for restarted kubelet to initialise ...
	I0717 01:55:32.356153   71522 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:55:32.361501   71522 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.366747   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.366770   71522 pod_ready.go:81] duration metric: took 5.246954ms for pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.366778   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.366785   71522 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.371049   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.371066   71522 pod_ready.go:81] duration metric: took 4.275157ms for pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.371073   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.371078   71522 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.375338   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.375361   71522 pod_ready.go:81] duration metric: took 4.27092ms for pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.375369   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.375379   71522 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.434545   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.434583   71522 pod_ready.go:81] duration metric: took 59.196717ms for pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.434593   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.434601   71522 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:32.836139   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.836178   71522 pod_ready.go:81] duration metric: took 401.568097ms for pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:32.836194   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:32.836212   71522 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c4n94" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:33.234032   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "kube-proxy-c4n94" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:33.234060   71522 pod_ready.go:81] duration metric: took 397.83937ms for pod "kube-proxy-c4n94" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:33.234071   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "kube-proxy-c4n94" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:33.234076   71522 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:33.633953   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:33.633981   71522 pod_ready.go:81] duration metric: took 399.893316ms for pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:33.633992   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:33.633998   71522 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:34.034511   71522 pod_ready.go:97] node "default-k8s-diff-port-738184" hosting pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:34.034560   71522 pod_ready.go:81] duration metric: took 400.544281ms for pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace to be "Ready" ...
	E0717 01:55:34.034574   71522 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-738184" hosting pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:34.034583   71522 pod_ready.go:38] duration metric: took 1.678420144s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
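[editor's note] Every pod_ready wait above is skipped with the same reason: the hosting node default-k8s-diff-port-738184 does not yet report Ready, so waiting on individual pods would be pointless. A hedged client-go sketch of that gate, checking the node's Ready condition before polling a pod; pod and namespace names come from the log, the kubeconfig path is a placeholder:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-9w26c", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if !nodeReady(node) {
		// Mirrors the "(skipping!)" messages in the log.
		fmt.Printf("node %q is not Ready; skipping pod wait\n", node.Name)
		return
	}
	fmt.Println("node is Ready; waiting on the pod's Ready condition now makes sense")
}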
	I0717 01:55:34.034599   71522 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 01:55:34.049235   71522 ops.go:34] apiserver oom_adj: -16
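[editor's note] ops.go reads the apiserver's oom_adj (-16 here); a lower value makes the kernel's OOM killer less likely to select the process. A small stand-alone sketch of the same read, simplified relative to the bash/pgrep one-liner in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Rough equivalent of the log's `pgrep kube-apiserver`; requires pgrep on PATH.
	out, err := exec.Command("pgrep", "-o", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(out))

	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
}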
	I0717 01:55:34.049261   71522 kubeadm.go:597] duration metric: took 8.807897214s to restartPrimaryControlPlane
	I0717 01:55:34.049272   71522 kubeadm.go:394] duration metric: took 8.855664434s to StartCluster
	I0717 01:55:34.049292   71522 settings.go:142] acquiring lock: {Name:mk284e98550e98148329d8d8958a44beebf5635c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:55:34.049374   71522 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:55:34.050992   71522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:55:34.051239   71522 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.170 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:55:34.051307   71522 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 01:55:34.051409   71522 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-738184"
	I0717 01:55:34.051454   71522 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-738184"
	W0717 01:55:34.051465   71522 addons.go:243] addon storage-provisioner should already be in state true
	I0717 01:55:34.051497   71522 host.go:66] Checking if "default-k8s-diff-port-738184" exists ...
	I0717 01:55:34.051511   71522 config.go:182] Loaded profile config "default-k8s-diff-port-738184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:55:34.051498   71522 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-738184"
	I0717 01:55:34.051502   71522 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-738184"
	I0717 01:55:34.051564   71522 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-738184"
	I0717 01:55:34.051587   71522 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-738184"
	W0717 01:55:34.051612   71522 addons.go:243] addon metrics-server should already be in state true
	I0717 01:55:34.051686   71522 host.go:66] Checking if "default-k8s-diff-port-738184" exists ...
	I0717 01:55:34.051803   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.051845   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.052097   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.052151   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.052331   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.052383   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.054788   71522 out.go:177] * Verifying Kubernetes components...
	I0717 01:55:34.056293   71522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:34.067345   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39845
	I0717 01:55:34.067345   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44423
	I0717 01:55:34.067821   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.067911   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.068370   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.068390   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.068515   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.068526   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43231
	I0717 01:55:34.068535   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.068709   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.068991   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.068997   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.069278   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.069320   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.069529   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.069560   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.069611   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.069629   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.069977   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.070184   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:34.074013   71522 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-738184"
	W0717 01:55:34.074036   71522 addons.go:243] addon default-storageclass should already be in state true
	I0717 01:55:34.074062   71522 host.go:66] Checking if "default-k8s-diff-port-738184" exists ...
	I0717 01:55:34.074422   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.074463   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.085256   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43497
	I0717 01:55:34.085694   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35251
	I0717 01:55:34.085716   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.086207   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.086378   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.086402   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.086785   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.086945   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:34.086947   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.086999   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.087327   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.087624   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:34.088695   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:34.089320   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:34.090932   71522 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 01:55:34.090932   71522 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:31.543587   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:31.544073   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:31.544102   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:31.544012   72724 retry.go:31] will retry after 2.898539616s: waiting for machine to come up
	I0717 01:55:34.444315   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:34.444828   71603 main.go:141] libmachine: (no-preload-391501) DBG | unable to find current IP address of domain no-preload-391501 in network mk-no-preload-391501
	I0717 01:55:34.444870   71603 main.go:141] libmachine: (no-preload-391501) DBG | I0717 01:55:34.444790   72724 retry.go:31] will retry after 4.252719028s: waiting for machine to come up
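[editor's note] The no-preload-391501 lines show retry.go waiting for the VM to obtain a DHCP lease, sleeping for growing, jittered intervals between lookups ("will retry after 2.898539616s", then 4.25s). A hedged sketch of that retry shape; the delays, attempt cap and the stubbed lookup are illustrative only:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP stands in for the libvirt DHCP lease lookup; here it always fails
// so the retry path is visible.
func waitForIP() (string, error) {
	return "", errors.New("unable to find current IP address of domain")
}

func main() {
	backoff := 1 * time.Second
	for attempt := 1; attempt <= 10; attempt++ {
		ip, err := waitForIP()
		if err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// Jittered, roughly doubling delay, like the retry.go lines above.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("attempt %d failed (%v); will retry after %s\n", attempt, err, sleep)
		time.Sleep(sleep)
		backoff *= 2
	}
	fmt.Println("gave up waiting for machine to come up")
}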
	I0717 01:55:34.092892   71522 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:55:34.092910   71522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 01:55:34.092926   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:34.092985   71522 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 01:55:34.092993   71522 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 01:55:34.093003   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:34.095340   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35765
	I0717 01:55:34.095840   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.096397   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.096434   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.096567   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.096819   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.096979   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.097029   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:34.097058   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.097498   71522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:34.097536   71522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:34.097881   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:34.097897   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:34.097899   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:34.097923   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.098075   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:34.098105   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:34.098286   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:34.098320   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:34.098449   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:34.098461   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:34.113190   71522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43997
	I0717 01:55:34.113544   71522 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:34.114033   71522 main.go:141] libmachine: Using API Version  1
	I0717 01:55:34.114059   71522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:34.114375   71522 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:34.114575   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetState
	I0717 01:55:34.116332   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .DriverName
	I0717 01:55:34.116544   71522 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 01:55:34.116563   71522 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 01:55:34.116583   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHHostname
	I0717 01:55:34.119693   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.119992   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:fe:fe", ip: ""} in network mk-default-k8s-diff-port-738184: {Iface:virbr3 ExpiryTime:2024-07-17 02:55:10 +0000 UTC Type:0 Mac:52:54:00:e6:fe:fe Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:default-k8s-diff-port-738184 Clientid:01:52:54:00:e6:fe:fe}
	I0717 01:55:34.120017   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | domain default-k8s-diff-port-738184 has defined IP address 192.168.39.170 and MAC address 52:54:00:e6:fe:fe in network mk-default-k8s-diff-port-738184
	I0717 01:55:34.120457   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHPort
	I0717 01:55:34.120722   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHKeyPath
	I0717 01:55:34.120965   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .GetSSHUsername
	I0717 01:55:34.121652   71522 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/default-k8s-diff-port-738184/id_rsa Username:docker}
	I0717 01:55:34.247964   71522 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:55:34.266521   71522 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-738184" to be "Ready" ...
	I0717 01:55:34.370296   71522 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 01:55:34.370318   71522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 01:55:34.380102   71522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:55:34.394620   71522 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 01:55:34.394639   71522 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 01:55:34.409328   71522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 01:55:34.416653   71522 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:55:34.416684   71522 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 01:55:34.445296   71522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:55:35.605781   71522 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.196419762s)
	I0717 01:55:35.605843   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.605858   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.605854   71522 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.160520147s)
	I0717 01:55:35.605778   71522 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.225640358s)
	I0717 01:55:35.605929   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.605944   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.605988   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.606007   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.606293   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.606300   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.606309   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.606315   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.606319   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.606329   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.606333   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.606349   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.606357   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.606367   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.606371   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.606398   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.606410   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.606424   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.606640   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.607811   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.607852   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.607866   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.607874   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.607892   71522 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-738184"
	I0717 01:55:35.607815   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.607878   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.607829   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.607959   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.607842   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.613691   71522 main.go:141] libmachine: Making call to close driver server
	I0717 01:55:35.613717   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) Calling .Close
	I0717 01:55:35.614019   71522 main.go:141] libmachine: (default-k8s-diff-port-738184) DBG | Closing plugin on server side
	I0717 01:55:35.614025   71522 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:55:35.614081   71522 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:55:35.615871   71522 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
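[editor's note] The addon phase above stages the manifests under /etc/kubernetes/addons on the node and applies them with the node's own kubectl binary and kubeconfig, as shown in the Run lines. A hedged sketch of composing that apply from Go via os/exec; paths match the log, but this would only run on a host where those files actually exist:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command(
		"sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.30.2/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("kubectl apply failed:", err)
		os.Exit(1)
	}
}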
	I0717 01:55:38.700025   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.700533   71603 main.go:141] libmachine: (no-preload-391501) Found IP for machine: 192.168.61.174
	I0717 01:55:38.700555   71603 main.go:141] libmachine: (no-preload-391501) Reserving static IP address...
	I0717 01:55:38.700572   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has current primary IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.701013   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "no-preload-391501", mac: "52:54:00:e6:6b:1b", ip: "192.168.61.174"} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.701033   71603 main.go:141] libmachine: (no-preload-391501) Reserved static IP address: 192.168.61.174
	I0717 01:55:38.701049   71603 main.go:141] libmachine: (no-preload-391501) DBG | skip adding static IP to network mk-no-preload-391501 - found existing host DHCP lease matching {name: "no-preload-391501", mac: "52:54:00:e6:6b:1b", ip: "192.168.61.174"}
	I0717 01:55:38.701064   71603 main.go:141] libmachine: (no-preload-391501) DBG | Getting to WaitForSSH function...
	I0717 01:55:38.701080   71603 main.go:141] libmachine: (no-preload-391501) Waiting for SSH to be available...
	I0717 01:55:38.703218   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.703577   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.703605   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.703755   71603 main.go:141] libmachine: (no-preload-391501) DBG | Using SSH client type: external
	I0717 01:55:38.703773   71603 main.go:141] libmachine: (no-preload-391501) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa (-rw-------)
	I0717 01:55:38.703791   71603 main.go:141] libmachine: (no-preload-391501) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:55:38.703809   71603 main.go:141] libmachine: (no-preload-391501) DBG | About to run SSH command:
	I0717 01:55:38.703817   71603 main.go:141] libmachine: (no-preload-391501) DBG | exit 0
	I0717 01:55:38.827046   71603 main.go:141] libmachine: (no-preload-391501) DBG | SSH cmd err, output: <nil>: 
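[editor's note] WaitForSSH above shells out to the system ssh client and simply runs `exit 0` until the connection succeeds (the "SSH cmd err, output: <nil>" line is that first success). A hedged sketch of the loop; key path and address are taken from the log, the option list is trimmed and the attempt cap is an assumption:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", "/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa",
		"docker@192.168.61.174",
		"exit 0",
	}
	for attempt := 1; attempt <= 30; attempt++ {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			fmt.Println("SSH is available")
			return
		}
		fmt.Printf("attempt %d: SSH not ready yet, retrying\n", attempt)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for SSH")
}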
	I0717 01:55:38.827413   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetConfigRaw
	I0717 01:55:38.828102   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetIP
	I0717 01:55:38.831229   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.831782   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.831814   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.832140   71603 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/config.json ...
	I0717 01:55:38.832347   71603 machine.go:94] provisionDockerMachine start ...
	I0717 01:55:38.832367   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:38.832574   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:38.835302   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.835710   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.835735   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.835954   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:38.836173   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:38.836345   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:38.836521   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:38.836691   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:38.836928   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:38.836947   71603 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:55:38.943173   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:55:38.943213   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetMachineName
	I0717 01:55:38.943491   71603 buildroot.go:166] provisioning hostname "no-preload-391501"
	I0717 01:55:38.943513   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetMachineName
	I0717 01:55:38.943725   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:38.946396   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.946872   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:38.946900   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:38.946980   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:38.947164   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:38.947339   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:38.947518   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:38.947695   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:38.947849   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:38.947869   71603 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-391501 && echo "no-preload-391501" | sudo tee /etc/hostname
	I0717 01:55:39.070382   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-391501
	
	I0717 01:55:39.070429   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.073539   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.073904   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.073941   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.074203   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:39.074426   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.074624   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.074880   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:39.075132   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:39.075348   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:39.075373   71603 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-391501' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-391501/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-391501' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:55:39.195604   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:55:39.195634   71603 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:55:39.195649   71603 buildroot.go:174] setting up certificates
	I0717 01:55:39.195656   71603 provision.go:84] configureAuth start
	I0717 01:55:39.195665   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetMachineName
	I0717 01:55:39.195952   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetIP
	I0717 01:55:39.198409   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.198792   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.198822   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.198996   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.201509   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.201870   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.201901   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.202078   71603 provision.go:143] copyHostCerts
	I0717 01:55:39.202153   71603 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:55:39.202166   71603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:55:39.202221   71603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:55:39.202313   71603 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:55:39.202320   71603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:55:39.202339   71603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:55:39.202387   71603 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:55:39.202394   71603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:55:39.202410   71603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:55:39.202456   71603 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.no-preload-391501 san=[127.0.0.1 192.168.61.174 localhost minikube no-preload-391501]
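[editor's note] provision.go generates a machine server certificate signed by the profile CA, with the SAN list printed above (127.0.0.1, the VM IP, localhost, minikube, and the machine name). A hedged crypto/x509 sketch of that step; file names, validity period and the PKCS#1 CA key format are assumptions, and error handling is abbreviated:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA produced earlier in the run (placeholder paths).
	caCertPEM, err := os.ReadFile("certs/ca.pem")
	if err != nil {
		panic(err)
	}
	caKeyPEM, err := os.ReadFile("certs/ca-key.pem")
	if err != nil {
		panic(err)
	}
	caBlock, _ := pem.Decode(caCertPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		panic(err)
	}
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key
	if err != nil {
		panic(err)
	}

	// Fresh key for the machine's server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	// SANs mirror the "san=[...]" log line.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-391501"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // validity period is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-391501"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.174")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600)
}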
	I0717 01:55:39.550166   71603 provision.go:177] copyRemoteCerts
	I0717 01:55:39.550224   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:55:39.550249   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.552616   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.552990   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.553020   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.553135   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:39.553298   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.553460   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:39.553559   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 01:55:39.638467   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:55:39.664166   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 01:55:39.689416   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 01:55:39.714130   71603 provision.go:87] duration metric: took 518.463378ms to configureAuth
	I0717 01:55:39.714159   71603 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:55:39.714362   71603 config.go:182] Loaded profile config "no-preload-391501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 01:55:39.714440   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.717269   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.717694   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.717722   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.717880   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:39.718080   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.718240   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.718393   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:39.718621   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:39.718793   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:39.718809   71603 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:55:39.982066   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:55:39.982095   71603 machine.go:97] duration metric: took 1.149734372s to provisionDockerMachine
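
The /etc/sysconfig/crio.minikube drop-in written a few lines above is produced by piping a printf into tee over SSH and then restarting crio. A hedged Go sketch of assembling that command string, assuming only the option value shown in the log (this is not minikube's actual provisioner code):

	package main

	import "fmt"

	func main() {
		opts := "--insecure-registry 10.96.0.0/12 "
		// Build the same shell pipeline the provisioner runs on the guest.
		cmd := "sudo mkdir -p /etc/sysconfig && printf %s \"\n" +
			fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='%s'\n", opts) +
			"\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio"
		fmt.Println(cmd)
	}
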
	I0717 01:55:39.982110   71603 start.go:293] postStartSetup for "no-preload-391501" (driver="kvm2")
	I0717 01:55:39.982127   71603 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:55:39.982147   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:39.982429   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:55:39.982445   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:39.984935   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.985232   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:39.985269   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:39.985372   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:39.985553   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:39.985793   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:39.986010   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 01:55:40.074439   71603 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:55:40.079515   71603 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:55:40.079541   71603 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:55:40.079617   71603 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:55:40.079708   71603 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:55:40.079831   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:55:40.090783   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:40.121212   71603 start.go:296] duration metric: took 139.087761ms for postStartSetup
	I0717 01:55:40.121257   71603 fix.go:56] duration metric: took 21.089468917s for fixHost
	I0717 01:55:40.121281   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:40.124208   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.124517   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:40.124545   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.124753   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:40.124940   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:40.125119   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:40.125269   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:40.125430   71603 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:40.125626   71603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I0717 01:55:40.125638   71603 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:55:40.239538   71929 start.go:364] duration metric: took 3m52.741834986s to acquireMachinesLock for "old-k8s-version-901761"
	I0717 01:55:40.239610   71929 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:55:40.239618   71929 fix.go:54] fixHost starting: 
	I0717 01:55:40.240021   71929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:40.240054   71929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:40.257464   71929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39993
	I0717 01:55:40.257866   71929 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:40.258287   71929 main.go:141] libmachine: Using API Version  1
	I0717 01:55:40.258311   71929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:40.258672   71929 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:40.258871   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:40.259041   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetState
	I0717 01:55:40.260529   71929 fix.go:112] recreateIfNeeded on old-k8s-version-901761: state=Stopped err=<nil>
	I0717 01:55:40.260568   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	W0717 01:55:40.260721   71929 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:55:40.262590   71929 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-901761" ...
	I0717 01:55:35.617123   71522 addons.go:510] duration metric: took 1.565817066s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0717 01:55:36.270109   71522 node_ready.go:53] node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:38.270489   71522 node_ready.go:53] node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:40.270966   71522 node_ready.go:53] node "default-k8s-diff-port-738184" has status "Ready":"False"
	I0717 01:55:40.239384   71603 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181340.205508074
	
	I0717 01:55:40.239409   71603 fix.go:216] guest clock: 1721181340.205508074
	I0717 01:55:40.239419   71603 fix.go:229] Guest: 2024-07-17 01:55:40.205508074 +0000 UTC Remote: 2024-07-17 01:55:40.121261572 +0000 UTC m=+269.976034747 (delta=84.246502ms)
	I0717 01:55:40.239445   71603 fix.go:200] guest clock delta is within tolerance: 84.246502ms
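
The guest-clock check above runs date on the guest, parses the result, and accepts the drift when it is below a tolerance. A small Go sketch of that comparison, using the timestamps from the log; the 500ms tolerance is an assumption for illustration, not a value taken from minikube:

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	func main() {
		guestRaw := "1721181340.205508074" // seconds.nanoseconds reported by the guest
		secs, _ := strconv.ParseFloat(guestRaw, 64)
		guest := time.Unix(0, int64(secs*float64(time.Second)))

		// Host-side view of "now" at the moment of the check (from the log line).
		host := time.Date(2024, 7, 17, 1, 55, 40, 121261572, time.UTC)
		delta := guest.Sub(host)

		const tolerance = 500 * time.Millisecond // assumed threshold for illustration
		if math.Abs(float64(delta)) <= float64(tolerance) {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		}
	}
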
	I0717 01:55:40.239453   71603 start.go:83] releasing machines lock for "no-preload-391501", held for 21.207695176s
	I0717 01:55:40.239486   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:40.239768   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetIP
	I0717 01:55:40.242534   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.242923   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:40.242956   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.243159   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:40.243649   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:40.243826   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 01:55:40.243924   71603 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:55:40.243975   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:40.244045   71603 ssh_runner.go:195] Run: cat /version.json
	I0717 01:55:40.244071   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 01:55:40.246599   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.246958   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:40.246984   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.247089   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:40.247153   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.247254   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:40.247401   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:40.247486   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:40.247510   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:40.247579   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 01:55:40.247669   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 01:55:40.247861   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 01:55:40.248031   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 01:55:40.248169   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 01:55:40.328497   71603 ssh_runner.go:195] Run: systemctl --version
	I0717 01:55:40.350092   71603 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:55:40.497644   71603 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:55:40.504094   71603 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:55:40.504164   71603 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:55:40.526752   71603 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:55:40.526777   71603 start.go:495] detecting cgroup driver to use...
	I0717 01:55:40.526842   71603 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:55:40.543537   71603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:55:40.557551   71603 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:55:40.557606   71603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:55:40.571755   71603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:55:40.585548   71603 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:55:40.702991   71603 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:55:40.849192   71603 docker.go:233] disabling docker service ...
	I0717 01:55:40.849276   71603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:55:40.864697   71603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:55:40.877940   71603 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:55:41.043588   71603 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:55:41.175359   71603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:55:41.191170   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:55:41.212440   71603 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 01:55:41.212508   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.224335   71603 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:55:41.224411   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.235721   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.247575   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.260018   71603 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:55:41.271526   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.285999   71603 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.307653   71603 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:41.319272   71603 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:55:41.330544   71603 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:55:41.330637   71603 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:55:41.346698   71603 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:55:41.361983   71603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:41.490052   71603 ssh_runner.go:195] Run: sudo systemctl restart crio
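
The sequence just above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, sysctls) before restarting crio. A hedged Go sketch of composing those sed invocations; runRemote stands in for minikube's ssh_runner and only prints the commands here:

	package main

	import "fmt"

	func runRemote(cmd string) {
		fmt.Println("would run:", cmd)
	}

	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		pauseImage := "registry.k8s.io/pause:3.10"
		cgroupMgr := "cgroupfs"

		// Same style of in-place edits as the log lines above.
		runRemote(fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf))
		runRemote(fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupMgr, conf))
		runRemote("sudo systemctl daemon-reload && sudo systemctl restart crio")
	}
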
	I0717 01:55:41.639509   71603 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:55:41.639626   71603 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:55:41.646714   71603 start.go:563] Will wait 60s for crictl version
	I0717 01:55:41.646793   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:41.650900   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:55:41.688112   71603 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:55:41.688188   71603 ssh_runner.go:195] Run: crio --version
	I0717 01:55:41.717335   71603 ssh_runner.go:195] Run: crio --version
	I0717 01:55:41.750767   71603 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0717 01:55:40.263857   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .Start
	I0717 01:55:40.264019   71929 main.go:141] libmachine: (old-k8s-version-901761) Ensuring networks are active...
	I0717 01:55:40.264709   71929 main.go:141] libmachine: (old-k8s-version-901761) Ensuring network default is active
	I0717 01:55:40.265165   71929 main.go:141] libmachine: (old-k8s-version-901761) Ensuring network mk-old-k8s-version-901761 is active
	I0717 01:55:40.265581   71929 main.go:141] libmachine: (old-k8s-version-901761) Getting domain xml...
	I0717 01:55:40.266340   71929 main.go:141] libmachine: (old-k8s-version-901761) Creating domain...
	I0717 01:55:41.562582   71929 main.go:141] libmachine: (old-k8s-version-901761) Waiting to get IP...
	I0717 01:55:41.563329   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:41.563802   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:41.563890   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:41.563781   72905 retry.go:31] will retry after 216.264296ms: waiting for machine to come up
	I0717 01:55:41.781168   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:41.781662   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:41.781690   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:41.781629   72905 retry.go:31] will retry after 275.269814ms: waiting for machine to come up
	I0717 01:55:42.058127   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:42.058525   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:42.058564   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:42.058498   72905 retry.go:31] will retry after 348.024497ms: waiting for machine to come up
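
The retry.go lines above poll libvirt for a DHCP lease with steadily longer waits until the VM reports an IP. A simplified Go sketch of that wait loop; lookupIP is a placeholder for the lease query, and the backoff only roughly matches the intervals in the log:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func lookupIP() (string, error) {
		// Placeholder: minikube inspects the libvirt network's DHCP leases here.
		return "", errors.New("unable to find current IP address")
	}

	func main() {
		delay := 200 * time.Millisecond
		for attempt := 1; attempt <= 5; attempt++ {
			ip, err := lookupIP()
			if err == nil {
				fmt.Println("got IP:", ip)
				return
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay))) // jittered wait
			fmt.Printf("attempt %d: %v; will retry after %v\n", attempt, err, wait)
			time.Sleep(wait)
			delay *= 2
		}
		fmt.Println("machine did not come up in time")
	}
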
	I0717 01:55:41.752123   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetIP
	I0717 01:55:41.755114   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:41.755571   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 01:55:41.755602   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 01:55:41.755863   71603 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 01:55:41.760869   71603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:55:41.775414   71603 kubeadm.go:883] updating cluster {Name:no-preload-391501 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-391501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:55:41.775563   71603 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 01:55:41.775609   71603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:55:41.815115   71603 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0717 01:55:41.815141   71603 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 01:55:41.815207   71603 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:41.815241   71603 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:41.815279   71603 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:41.815290   71603 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:41.815207   71603 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:41.815304   71603 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0717 01:55:41.815239   71603 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:41.815258   71603 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:41.817894   71603 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:41.817939   71603 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:41.817892   71603 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:41.817888   71603 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0717 01:55:41.818033   71603 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:41.817891   71603 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:41.817900   71603 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:41.817978   71603 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:42.014545   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0717 01:55:42.030064   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:42.034517   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:42.123584   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:42.130122   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:42.134935   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:42.136170   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:42.173650   71603 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0717 01:55:42.173707   71603 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:42.173718   71603 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0717 01:55:42.173755   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.173767   71603 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:42.173820   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.219689   71603 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0717 01:55:42.219745   71603 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:42.219792   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.240802   71603 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0717 01:55:42.240847   71603 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:42.240907   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.251152   71603 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0717 01:55:42.251189   71603 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:42.251225   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.254790   71603 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0717 01:55:42.254849   71603 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:42.254886   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 01:55:42.254895   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:42.254916   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0717 01:55:42.254951   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 01:55:42.255006   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 01:55:42.257984   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0717 01:55:42.267440   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 01:55:42.395407   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0717 01:55:42.395471   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0717 01:55:42.395513   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0717 01:55:42.395522   71603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:55:42.395558   71603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:55:42.395582   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0717 01:55:42.395592   71603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:55:42.395663   71603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:55:42.397740   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0717 01:55:42.397813   71603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:55:42.420577   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0717 01:55:42.420602   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0717 01:55:42.420619   71603 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:55:42.420640   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0717 01:55:42.420662   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 01:55:42.420676   71603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:55:42.420705   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0717 01:55:42.420711   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0717 01:55:42.420738   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
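
The "copy: skipping ... (exists)" decisions above come from comparing a local cached image tarball against the result of stat on the guest. A simplified Go sketch of that check; the remote stat output and file name are illustrative, and only the size is compared here:

	package main

	import (
		"fmt"
		"os"
		"strconv"
		"strings"
	)

	// needsCopy reports whether the local tarball must be transferred, given
	// remote stat output of the form "<size> <mtime>".
	func needsCopy(localPath, remoteStat string) (bool, error) {
		fi, err := os.Stat(localPath)
		if err != nil {
			return false, err
		}
		fields := strings.Fields(remoteStat)
		if len(fields) < 1 {
			return true, nil // nothing on the remote side yet
		}
		remoteSize, err := strconv.ParseInt(fields[0], 10, 64)
		if err != nil {
			return true, nil
		}
		return remoteSize != fi.Size(), nil
	}

	func main() {
		copyIt, err := needsCopy("kube-proxy_v1.31.0-beta.0", "118000000 2024-07-17 01:50:00.000000000 +0000")
		if err != nil {
			fmt.Println("local tarball missing:", err)
			return
		}
		fmt.Println("needs copy:", copyIt)
	}
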
	I0717 01:55:43.737662   71603 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:44.581683   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.160996964s)
	I0717 01:55:44.581730   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0717 01:55:44.581753   71603 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:55:44.581754   71603 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.161058602s)
	I0717 01:55:44.581788   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0717 01:55:44.581810   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 01:55:44.581858   71603 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 01:55:44.581900   71603 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:44.581928   71603 ssh_runner.go:195] Run: which crictl
	I0717 01:55:41.270830   71522 node_ready.go:49] node "default-k8s-diff-port-738184" has status "Ready":"True"
	I0717 01:55:41.270853   71522 node_ready.go:38] duration metric: took 7.004304151s for node "default-k8s-diff-port-738184" to be "Ready" ...
	I0717 01:55:41.270868   71522 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:55:41.278587   71522 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.285210   71522 pod_ready.go:92] pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:41.285236   71522 pod_ready.go:81] duration metric: took 6.623347ms for pod "coredns-7db6d8ff4d-9w26c" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.285250   71522 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.291110   71522 pod_ready.go:92] pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:41.291133   71522 pod_ready.go:81] duration metric: took 5.874809ms for pod "coredns-7db6d8ff4d-js7sn" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.291145   71522 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.297614   71522 pod_ready.go:92] pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:41.297636   71522 pod_ready.go:81] duration metric: took 6.483783ms for pod "etcd-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:41.297645   71522 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.305307   71522 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:42.305335   71522 pod_ready.go:81] duration metric: took 1.007681338s for pod "kube-apiserver-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.305349   71522 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.472190   71522 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:42.472222   71522 pod_ready.go:81] duration metric: took 166.864153ms for pod "kube-controller-manager-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.472236   71522 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c4n94" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.871756   71522 pod_ready.go:92] pod "kube-proxy-c4n94" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:42.871780   71522 pod_ready.go:81] duration metric: took 399.536375ms for pod "kube-proxy-c4n94" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:42.871789   71522 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:43.272858   71522 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace has status "Ready":"True"
	I0717 01:55:43.272895   71522 pod_ready.go:81] duration metric: took 401.098971ms for pod "kube-scheduler-default-k8s-diff-port-738184" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:43.272913   71522 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace to be "Ready" ...
	I0717 01:55:45.281019   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
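
The pod_ready lines above poll each system pod until its Ready condition reports "True" (metrics-server stays "False" in these lines). A minimal, dependency-free sketch of that readiness check; the condition type is a trimmed stand-in for the corev1 PodCondition:

	package main

	import "fmt"

	type condition struct {
		Type   string
		Status string
	}

	// isPodReady returns true only when the Ready condition is "True".
	func isPodReady(conds []condition) bool {
		for _, c := range conds {
			if c.Type == "Ready" {
				return c.Status == "True"
			}
		}
		return false
	}

	func main() {
		metricsServer := []condition{{Type: "Ready", Status: "False"}}
		etcd := []condition{{Type: "Ready", Status: "True"}}
		fmt.Println(isPodReady(metricsServer), isPodReady(etcd)) // false true
	}
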
	I0717 01:55:42.407813   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:42.408311   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:42.408346   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:42.408218   72905 retry.go:31] will retry after 388.717436ms: waiting for machine to come up
	I0717 01:55:42.798810   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:42.799378   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:42.799411   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:42.799323   72905 retry.go:31] will retry after 661.391346ms: waiting for machine to come up
	I0717 01:55:43.462189   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:43.462654   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:43.462686   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:43.462603   72905 retry.go:31] will retry after 636.142497ms: waiting for machine to come up
	I0717 01:55:44.100416   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:44.100852   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:44.100874   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:44.100808   72905 retry.go:31] will retry after 781.652918ms: waiting for machine to come up
	I0717 01:55:44.883650   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:44.884137   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:44.884170   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:44.884088   72905 retry.go:31] will retry after 1.238608293s: waiting for machine to come up
	I0717 01:55:46.124419   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:46.124911   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:46.124942   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:46.124854   72905 retry.go:31] will retry after 1.169011508s: waiting for machine to come up
	I0717 01:55:47.295202   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:47.295679   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:47.295715   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:47.295632   72905 retry.go:31] will retry after 1.723987128s: waiting for machine to come up
	I0717 01:55:47.004929   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.423090292s)
	I0717 01:55:47.004968   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0717 01:55:47.004990   71603 ssh_runner.go:235] Completed: which crictl: (2.423045276s)
	I0717 01:55:47.005021   71603 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:55:47.005053   71603 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:55:47.005067   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0717 01:55:49.097703   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.092610651s)
	I0717 01:55:49.097747   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0717 01:55:49.097776   71603 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:55:49.097836   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 01:55:49.097776   71603 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.092700925s)
	I0717 01:55:49.097953   71603 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 01:55:49.098050   71603 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:55:47.781233   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:49.786039   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:49.020883   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:49.021363   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:49.021396   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:49.021279   72905 retry.go:31] will retry after 2.098481296s: waiting for machine to come up
	I0717 01:55:51.121693   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:51.122253   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:51.122282   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:51.122192   72905 retry.go:31] will retry after 2.624839429s: waiting for machine to come up
	I0717 01:55:50.560197   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.462322087s)
	I0717 01:55:50.560292   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0717 01:55:50.560323   71603 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:55:50.560252   71603 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.462175943s)
	I0717 01:55:50.560373   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0717 01:55:50.560388   71603 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 01:55:53.630471   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.070071936s)
	I0717 01:55:53.630509   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0717 01:55:53.630529   71603 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:55:53.630604   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 01:55:52.280585   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:54.779606   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:53.748796   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:53.749348   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | unable to find current IP address of domain old-k8s-version-901761 in network mk-old-k8s-version-901761
	I0717 01:55:53.749390   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | I0717 01:55:53.749298   72905 retry.go:31] will retry after 3.47930356s: waiting for machine to come up
	I0717 01:55:57.231901   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.232407   71929 main.go:141] libmachine: (old-k8s-version-901761) Found IP for machine: 192.168.50.44
	I0717 01:55:57.232437   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has current primary IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.232449   71929 main.go:141] libmachine: (old-k8s-version-901761) Reserving static IP address...
	I0717 01:55:57.232880   71929 main.go:141] libmachine: (old-k8s-version-901761) Reserved static IP address: 192.168.50.44
	I0717 01:55:57.232928   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "old-k8s-version-901761", mac: "52:54:00:8f:84:01", ip: "192.168.50.44"} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.232937   71929 main.go:141] libmachine: (old-k8s-version-901761) Waiting for SSH to be available...
	I0717 01:55:57.232952   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | skip adding static IP to network mk-old-k8s-version-901761 - found existing host DHCP lease matching {name: "old-k8s-version-901761", mac: "52:54:00:8f:84:01", ip: "192.168.50.44"}
	I0717 01:55:57.232971   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | Getting to WaitForSSH function...
	I0717 01:55:57.235007   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.235208   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.235242   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.235421   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | Using SSH client type: external
	I0717 01:55:57.235461   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa (-rw-------)
	I0717 01:55:57.235502   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.44 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:55:57.235516   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | About to run SSH command:
	I0717 01:55:57.235530   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | exit 0
	I0717 01:55:57.362619   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | SSH cmd err, output: <nil>: 
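
The WaitForSSH step above shells out to /usr/bin/ssh with the option list shown in the DBG line and runs "exit 0" until it succeeds. A hedged Go sketch of issuing that probe with os/exec; the host, port and key path are taken from the log, the option list is trimmed, and this is not libmachine's actual helper:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", "/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa",
			"-p", "22",
			"docker@192.168.50.44",
			"exit 0",
		}
		cmd := exec.Command("/usr/bin/ssh", args...)
		if err := cmd.Run(); err != nil {
			fmt.Println("ssh probe failed:", err)
			return
		}
		fmt.Println("ssh is available")
	}
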
	I0717 01:55:57.363106   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetConfigRaw
	I0717 01:55:57.363760   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:55:57.366213   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.366636   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.366666   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.366958   71929 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/config.json ...
	I0717 01:55:57.367165   71929 machine.go:94] provisionDockerMachine start ...
	I0717 01:55:57.367188   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:57.367392   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.370017   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.370354   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.370371   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.370577   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:57.370765   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.370935   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.371084   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:57.371325   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:57.371506   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:57.371518   71929 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:55:58.531714   71146 start.go:364] duration metric: took 53.154741813s to acquireMachinesLock for "embed-certs-940222"
	I0717 01:55:58.531773   71146 start.go:96] Skipping create...Using existing machine configuration
	I0717 01:55:58.531784   71146 fix.go:54] fixHost starting: 
	I0717 01:55:58.532189   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:55:58.532237   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:55:58.549026   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43799
	I0717 01:55:58.549491   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:55:58.550001   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:55:58.550025   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:55:58.550363   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:55:58.550536   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:55:58.550707   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:55:58.552236   71146 fix.go:112] recreateIfNeeded on embed-certs-940222: state=Stopped err=<nil>
	I0717 01:55:58.552259   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	W0717 01:55:58.552397   71146 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 01:55:58.554487   71146 out.go:177] * Restarting existing kvm2 VM for "embed-certs-940222" ...
	I0717 01:55:57.478893   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:55:57.478921   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetMachineName
	I0717 01:55:57.479123   71929 buildroot.go:166] provisioning hostname "old-k8s-version-901761"
	I0717 01:55:57.479142   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetMachineName
	I0717 01:55:57.479330   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.482163   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.482531   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.482579   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.482739   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:57.482937   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.483111   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.483264   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:57.483454   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:57.483632   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:57.483648   71929 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-901761 && echo "old-k8s-version-901761" | sudo tee /etc/hostname
	I0717 01:55:57.613409   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-901761
	
	I0717 01:55:57.613440   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.616228   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.616614   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.616655   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.616860   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:57.617040   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.617222   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.617383   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:57.617574   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:57.617778   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:57.617794   71929 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-901761' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-901761/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-901761' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:55:57.737648   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
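The shell fragment above is how the provisioner keeps /etc/hosts in step with the new hostname: it only touches the file if no line already ends with the hostname, and it prefers rewriting an existing 127.0.1.1 entry over appending a new one. A small illustrative sketch that builds the same fragment for an arbitrary hostname (the helper name is hypothetical):

package main

import "fmt"

// hostsFixupScript returns a shell fragment that ensures /etc/hosts maps
// 127.0.1.1 to the given hostname, mirroring the snippet in the log above.
func hostsFixupScript(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(hostsFixupScript("old-k8s-version-901761"))
}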
	I0717 01:55:57.737683   71929 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:55:57.737703   71929 buildroot.go:174] setting up certificates
	I0717 01:55:57.737711   71929 provision.go:84] configureAuth start
	I0717 01:55:57.737721   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetMachineName
	I0717 01:55:57.738028   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:55:57.741089   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.741532   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.741556   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.741741   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.744444   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.744917   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.744947   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.745111   71929 provision.go:143] copyHostCerts
	I0717 01:55:57.745185   71929 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:55:57.745202   71929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:55:57.745273   71929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:55:57.745393   71929 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:55:57.745405   71929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:55:57.745437   71929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:55:57.745517   71929 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:55:57.745527   71929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:55:57.745545   71929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:55:57.745602   71929 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-901761 san=[127.0.0.1 192.168.50.44 localhost minikube old-k8s-version-901761]
	I0717 01:55:57.830872   71929 provision.go:177] copyRemoteCerts
	I0717 01:55:57.830939   71929 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:55:57.830972   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:57.833463   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.833741   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:57.833777   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:57.833887   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:57.834083   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:57.834250   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:57.834403   71929 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:55:57.918346   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 01:55:57.954250   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:55:57.979770   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0717 01:55:58.005161   71929 provision.go:87] duration metric: took 267.436975ms to configureAuth
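configureAuth regenerates server.pem with the SANs listed above (loopback, the DHCP-assigned 192.168.50.44, localhost, minikube and the machine name) and copies it to /etc/docker on the guest. If one wanted to confirm a given PEM really covers the guest address, a sketch along these lines would do it (file path and helper name are assumptions, not part of the test):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// verifySAN checks whether a PEM-encoded certificate lists the given host
// (here an IP) among its SANs, similar to what the provisioned server.pem
// must satisfy for 192.168.50.44. Paths and names are illustrative.
func verifySAN(pemPath, host string) error {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	return cert.VerifyHostname(host)
}

func main() {
	if err := verifySAN("server.pem", "192.168.50.44"); err != nil {
		fmt.Println("SAN check failed:", err)
		return
	}
	fmt.Println("certificate covers the expected address")
}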
	I0717 01:55:58.005193   71929 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:55:58.005412   71929 config.go:182] Loaded profile config "old-k8s-version-901761": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 01:55:58.005493   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.008255   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.008626   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.008663   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.008833   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.009006   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.009170   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.009298   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.009464   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:58.009616   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:58.009639   71929 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:55:58.281081   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:55:58.281112   71929 machine.go:97] duration metric: took 913.933405ms to provisionDockerMachine
	I0717 01:55:58.281121   71929 start.go:293] postStartSetup for "old-k8s-version-901761" (driver="kvm2")
	I0717 01:55:58.281130   71929 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:55:58.281144   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.281497   71929 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:55:58.281533   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.284465   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.284812   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.284840   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.285023   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.285207   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.285441   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.285650   71929 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:55:58.377149   71929 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:55:58.381709   71929 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:55:58.381731   71929 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:55:58.381798   71929 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:55:58.381887   71929 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:55:58.381972   71929 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:55:58.392916   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:58.420677   71929 start.go:296] duration metric: took 139.542186ms for postStartSetup
	I0717 01:55:58.420721   71929 fix.go:56] duration metric: took 18.181102939s for fixHost
	I0717 01:55:58.420745   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.423582   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.423961   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.423989   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.424169   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.424372   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.424557   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.424693   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.424859   71929 main.go:141] libmachine: Using SSH client type: native
	I0717 01:55:58.425040   71929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.44 22 <nil> <nil>}
	I0717 01:55:58.425053   71929 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:55:58.531563   71929 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181358.508735025
	
	I0717 01:55:58.531585   71929 fix.go:216] guest clock: 1721181358.508735025
	I0717 01:55:58.531594   71929 fix.go:229] Guest: 2024-07-17 01:55:58.508735025 +0000 UTC Remote: 2024-07-17 01:55:58.420726806 +0000 UTC m=+251.057483904 (delta=88.008219ms)
	I0717 01:55:58.531617   71929 fix.go:200] guest clock delta is within tolerance: 88.008219ms
	I0717 01:55:58.531624   71929 start.go:83] releasing machines lock for "old-k8s-version-901761", held for 18.292040224s
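The guest-clock check above compares the output of `date +%s.%N` on the VM against the host clock and accepts the machine when the delta is within tolerance (about 88ms on this run). An illustrative Go version of that comparison, using the timestamps from this run (parsing and threshold handling are simplified, not minikube's actual code):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the output of `date +%s.%N` from the guest and returns
// the absolute difference from the supplied local time.
func clockDelta(guestOut string, local time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	d := local.Sub(guest)
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	// Values taken from the log lines above; the result is roughly 88ms.
	d, err := clockDelta("1721181358.508735025", time.Unix(1721181358, 420726806))
	if err != nil {
		panic(err)
	}
	fmt.Println("guest clock delta:", d)
}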
	I0717 01:55:58.531655   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.531981   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:55:58.534476   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.534967   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.534996   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.535258   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.535802   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.535990   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .DriverName
	I0717 01:55:58.536105   71929 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:55:58.536183   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.536244   71929 ssh_runner.go:195] Run: cat /version.json
	I0717 01:55:58.536275   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHHostname
	I0717 01:55:58.539139   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.539401   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.539534   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.539560   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.539768   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.539815   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:55:58.539845   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:55:58.539968   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.540000   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHPort
	I0717 01:55:58.540116   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.540142   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHKeyPath
	I0717 01:55:58.540243   71929 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:55:58.540332   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetSSHUsername
	I0717 01:55:58.540468   71929 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/old-k8s-version-901761/id_rsa Username:docker}
	I0717 01:55:58.628291   71929 ssh_runner.go:195] Run: systemctl --version
	I0717 01:55:58.656964   71929 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:55:58.806516   71929 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:55:58.815051   71929 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:55:58.815113   71929 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:55:58.838575   71929 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:55:58.838596   71929 start.go:495] detecting cgroup driver to use...
	I0717 01:55:58.838662   71929 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:55:58.855728   71929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:55:58.875221   71929 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:55:58.875285   71929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:55:58.889781   71929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:55:58.903832   71929 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:55:59.026815   71929 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:55:59.173879   71929 docker.go:233] disabling docker service ...
	I0717 01:55:59.173964   71929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:55:59.192906   71929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:55:59.208262   71929 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:55:59.368178   71929 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:55:59.500335   71929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:55:59.514795   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:55:59.535553   71929 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 01:55:59.535631   71929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:59.548304   71929 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:55:59.548376   71929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:59.563066   71929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:55:59.578452   71929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
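The sed runs above pin the pause image to registry.k8s.io/pause:3.2 and force cgroup_manager = "cgroupfs" with conmon_cgroup = "pod" in /etc/crio/crio.conf.d/02-crio.conf. A rough equivalent of the first two rewrites expressed in Go, for illustration only (it edits an in-memory string, not the real drop-in file):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf mirrors the sed edits in the log above: force the pause
// image and switch the cgroup manager to cgroupfs.
func rewriteCrioConf(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in))
}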
	I0717 01:55:59.593447   71929 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:55:59.606239   71929 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:55:59.617051   71929 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:55:59.617118   71929 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:55:59.632601   71929 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:55:59.645034   71929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:59.812343   71929 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:55:59.969366   71929 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:55:59.969444   71929 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:55:59.974286   71929 start.go:563] Will wait 60s for crictl version
	I0717 01:55:59.974335   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:55:59.978280   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:56:00.020399   71929 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:56:00.020489   71929 ssh_runner.go:195] Run: crio --version
	I0717 01:56:00.049811   71929 ssh_runner.go:195] Run: crio --version
	I0717 01:56:00.081952   71929 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0717 01:55:55.703286   71603 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.07265838s)
	I0717 01:55:55.703312   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0717 01:55:55.703342   71603 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:55:55.703396   71603 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 01:55:56.651520   71603 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 01:55:56.651563   71603 cache_images.go:123] Successfully loaded all cached images
	I0717 01:55:56.651569   71603 cache_images.go:92] duration metric: took 14.83641531s to LoadCachedImages
	I0717 01:55:56.651581   71603 kubeadm.go:934] updating node { 192.168.61.174 8443 v1.31.0-beta.0 crio true true} ...
	I0717 01:55:56.651702   71603 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-391501 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-391501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:55:56.651770   71603 ssh_runner.go:195] Run: crio config
	I0717 01:55:56.700129   71603 cni.go:84] Creating CNI manager for ""
	I0717 01:55:56.700152   71603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:55:56.700162   71603 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:55:56.700189   71603 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.174 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-391501 NodeName:no-preload-391501 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:55:56.700315   71603 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-391501"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:55:56.700372   71603 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 01:55:56.711859   71603 binaries.go:44] Found k8s binaries, skipping transfer
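The kubeadm configuration dumped a few lines above is rendered from the options struct logged at kubeadm.go:181 and then scp'd to /var/tmp/minikube/kubeadm.yaml.new. As a sketch of that render step, a reduced template with only a handful of fields (the struct, field names and template here are stand-ins, not minikube's real template):

package main

import (
	"os"
	"text/template"
)

// kubeadmParams is a reduced stand-in for the options struct above,
// carrying only the fields this illustrative template consumes.
type kubeadmParams struct {
	AdvertiseAddress string
	NodeName         string
	PodSubnet        string
}

const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(clusterTmpl))
	_ = t.Execute(os.Stdout, kubeadmParams{
		AdvertiseAddress: "192.168.61.174",
		NodeName:         "no-preload-391501",
		PodSubnet:        "10.244.0.0/16",
	})
}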
	I0717 01:55:56.711936   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:55:56.721994   71603 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0717 01:55:56.738335   71603 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 01:55:56.755198   71603 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0717 01:55:56.772467   71603 ssh_runner.go:195] Run: grep 192.168.61.174	control-plane.minikube.internal$ /etc/hosts
	I0717 01:55:56.777580   71603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
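The bash pipeline above makes the control-plane.minikube.internal entry in /etc/hosts idempotent: any existing line for that name is dropped and a fresh ip<TAB>name mapping is appended. The same transform sketched in Go (the function name is illustrative):

package main

import (
	"fmt"
	"strings"
)

// upsertHostsLine removes any line ending in "\t<name>" and appends a fresh
// "ip\tname" entry, mirroring the grep -v / echo pipeline in the log above.
func upsertHostsLine(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	in := "127.0.0.1\tlocalhost\n192.168.61.1\tcontrol-plane.minikube.internal\n"
	fmt.Print(upsertHostsLine(in, "192.168.61.174", "control-plane.minikube.internal"))
}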
	I0717 01:55:56.792767   71603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:55:56.913075   71603 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:55:56.930746   71603 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501 for IP: 192.168.61.174
	I0717 01:55:56.930768   71603 certs.go:194] generating shared ca certs ...
	I0717 01:55:56.930783   71603 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:55:56.930929   71603 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:55:56.930968   71603 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:55:56.930978   71603 certs.go:256] generating profile certs ...
	I0717 01:55:56.931050   71603 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/client.key
	I0717 01:55:56.931112   71603 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/apiserver.key.a30174c9
	I0717 01:55:56.931153   71603 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/proxy-client.key
	I0717 01:55:56.931292   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:55:56.931331   71603 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:55:56.931344   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:55:56.931373   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:55:56.931404   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:55:56.931434   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:55:56.931478   71603 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:55:56.932180   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:55:56.971111   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:55:57.016791   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:55:57.049766   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:55:57.078139   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 01:55:57.109781   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 01:55:57.137912   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:55:57.165141   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/no-preload-391501/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 01:55:57.190210   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:55:57.214366   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:55:57.239518   71603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:55:57.265505   71603 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:55:57.283773   71603 ssh_runner.go:195] Run: openssl version
	I0717 01:55:57.289846   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:55:57.300434   71603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:57.305370   71603 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:57.305456   71603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:55:57.311765   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:55:57.322769   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:55:57.334122   71603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:55:57.338774   71603 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:55:57.338823   71603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:55:57.344721   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:55:57.356476   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:55:57.368672   71603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:55:57.374055   71603 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:55:57.374107   71603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:55:57.380256   71603 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:55:57.392428   71603 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:55:57.397593   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:55:57.404378   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:55:57.411094   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:55:57.418536   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:55:57.425312   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:55:57.431841   71603 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
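Each `openssl x509 -checkend 86400` call above asks whether a certificate will expire within the next 24 hours; a non-zero exit would force regeneration before the cluster restart continues. An equivalent check with crypto/x509, as a sketch (the file name in main is a placeholder):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// which is what `openssl x509 -checkend 86400` tests for a 24h window.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}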
	I0717 01:55:57.438615   71603 kubeadm.go:392] StartCluster: {Name:no-preload-391501 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-391501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0
m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:55:57.438696   71603 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:55:57.438782   71603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:55:57.482932   71603 cri.go:89] found id: ""
	I0717 01:55:57.482993   71603 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:55:57.493813   71603 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:55:57.493832   71603 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:55:57.493872   71603 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:55:57.504757   71603 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:55:57.505655   71603 kubeconfig.go:125] found "no-preload-391501" server: "https://192.168.61.174:8443"
	I0717 01:55:57.507634   71603 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:55:57.517990   71603 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.174
	I0717 01:55:57.518025   71603 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:55:57.518038   71603 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:55:57.518090   71603 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:55:57.557504   71603 cri.go:89] found id: ""
	I0717 01:55:57.557588   71603 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:55:57.574074   71603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:55:57.583703   71603 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:55:57.583724   71603 kubeadm.go:157] found existing configuration files:
	
	I0717 01:55:57.583768   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:55:57.593924   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:55:57.593992   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:55:57.606945   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:55:57.616803   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:55:57.616847   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:55:57.627215   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:55:57.637121   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:55:57.637179   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:55:57.646291   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:55:57.655314   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:55:57.655372   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:55:57.666994   71603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:55:57.677582   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:57.798148   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:59.316598   71603 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.518419797s)
	I0717 01:55:59.316629   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:59.581666   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:59.675003   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:55:59.748682   71603 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:55:59.748771   71603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:55:56.781465   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:55:59.280394   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:00.083384   71929 main.go:141] libmachine: (old-k8s-version-901761) Calling .GetIP
	I0717 01:56:00.086085   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:56:00.086454   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:84:01", ip: ""} in network mk-old-k8s-version-901761: {Iface:virbr1 ExpiryTime:2024-07-17 02:55:51 +0000 UTC Type:0 Mac:52:54:00:8f:84:01 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:old-k8s-version-901761 Clientid:01:52:54:00:8f:84:01}
	I0717 01:56:00.086494   71929 main.go:141] libmachine: (old-k8s-version-901761) DBG | domain old-k8s-version-901761 has defined IP address 192.168.50.44 and MAC address 52:54:00:8f:84:01 in network mk-old-k8s-version-901761
	I0717 01:56:00.086710   71929 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 01:56:00.091322   71929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:56:00.104102   71929 kubeadm.go:883] updating cluster {Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:56:00.104237   71929 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 01:56:00.104309   71929 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:56:00.152445   71929 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 01:56:00.152537   71929 ssh_runner.go:195] Run: which lz4
	I0717 01:56:00.156760   71929 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 01:56:00.161123   71929 ssh_runner.go:362] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:56:00.161149   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 01:56:02.031804   71929 crio.go:462] duration metric: took 1.875087246s to copy over tarball
	I0717 01:56:02.031904   71929 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
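(Editor's note: the preload handling logged just above -- a failed existence check for /preloaded.tar.lz4, an scp of the cached tarball, then an lz4 extraction into /var -- can be sketched roughly as the shell fragment below, as if run on the node. The file names and tar flags are taken from the log; everything else is illustrative, not minikube's actual implementation.)

# minimal sketch of the preload flow, assuming the cached tarball has already been copied onto the node
PRELOAD=/preloaded.tar.lz4
if ! stat -c "%s %y" "$PRELOAD" >/dev/null 2>&1; then
    echo "preload tarball missing; the log copies preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 over scp first" >&2
fi
# unpack the cached image layers into /var, preserving security xattrs, then drop the tarball
sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf "$PRELOAD"
sudo rm -f "$PRELOAD"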
	I0717 01:55:58.556014   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Start
	I0717 01:55:58.556171   71146 main.go:141] libmachine: (embed-certs-940222) Ensuring networks are active...
	I0717 01:55:58.556866   71146 main.go:141] libmachine: (embed-certs-940222) Ensuring network default is active
	I0717 01:55:58.557237   71146 main.go:141] libmachine: (embed-certs-940222) Ensuring network mk-embed-certs-940222 is active
	I0717 01:55:58.557686   71146 main.go:141] libmachine: (embed-certs-940222) Getting domain xml...
	I0717 01:55:58.558375   71146 main.go:141] libmachine: (embed-certs-940222) Creating domain...
	I0717 01:55:59.917419   71146 main.go:141] libmachine: (embed-certs-940222) Waiting to get IP...
	I0717 01:55:59.918379   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:55:59.918849   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:55:59.918908   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:55:59.918833   73097 retry.go:31] will retry after 248.560075ms: waiting for machine to come up
	I0717 01:56:00.169337   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:00.169877   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:00.169898   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:00.169837   73097 retry.go:31] will retry after 380.159418ms: waiting for machine to come up
	I0717 01:56:00.551472   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:00.552033   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:00.552076   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:00.551987   73097 retry.go:31] will retry after 439.990107ms: waiting for machine to come up
	I0717 01:56:00.993776   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:00.994337   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:00.994351   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:00.994319   73097 retry.go:31] will retry after 415.462036ms: waiting for machine to come up
	I0717 01:56:01.412114   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:01.412508   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:01.412535   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:01.412484   73097 retry.go:31] will retry after 660.852153ms: waiting for machine to come up
	I0717 01:56:02.075095   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:02.075519   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:02.075541   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:02.075498   73097 retry.go:31] will retry after 788.200532ms: waiting for machine to come up
	I0717 01:56:00.249300   71603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:00.749610   71603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:00.823943   71603 api_server.go:72] duration metric: took 1.075254107s to wait for apiserver process to appear ...
	I0717 01:56:00.823980   71603 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:56:00.824006   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:00.825286   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": dial tcp 192.168.61.174:8443: connect: connection refused
	I0717 01:56:01.325032   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
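(Editor's note: the healthz probing above -- "connection refused" at first, then "context deadline exceeded" timeouts further down -- amounts to polling the apiserver's /healthz endpoint until it answers. A minimal equivalent, assuming curl is available on the host and skipping certificate verification; the address is taken from the log.)

# poll the apiserver health endpoint; the log retries roughly every 500ms
until curl -ksf --max-time 5 https://192.168.61.174:8443/healthz >/dev/null; do
    sleep 0.5
done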
	I0717 01:56:01.281044   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:03.281329   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:05.092637   71929 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.060698331s)
	I0717 01:56:05.092674   71929 crio.go:469] duration metric: took 3.060839356s to extract the tarball
	I0717 01:56:05.092682   71929 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 01:56:05.135461   71929 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:56:05.170789   71929 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 01:56:05.170814   71929 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 01:56:05.170853   71929 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:56:05.170884   71929 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.170908   71929 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.170961   71929 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 01:56:05.171077   71929 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.171126   71929 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.171138   71929 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.171462   71929 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.172182   71929 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 01:56:05.172224   71929 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.172251   71929 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:56:05.172296   71929 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.172362   71929 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.172415   71929 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.172449   71929 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.172251   71929 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.372794   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.415131   71929 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 01:56:05.415181   71929 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.415231   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.419179   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 01:56:05.446530   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 01:56:05.452583   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 01:56:05.485692   71929 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 01:56:05.485734   71929 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 01:56:05.485780   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.486154   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.487346   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.489408   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.490486   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 01:56:05.494929   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.499420   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.593505   71929 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 01:56:05.593587   71929 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.593638   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.632564   71929 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 01:56:05.632615   71929 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.632667   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.657745   71929 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 01:56:05.657792   71929 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.657852   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.657863   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 01:56:05.657908   71929 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 01:56:05.657943   71929 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.657958   71929 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 01:56:05.657976   71929 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.657980   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.658004   71929 ssh_runner.go:195] Run: which crictl
	I0717 01:56:05.658037   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 01:56:05.658077   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 01:56:05.671679   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 01:56:05.671708   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 01:56:05.736572   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 01:56:05.736599   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 01:56:05.736671   71929 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 01:56:05.758178   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 01:56:05.758210   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 01:56:05.787948   71929 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 01:56:06.882199   71929 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:56:07.025117   71929 cache_images.go:92] duration metric: took 1.854284265s to LoadCachedImages
	W0717 01:56:07.025227   71929 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19264-3908/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0717 01:56:07.025245   71929 kubeadm.go:934] updating node { 192.168.50.44 8443 v1.20.0 crio true true} ...
	I0717 01:56:07.025378   71929 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-901761 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.44
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
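(Editor's note: the kubelet unit override printed above is later written out as a systemd drop-in and activated -- see the scp of 10-kubeadm.conf and the daemon-reload/start lines below. In shell terms it is roughly the following; the ExecStart flags are copied from the log, the heredoc itself is an illustrative sketch rather than minikube's exact mechanism.)

sudo mkdir -p /etc/systemd/system/kubelet.service.d
sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-901761 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.44
EOF
sudo systemctl daemon-reload
sudo systemctl start kubelet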
	I0717 01:56:07.025465   71929 ssh_runner.go:195] Run: crio config
	I0717 01:56:07.081517   71929 cni.go:84] Creating CNI manager for ""
	I0717 01:56:07.081543   71929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:56:07.081560   71929 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:56:07.081584   71929 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.44 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-901761 NodeName:old-k8s-version-901761 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.44"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.44 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 01:56:07.081749   71929 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.44
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-901761"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.44
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.44"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:56:07.081833   71929 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 01:56:07.092233   71929 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:56:07.092335   71929 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:56:07.102086   71929 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0717 01:56:07.121538   71929 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:56:07.139112   71929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0717 01:56:07.157397   71929 ssh_runner.go:195] Run: grep 192.168.50.44	control-plane.minikube.internal$ /etc/hosts
	I0717 01:56:07.161818   71929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.44	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
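(Editor's note: the grep followed by the brace-group rewrite above is the idempotent pattern used to pin control-plane.minikube.internal in /etc/hosts. Spelled out with comments -- IP and hostname taken from the log, the -q check is a simplification of the exit-status handling.)

# only rewrite /etc/hosts if the exact entry is not already present
if ! grep -q $'192.168.50.44\tcontrol-plane.minikube.internal$' /etc/hosts; then
    # strip any stale line for the name, append the fresh mapping, then copy the result back into place
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; echo $'192.168.50.44\tcontrol-plane.minikube.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
fi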
	I0717 01:56:07.174723   71929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:56:07.307484   71929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:56:07.325948   71929 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761 for IP: 192.168.50.44
	I0717 01:56:07.325974   71929 certs.go:194] generating shared ca certs ...
	I0717 01:56:07.326002   71929 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:07.326164   71929 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:56:07.326216   71929 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:56:07.326229   71929 certs.go:256] generating profile certs ...
	I0717 01:56:07.326351   71929 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/client.key
	I0717 01:56:07.326416   71929 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.key.f41162e5
	I0717 01:56:07.326461   71929 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/proxy-client.key
	I0717 01:56:07.326630   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:56:07.326668   71929 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:56:07.326681   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:56:07.326700   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:56:07.326724   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:56:07.326767   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:56:07.326828   71929 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:56:07.327702   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:56:07.377671   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:56:02.864980   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:02.865620   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:02.865656   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:02.865503   73097 retry.go:31] will retry after 1.00461953s: waiting for machine to come up
	I0717 01:56:03.871702   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:03.872187   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:03.872215   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:03.872133   73097 retry.go:31] will retry after 1.15731846s: waiting for machine to come up
	I0717 01:56:05.030767   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:05.031263   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:05.031285   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:05.031209   73097 retry.go:31] will retry after 1.704165162s: waiting for machine to come up
	I0717 01:56:06.737975   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:06.738337   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:06.738386   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:06.738307   73097 retry.go:31] will retry after 2.014062128s: waiting for machine to come up
	I0717 01:56:06.326066   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:06.326112   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:05.780615   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:08.281127   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:07.413171   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:56:07.443671   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:56:07.482883   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 01:56:07.527280   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:56:07.571200   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:56:07.612296   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/old-k8s-version-901761/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 01:56:07.638012   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:56:07.662018   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:56:07.688033   71929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:56:07.721827   71929 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:56:07.741517   71929 ssh_runner.go:195] Run: openssl version
	I0717 01:56:07.747466   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:56:07.758615   71929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:56:07.763382   71929 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:56:07.763439   71929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:56:07.769358   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:56:07.781802   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:56:07.792763   71929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:56:07.797629   71929 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:56:07.797681   71929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:56:07.803879   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:56:07.815479   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:56:07.828292   71929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:07.832769   71929 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:07.832829   71929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:07.838958   71929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
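(Editor's note: the openssl -hash call and the ln -fs that follows it register each PEM in the system trust directory -- OpenSSL looks certificates up via subject-hash-named symlinks, which is why minikubeCA.pem ends up reachable as /etc/ssl/certs/b5213941.0. The same two steps for one certificate, as a sketch:)

CERT=/usr/share/ca-certificates/minikubeCA.pem
HASH=$(openssl x509 -hash -noout -in "$CERT")      # prints the subject hash, e.g. b5213941
sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"     # ".0" = first certificate with this hash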
	I0717 01:56:07.850108   71929 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:56:07.854758   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:56:07.860661   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:56:07.866484   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:56:07.872302   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:56:07.878252   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:56:07.884275   71929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
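(Editor's note: each -checkend 86400 call above asks openssl whether the certificate will still be valid 86400 seconds, i.e. 24 hours, from now; a zero exit status means it will. A small loop over the same control-plane certs, purely for illustration.)

for crt in apiserver-kubelet-client apiserver-etcd-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
    openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" -checkend 86400 \
        || echo "${crt}.crt expires within 24h" >&2
done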
	I0717 01:56:07.890148   71929 kubeadm.go:392] StartCluster: {Name:old-k8s-version-901761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:56:07.890264   71929 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:56:07.890343   71929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:56:07.930081   71929 cri.go:89] found id: ""
	I0717 01:56:07.930153   71929 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:56:07.941371   71929 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:56:07.941396   71929 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:56:07.941445   71929 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:56:07.955229   71929 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:56:07.957263   71929 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-901761" does not appear in /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:56:07.959002   71929 kubeconfig.go:62] /home/jenkins/minikube-integration/19264-3908/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-901761" cluster setting kubeconfig missing "old-k8s-version-901761" context setting]
	I0717 01:56:07.960384   71929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:07.962748   71929 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:56:07.973815   71929 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.44
	I0717 01:56:07.973851   71929 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:56:07.973864   71929 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:56:07.973933   71929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:56:08.020169   71929 cri.go:89] found id: ""
	I0717 01:56:08.020247   71929 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:56:08.038015   71929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:56:08.049272   71929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:56:08.049294   71929 kubeadm.go:157] found existing configuration files:
	
	I0717 01:56:08.049336   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:56:08.058953   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:56:08.059025   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:56:08.069034   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:56:08.078748   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:56:08.078817   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:56:08.089660   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:56:08.099521   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:56:08.099583   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:56:08.109831   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:56:08.120340   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:56:08.120400   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:56:08.130884   71929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:56:08.141008   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:08.275189   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:09.006841   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:09.255401   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:09.376659   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
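(Editor's note: the five kubeadm init phase invocations above -- certs, kubeconfig, kubelet-start, control-plane, etcd -- replay the relevant parts of kubeadm init against the freshly copied /var/tmp/minikube/kubeadm.yaml. Gathered into one loop they would look roughly like this; the binary path and version are taken from the log, the loop itself is illustrative.)

CFG=/var/tmp/minikube/kubeadm.yaml
for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
    # $phase is intentionally left unquoted so "certs all" splits into subcommand + argument
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase $phase --config "$CFG"
done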
	I0717 01:56:09.475840   71929 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:56:09.475937   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:09.976926   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:10.476192   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:10.976705   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:11.476386   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:11.976459   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:08.753835   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:08.754316   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:08.754347   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:08.754264   73097 retry.go:31] will retry after 2.005810517s: waiting for machine to come up
	I0717 01:56:10.761600   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:10.762022   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:10.762053   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:10.761980   73097 retry.go:31] will retry after 2.631438855s: waiting for machine to come up
	I0717 01:56:11.327297   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:11.327348   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:10.779534   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:13.278417   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:15.279200   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:12.476819   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:12.976633   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:13.476076   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:13.976279   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:14.476885   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:14.976972   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:15.476823   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:15.976917   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:16.476765   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:16.976609   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
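(Editor's note: the burst of pgrep runs above is the "waiting for apiserver process to appear" loop -- it keeps probing for a kube-apiserver command line mentioning minikube at roughly half-second intervals, per the timestamps. A minimal equivalent, using the same pgrep flags as the log:)

# block until a kube-apiserver process with "minikube" in its command line shows up
until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
    sleep 0.5
done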
	I0717 01:56:13.395592   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:13.395949   71146 main.go:141] libmachine: (embed-certs-940222) DBG | unable to find current IP address of domain embed-certs-940222 in network mk-embed-certs-940222
	I0717 01:56:13.395991   71146 main.go:141] libmachine: (embed-certs-940222) DBG | I0717 01:56:13.395905   73097 retry.go:31] will retry after 3.565162998s: waiting for machine to come up
	I0717 01:56:16.964948   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:16.965424   71146 main.go:141] libmachine: (embed-certs-940222) Found IP for machine: 192.168.72.225
	I0717 01:56:16.965455   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has current primary IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:16.965465   71146 main.go:141] libmachine: (embed-certs-940222) Reserving static IP address...
	I0717 01:56:16.966065   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "embed-certs-940222", mac: "52:54:00:78:d5:92", ip: "192.168.72.225"} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:16.966092   71146 main.go:141] libmachine: (embed-certs-940222) DBG | skip adding static IP to network mk-embed-certs-940222 - found existing host DHCP lease matching {name: "embed-certs-940222", mac: "52:54:00:78:d5:92", ip: "192.168.72.225"}
	I0717 01:56:16.966107   71146 main.go:141] libmachine: (embed-certs-940222) Reserved static IP address: 192.168.72.225
	I0717 01:56:16.966122   71146 main.go:141] libmachine: (embed-certs-940222) Waiting for SSH to be available...
	I0717 01:56:16.966150   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Getting to WaitForSSH function...
	I0717 01:56:16.968287   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:16.968642   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:16.968688   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:16.968758   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Using SSH client type: external
	I0717 01:56:16.968782   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa (-rw-------)
	I0717 01:56:16.968842   71146 main.go:141] libmachine: (embed-certs-940222) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 01:56:16.968872   71146 main.go:141] libmachine: (embed-certs-940222) DBG | About to run SSH command:
	I0717 01:56:16.968888   71146 main.go:141] libmachine: (embed-certs-940222) DBG | exit 0
	I0717 01:56:17.090641   71146 main.go:141] libmachine: (embed-certs-940222) DBG | SSH cmd err, output: <nil>: 
	I0717 01:56:17.091120   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetConfigRaw
	I0717 01:56:17.091720   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetIP
	I0717 01:56:17.094205   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.094541   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.094592   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.094810   71146 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/config.json ...
	I0717 01:56:17.095001   71146 machine.go:94] provisionDockerMachine start ...
	I0717 01:56:17.095022   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:17.095223   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.097395   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.097680   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.097707   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.097848   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.098021   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.098170   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.098311   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.098491   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:17.098683   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:17.098695   71146 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 01:56:17.203054   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 01:56:17.203080   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:56:17.203364   71146 buildroot.go:166] provisioning hostname "embed-certs-940222"
	I0717 01:56:17.203402   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:56:17.203575   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.206404   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.206826   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.206868   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.207076   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.207282   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.207471   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.207611   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.207793   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:17.207985   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:17.207997   71146 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-940222 && echo "embed-certs-940222" | sudo tee /etc/hostname
	I0717 01:56:17.326485   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-940222
	
	I0717 01:56:17.326512   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.329226   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.329629   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.329659   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.329834   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.329996   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.330148   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.330265   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.330417   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:17.330619   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:17.330642   71146 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-940222' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-940222/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-940222' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 01:56:17.439258   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 01:56:17.439285   71146 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 01:56:17.439315   71146 buildroot.go:174] setting up certificates
	I0717 01:56:17.439324   71146 provision.go:84] configureAuth start
	I0717 01:56:17.439332   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetMachineName
	I0717 01:56:17.439656   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetIP
	I0717 01:56:17.442348   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.442765   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.442796   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.442976   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.445418   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.445767   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.445803   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.446000   71146 provision.go:143] copyHostCerts
	I0717 01:56:17.446081   71146 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 01:56:17.446098   71146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 01:56:17.446171   71146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 01:56:17.446265   71146 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 01:56:17.446272   71146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 01:56:17.446292   71146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 01:56:17.446346   71146 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 01:56:17.446353   71146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 01:56:17.446370   71146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 01:56:17.446418   71146 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.embed-certs-940222 san=[127.0.0.1 192.168.72.225 embed-certs-940222 localhost minikube]
	I0717 01:56:17.578140   71146 provision.go:177] copyRemoteCerts
	I0717 01:56:17.578195   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 01:56:17.578221   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.581141   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.581432   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.581457   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.581697   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.581892   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.582038   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.582219   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:17.664867   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 01:56:17.691053   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 01:56:17.715816   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 01:56:17.742153   71146 provision.go:87] duration metric: took 302.817653ms to configureAuth
	I0717 01:56:17.742180   71146 buildroot.go:189] setting minikube options for container-runtime
	I0717 01:56:17.742405   71146 config.go:182] Loaded profile config "embed-certs-940222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:56:17.742486   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:17.745102   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.745369   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:17.745398   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:17.745608   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:17.745820   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.746019   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:17.746209   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:17.746510   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:17.746738   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:17.746761   71146 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 01:56:18.017395   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 01:56:18.017420   71146 machine.go:97] duration metric: took 922.405002ms to provisionDockerMachine
	I0717 01:56:18.017433   71146 start.go:293] postStartSetup for "embed-certs-940222" (driver="kvm2")
	I0717 01:56:18.017449   71146 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 01:56:18.017469   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.017817   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 01:56:18.017846   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:18.020599   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.021051   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.021081   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.021228   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:18.021410   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.021556   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:18.021660   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:18.101432   71146 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 01:56:18.105722   71146 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 01:56:18.105742   71146 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/addons for local assets ...
	I0717 01:56:18.105797   71146 filesync.go:126] Scanning /home/jenkins/minikube-integration/19264-3908/.minikube/files for local assets ...
	I0717 01:56:18.105866   71146 filesync.go:149] local asset: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem -> 112592.pem in /etc/ssl/certs
	I0717 01:56:18.105944   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 01:56:18.115228   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:56:18.139857   71146 start.go:296] duration metric: took 122.411322ms for postStartSetup
	I0717 01:56:18.139924   71146 fix.go:56] duration metric: took 19.608111597s for fixHost
	I0717 01:56:18.139951   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:18.142466   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.142865   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.142886   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.143098   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:18.143262   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.143444   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.143662   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:18.143852   71146 main.go:141] libmachine: Using SSH client type: native
	I0717 01:56:18.144022   71146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.225 22 <nil> <nil>}
	I0717 01:56:18.144033   71146 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 01:56:18.243604   71146 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721181378.218663213
	
	I0717 01:56:18.243635   71146 fix.go:216] guest clock: 1721181378.218663213
	I0717 01:56:18.243644   71146 fix.go:229] Guest: 2024-07-17 01:56:18.218663213 +0000 UTC Remote: 2024-07-17 01:56:18.139933424 +0000 UTC m=+355.354069584 (delta=78.729789ms)
	I0717 01:56:18.243662   71146 fix.go:200] guest clock delta is within tolerance: 78.729789ms
	I0717 01:56:18.243667   71146 start.go:83] releasing machines lock for "embed-certs-940222", held for 19.711916707s
	I0717 01:56:18.243684   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.243952   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetIP
	I0717 01:56:18.246454   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.246881   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.246907   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.247135   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.247618   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.247828   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:18.247919   71146 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 01:56:18.247958   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:18.248050   71146 ssh_runner.go:195] Run: cat /version.json
	I0717 01:56:18.248074   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:18.250520   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.250914   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.250952   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.250973   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.251222   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:18.251403   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.251463   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:18.251495   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:18.251575   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:18.251668   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:18.251747   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:18.251817   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:18.251975   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:18.252103   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:18.351600   71146 ssh_runner.go:195] Run: systemctl --version
	I0717 01:56:18.357586   71146 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 01:56:18.503767   71146 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 01:56:18.511637   71146 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 01:56:18.511724   71146 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 01:56:18.530209   71146 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 01:56:18.530235   71146 start.go:495] detecting cgroup driver to use...
	I0717 01:56:18.530303   71146 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 01:56:18.551740   71146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 01:56:18.566975   71146 docker.go:217] disabling cri-docker service (if available) ...
	I0717 01:56:18.567044   71146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 01:56:18.585100   71146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 01:56:18.601151   71146 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 01:56:18.735644   71146 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 01:56:18.895436   71146 docker.go:233] disabling docker service ...
	I0717 01:56:18.895505   71146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 01:56:18.910354   71146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 01:56:18.922999   71146 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 01:56:19.065365   71146 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 01:56:19.179337   71146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 01:56:19.194454   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 01:56:19.213281   71146 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 01:56:19.213339   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.223531   71146 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 01:56:19.223594   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.233691   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.243695   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.255192   71146 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 01:56:19.266082   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.276861   71146 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.295903   71146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 01:56:19.306114   71146 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 01:56:19.316226   71146 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 01:56:19.316275   71146 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 01:56:19.329402   71146 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 01:56:19.340622   71146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:56:19.456624   71146 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 01:56:19.605945   71146 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 01:56:19.606051   71146 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 01:56:19.611067   71146 start.go:563] Will wait 60s for crictl version
	I0717 01:56:19.611116   71146 ssh_runner.go:195] Run: which crictl
	I0717 01:56:19.615065   71146 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 01:56:19.662925   71146 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 01:56:19.662989   71146 ssh_runner.go:195] Run: crio --version
	I0717 01:56:19.693240   71146 ssh_runner.go:195] Run: crio --version
	I0717 01:56:19.722332   71146 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 01:56:16.328318   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:16.328371   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:17.780821   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:19.780921   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:17.476562   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:17.976663   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:18.476958   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:18.976722   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:19.476641   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:19.976079   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:20.476899   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:20.976553   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:21.476087   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:21.976659   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:19.723930   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetIP
	I0717 01:56:19.726730   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:19.727084   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:19.727107   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:19.727314   71146 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 01:56:19.731814   71146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:56:19.745514   71146 kubeadm.go:883] updating cluster {Name:embed-certs-940222 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-940222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.225 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 01:56:19.745622   71146 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 01:56:19.745677   71146 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:56:19.782922   71146 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 01:56:19.782988   71146 ssh_runner.go:195] Run: which lz4
	I0717 01:56:19.786946   71146 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 01:56:19.791298   71146 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 01:56:19.791323   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 01:56:21.230910   71146 crio.go:462] duration metric: took 1.443984707s to copy over tarball
	I0717 01:56:21.231003   71146 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 01:56:21.328607   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:21.328654   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:21.345118   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": read tcp 192.168.61.1:36190->192.168.61.174:8443: read: connection reset by peer
	I0717 01:56:21.824753   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:21.825500   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": dial tcp 192.168.61.174:8443: connect: connection refused
	I0717 01:56:22.325079   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:22.280465   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:24.779729   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:22.475994   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:22.976928   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:23.476906   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:23.975980   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:24.476208   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:24.976090   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:25.476425   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:25.976072   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:26.476991   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:26.976180   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:23.517174   71146 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.286133857s)
	I0717 01:56:23.517200   71146 crio.go:469] duration metric: took 2.286263798s to extract the tarball
	I0717 01:56:23.517210   71146 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 01:56:23.554084   71146 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 01:56:23.603831   71146 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 01:56:23.603861   71146 cache_images.go:84] Images are preloaded, skipping loading
	I0717 01:56:23.603871   71146 kubeadm.go:934] updating node { 192.168.72.225 8443 v1.30.2 crio true true} ...
	I0717 01:56:23.604004   71146 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-940222 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-940222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 01:56:23.604087   71146 ssh_runner.go:195] Run: crio config
	I0717 01:56:23.658775   71146 cni.go:84] Creating CNI manager for ""
	I0717 01:56:23.658794   71146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:56:23.658803   71146 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 01:56:23.658826   71146 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.225 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-940222 NodeName:embed-certs-940222 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 01:56:23.659007   71146 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-940222"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.225
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.225"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 01:56:23.659092   71146 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 01:56:23.669971   71146 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 01:56:23.670042   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 01:56:23.680949   71146 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0717 01:56:23.698917   71146 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 01:56:23.716218   71146 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0717 01:56:23.733971   71146 ssh_runner.go:195] Run: grep 192.168.72.225	control-plane.minikube.internal$ /etc/hosts
	I0717 01:56:23.738112   71146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.225	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 01:56:23.750915   71146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:56:23.894690   71146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:56:23.913418   71146 certs.go:68] Setting up /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222 for IP: 192.168.72.225
	I0717 01:56:23.913440   71146 certs.go:194] generating shared ca certs ...
	I0717 01:56:23.913456   71146 certs.go:226] acquiring lock for ca certs: {Name:mkc74cfd5a99c9dac635b1a438fee23ce449c2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:23.913630   71146 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key
	I0717 01:56:23.913703   71146 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key
	I0717 01:56:23.913729   71146 certs.go:256] generating profile certs ...
	I0717 01:56:23.913856   71146 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/client.key
	I0717 01:56:23.913926   71146 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/apiserver.key.d13a776d
	I0717 01:56:23.913968   71146 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/proxy-client.key
	I0717 01:56:23.914081   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem (1338 bytes)
	W0717 01:56:23.914123   71146 certs.go:480] ignoring /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259_empty.pem, impossibly tiny 0 bytes
	I0717 01:56:23.914134   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 01:56:23.914161   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem (1078 bytes)
	I0717 01:56:23.914188   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem (1123 bytes)
	I0717 01:56:23.914214   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem (1675 bytes)
	I0717 01:56:23.914256   71146 certs.go:484] found cert: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem (1708 bytes)
	I0717 01:56:23.914925   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 01:56:23.961346   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 01:56:24.006765   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 01:56:24.036852   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 01:56:24.064984   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0717 01:56:24.090778   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 01:56:24.116146   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 01:56:24.142429   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/embed-certs-940222/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 01:56:24.168427   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 01:56:24.193691   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/11259.pem --> /usr/share/ca-certificates/11259.pem (1338 bytes)
	I0717 01:56:24.218852   71146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/ssl/certs/112592.pem --> /usr/share/ca-certificates/112592.pem (1708 bytes)
	I0717 01:56:24.242932   71146 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 01:56:24.261434   71146 ssh_runner.go:195] Run: openssl version
	I0717 01:56:24.267358   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11259.pem && ln -fs /usr/share/ca-certificates/11259.pem /etc/ssl/certs/11259.pem"
	I0717 01:56:24.280319   71146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11259.pem
	I0717 01:56:24.285286   71146 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:37 /usr/share/ca-certificates/11259.pem
	I0717 01:56:24.285358   71146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11259.pem
	I0717 01:56:24.291896   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11259.pem /etc/ssl/certs/51391683.0"
	I0717 01:56:24.304027   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112592.pem && ln -fs /usr/share/ca-certificates/112592.pem /etc/ssl/certs/112592.pem"
	I0717 01:56:24.315542   71146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112592.pem
	I0717 01:56:24.320212   71146 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:37 /usr/share/ca-certificates/112592.pem
	I0717 01:56:24.320283   71146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112592.pem
	I0717 01:56:24.326123   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112592.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 01:56:24.339982   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 01:56:24.352301   71146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:24.357023   71146 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:24.357078   71146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 01:56:24.363112   71146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 01:56:24.375910   71146 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 01:56:24.380986   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 01:56:24.387276   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 01:56:24.393718   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 01:56:24.400367   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 01:56:24.406600   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 01:56:24.413161   71146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 01:56:24.420455   71146 kubeadm.go:392] StartCluster: {Name:embed-certs-940222 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-940222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.225 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 01:56:24.420578   71146 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 01:56:24.420643   71146 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:56:24.460702   71146 cri.go:89] found id: ""
	I0717 01:56:24.460792   71146 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 01:56:24.472047   71146 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 01:56:24.472064   71146 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 01:56:24.472105   71146 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 01:56:24.483092   71146 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:56:24.484146   71146 kubeconfig.go:125] found "embed-certs-940222" server: "https://192.168.72.225:8443"
	I0717 01:56:24.486112   71146 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 01:56:24.497462   71146 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.225
	I0717 01:56:24.497496   71146 kubeadm.go:1160] stopping kube-system containers ...
	I0717 01:56:24.497511   71146 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 01:56:24.497571   71146 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 01:56:24.541423   71146 cri.go:89] found id: ""
	I0717 01:56:24.541486   71146 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 01:56:24.563272   71146 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 01:56:24.574859   71146 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 01:56:24.574883   71146 kubeadm.go:157] found existing configuration files:
	
	I0717 01:56:24.574930   71146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 01:56:24.584960   71146 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 01:56:24.585022   71146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 01:56:24.595950   71146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 01:56:24.605686   71146 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 01:56:24.605775   71146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 01:56:24.616191   71146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 01:56:24.625954   71146 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 01:56:24.626009   71146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 01:56:24.636254   71146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 01:56:24.648853   71146 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 01:56:24.648961   71146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 01:56:24.660491   71146 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 01:56:24.675329   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:24.795437   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:25.895383   71146 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.099913319s)
	I0717 01:56:25.895411   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:26.116274   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:26.286149   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:26.355208   71146 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:56:26.355296   71146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:26.855578   71146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:27.355880   71146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:27.371616   71146 api_server.go:72] duration metric: took 1.016410291s to wait for apiserver process to appear ...
	I0717 01:56:27.371642   71146 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:56:27.371671   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:27.325875   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:27.325920   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:26.780264   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:29.279376   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:29.836783   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:29.836811   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:29.836823   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:29.883657   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:29.883684   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:29.883695   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:29.895244   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:29.895270   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:30.371799   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:30.375903   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:56:30.375926   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:56:30.872627   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:30.876799   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:56:30.876830   71146 api_server.go:103] status: https://192.168.72.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:56:31.372402   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 01:56:31.376723   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 200:
	ok
	I0717 01:56:31.382638   71146 api_server.go:141] control plane version: v1.30.2
	I0717 01:56:31.382663   71146 api_server.go:131] duration metric: took 4.011014381s to wait for apiserver health ...
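The 403 and 500 responses logged above are the normal progression of an apiserver coming back up: anonymous /healthz requests are rejected until the RBAC bootstrap roles exist, then individual poststarthooks ([-]poststarthook/rbac/bootstrap-roles, [-]poststarthook/scheduling/bootstrap-system-priority-classes) still report failures, and finally the endpoint returns 200 "ok". The following is a minimal Go sketch of that kind of poll loop, not minikube's actual api_server.go implementation; the URL, timeout and the decision to skip TLS verification are assumptions for illustration.

    // healthz_poll.go: poll an apiserver /healthz endpoint until it returns 200 "ok".
    // Sketch only; minikube's real logic lives in api_server.go and handles auth and retries differently.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        // Assumption: the apiserver cert is self-signed here, so the probe skips verification.
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                // 403 (anonymous user) and 500 (poststarthooks still failing) both mean "keep polling".
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz returned 200: %s\n", body)
                    return nil
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s to report ok", url)
    }

    func main() {
        if err := waitForHealthz("https://192.168.72.225:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }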
	I0717 01:56:31.382672   71146 cni.go:84] Creating CNI manager for ""
	I0717 01:56:31.382679   71146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:56:31.384436   71146 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:56:27.476313   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:27.976700   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:28.476585   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:28.976008   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:29.477040   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:29.976892   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:30.476912   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:30.976626   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:31.476786   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:31.976148   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:31.385974   71146 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:56:31.396977   71146 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
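Configuring the bridge CNI amounts to the two commands just logged: create /etc/cni/net.d on the node and copy a conflist into it. The sketch below shows roughly what writing such a file looks like; the JSON is a generic bridge-plus-portmap chain and the subnet is a placeholder, not the exact 496-byte 1-k8s.conflist that minikube generates.

    // write_cni_conflist.go: drop a bridge CNI conflist into /etc/cni/net.d,
    // roughly what the "scp memory --> /etc/cni/net.d/1-k8s.conflist" step does over SSH.
    // The conflist content here is an illustrative example, not minikube's exact file.
    package main

    import (
        "log"
        "os"
        "path/filepath"
    )

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        dir := "/etc/cni/net.d"
        if err := os.MkdirAll(dir, 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(conflist), 0o644); err != nil {
            log.Fatal(err)
        }
    }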
	I0717 01:56:31.415740   71146 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:56:31.425268   71146 system_pods.go:59] 8 kube-system pods found
	I0717 01:56:31.425306   71146 system_pods.go:61] "coredns-7db6d8ff4d-wcw97" [0dd50538-f54d-43f1-bd8a-b9d3131c13f7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:56:31.425313   71146 system_pods.go:61] "etcd-embed-certs-940222" [80a0a87b-4b27-4940-b86b-6fda4a9c5168] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 01:56:31.425320   71146 system_pods.go:61] "kube-apiserver-embed-certs-940222" [566417b4-efef-46b2-8826-e15b8559e35f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 01:56:31.425328   71146 system_pods.go:61] "kube-controller-manager-embed-certs-940222" [0ec7f574-5ec3-401c-85a9-b9a9f6b7b979] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 01:56:31.425332   71146 system_pods.go:61] "kube-proxy-l58xk" [feae4e89-4900-4399-bd06-7d179280667d] Running
	I0717 01:56:31.425337   71146 system_pods.go:61] "kube-scheduler-embed-certs-940222" [d0e73061-6d3b-41fc-b2ed-e8c45e204d6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 01:56:31.425344   71146 system_pods.go:61] "metrics-server-569cc877fc-rhp7b" [07ffb1fa-240e-4c40-9ce4-93a1b51e179b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:56:31.425350   71146 system_pods.go:61] "storage-provisioner" [35aab5a5-6e1b-4572-aabe-a73fb1632252] Running
	I0717 01:56:31.425360   71146 system_pods.go:74] duration metric: took 9.598959ms to wait for pod list to return data ...
	I0717 01:56:31.425368   71146 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:56:31.429053   71146 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:56:31.429075   71146 node_conditions.go:123] node cpu capacity is 2
	I0717 01:56:31.429084   71146 node_conditions.go:105] duration metric: took 3.710466ms to run NodePressure ...
	I0717 01:56:31.429098   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:31.699456   71146 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 01:56:31.703803   71146 kubeadm.go:739] kubelet initialised
	I0717 01:56:31.703825   71146 kubeadm.go:740] duration metric: took 4.345324ms waiting for restarted kubelet to initialise ...
	I0717 01:56:31.703835   71146 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:56:31.708962   71146 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:31.712850   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.712871   71146 pod_ready.go:81] duration metric: took 3.888169ms for pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:31.712879   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.712891   71146 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:31.717134   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "etcd-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.717156   71146 pod_ready.go:81] duration metric: took 4.256764ms for pod "etcd-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:31.717163   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "etcd-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.717169   71146 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:31.721479   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.721498   71146 pod_ready.go:81] duration metric: took 4.321032ms for pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:31.721508   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.721515   71146 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:31.819188   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.819217   71146 pod_ready.go:81] duration metric: took 97.692306ms for pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:31.819226   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:31.819231   71146 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l58xk" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:32.219730   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "kube-proxy-l58xk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:32.219766   71146 pod_ready.go:81] duration metric: took 400.526796ms for pod "kube-proxy-l58xk" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:32.219775   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "kube-proxy-l58xk" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:32.219782   71146 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:32.619930   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:32.619961   71146 pod_ready.go:81] duration metric: took 400.172543ms for pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:32.619971   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:32.619978   71146 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:33.019223   71146 pod_ready.go:97] node "embed-certs-940222" hosting pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:33.019252   71146 pod_ready.go:81] duration metric: took 399.266573ms for pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace to be "Ready" ...
	E0717 01:56:33.019263   71146 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-940222" hosting pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:33.019271   71146 pod_ready.go:38] duration metric: took 1.315427432s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:56:33.019291   71146 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 01:56:33.032094   71146 ops.go:34] apiserver oom_adj: -16
	I0717 01:56:33.032116   71146 kubeadm.go:597] duration metric: took 8.56004698s to restartPrimaryControlPlane
	I0717 01:56:33.032125   71146 kubeadm.go:394] duration metric: took 8.611681052s to StartCluster
	I0717 01:56:33.032140   71146 settings.go:142] acquiring lock: {Name:mk284e98550e98148329d8d8958a44beebf5635c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:33.032204   71146 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:56:33.033963   71146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 01:56:33.034198   71146 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.225 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 01:56:33.034337   71146 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 01:56:33.034405   71146 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-940222"
	I0717 01:56:33.034425   71146 addons.go:69] Setting metrics-server=true in profile "embed-certs-940222"
	I0717 01:56:33.034467   71146 addons.go:234] Setting addon metrics-server=true in "embed-certs-940222"
	W0717 01:56:33.034481   71146 addons.go:243] addon metrics-server should already be in state true
	I0717 01:56:33.034516   71146 host.go:66] Checking if "embed-certs-940222" exists ...
	I0717 01:56:33.034465   71146 addons.go:69] Setting default-storageclass=true in profile "embed-certs-940222"
	I0717 01:56:33.034469   71146 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-940222"
	I0717 01:56:33.034589   71146 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-940222"
	W0717 01:56:33.034632   71146 addons.go:243] addon storage-provisioner should already be in state true
	I0717 01:56:33.034411   71146 config.go:182] Loaded profile config "embed-certs-940222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:56:33.034725   71146 host.go:66] Checking if "embed-certs-940222" exists ...
	I0717 01:56:33.034963   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.034992   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.035052   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.035093   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.035199   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.035237   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.036051   71146 out.go:177] * Verifying Kubernetes components...
	I0717 01:56:33.037606   71146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 01:56:33.051343   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46137
	I0717 01:56:33.051970   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.052483   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.052516   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.052671   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41003
	I0717 01:56:33.052887   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.053016   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.053397   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.053443   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.053760   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.053775   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35669
	I0717 01:56:33.053779   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.054125   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.054139   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.054336   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:56:33.054625   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.054656   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.054984   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.055524   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.055563   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.057648   71146 addons.go:234] Setting addon default-storageclass=true in "embed-certs-940222"
	W0717 01:56:33.057668   71146 addons.go:243] addon default-storageclass should already be in state true
	I0717 01:56:33.057699   71146 host.go:66] Checking if "embed-certs-940222" exists ...
	I0717 01:56:33.058003   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.058036   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.070476   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38209
	I0717 01:56:33.070717   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I0717 01:56:33.071094   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.071289   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.071648   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.071665   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.071841   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.071863   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.072171   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.072293   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.072357   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:56:33.072581   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:56:33.073298   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46391
	I0717 01:56:33.073745   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.074224   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.074237   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.074585   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.074690   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:33.075032   71146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:56:33.075054   71146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:56:33.075361   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:33.077495   71146 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 01:56:33.077496   71146 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 01:56:33.079446   71146 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 01:56:33.079460   71146 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 01:56:33.079480   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:33.080373   71146 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:56:33.080386   71146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 01:56:33.080401   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:33.083272   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.083527   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.083623   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:33.083641   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.083899   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:33.084099   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:33.084168   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:33.084184   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.084273   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:33.084331   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:33.084463   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:33.084748   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:33.084890   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:33.085028   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:33.092382   71146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41853
	I0717 01:56:33.092826   71146 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:56:33.093401   71146 main.go:141] libmachine: Using API Version  1
	I0717 01:56:33.093418   71146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:56:33.094409   71146 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:56:33.094576   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetState
	I0717 01:56:33.096442   71146 main.go:141] libmachine: (embed-certs-940222) Calling .DriverName
	I0717 01:56:33.096730   71146 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 01:56:33.096750   71146 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 01:56:33.096768   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHHostname
	I0717 01:56:33.099802   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.100290   71146 main.go:141] libmachine: (embed-certs-940222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:d5:92", ip: ""} in network mk-embed-certs-940222: {Iface:virbr4 ExpiryTime:2024-07-17 02:56:10 +0000 UTC Type:0 Mac:52:54:00:78:d5:92 Iaid: IPaddr:192.168.72.225 Prefix:24 Hostname:embed-certs-940222 Clientid:01:52:54:00:78:d5:92}
	I0717 01:56:33.100368   71146 main.go:141] libmachine: (embed-certs-940222) DBG | domain embed-certs-940222 has defined IP address 192.168.72.225 and MAC address 52:54:00:78:d5:92 in network mk-embed-certs-940222
	I0717 01:56:33.100472   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHPort
	I0717 01:56:33.100625   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHKeyPath
	I0717 01:56:33.100760   71146 main.go:141] libmachine: (embed-certs-940222) Calling .GetSSHUsername
	I0717 01:56:33.100849   71146 sshutil.go:53] new ssh client: &{IP:192.168.72.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/embed-certs-940222/id_rsa Username:docker}
	I0717 01:56:33.229494   71146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 01:56:33.246459   71146 node_ready.go:35] waiting up to 6m0s for node "embed-certs-940222" to be "Ready" ...
	I0717 01:56:33.400804   71146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 01:56:33.400824   71146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 01:56:33.411866   71146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 01:56:33.413220   71146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 01:56:33.426485   71146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 01:56:33.426506   71146 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 01:56:33.476707   71146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:56:33.476729   71146 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 01:56:33.539095   71146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 01:56:34.542027   71146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.130125192s)
	I0717 01:56:34.542089   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.542102   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.542103   71146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.128853338s)
	I0717 01:56:34.542139   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.542151   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.542420   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Closing plugin on server side
	I0717 01:56:34.542442   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.542442   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Closing plugin on server side
	I0717 01:56:34.542447   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.542450   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.542468   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.542474   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.542483   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.542505   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.542517   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.542711   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.542727   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.542715   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Closing plugin on server side
	I0717 01:56:34.542835   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.542847   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.549135   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.549160   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.549405   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.549428   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.616065   71146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.076933862s)
	I0717 01:56:34.616127   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.616142   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.616429   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.616479   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.616489   71146 main.go:141] libmachine: (embed-certs-940222) DBG | Closing plugin on server side
	I0717 01:56:34.616499   71146 main.go:141] libmachine: Making call to close driver server
	I0717 01:56:34.616541   71146 main.go:141] libmachine: (embed-certs-940222) Calling .Close
	I0717 01:56:34.616784   71146 main.go:141] libmachine: Successfully made call to close driver server
	I0717 01:56:34.616800   71146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 01:56:34.616810   71146 addons.go:475] Verifying addon metrics-server=true in "embed-certs-940222"
	I0717 01:56:34.619698   71146 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 01:56:32.326261   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:32.326310   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:31.779064   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:33.780671   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:32.475986   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:32.976812   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:33.476601   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:33.976667   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:34.476897   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:34.976610   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:35.476444   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:35.976859   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:36.476092   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:36.976979   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:34.620987   71146 addons.go:510] duration metric: took 1.586659462s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
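With storage-provisioner, default-storageclass and metrics-server enabled, the run moves on to waiting for the node and the system pods. One external way to watch the metrics-server addon become usable is to check whether the metrics.k8s.io API group is served; the sketch below does that with client-go discovery. This is an illustration only, not the verification addons.go itself performs, and the kubeconfig path is assumed to currently select the embed-certs-940222 context.

    // metrics_api_check.go: look for the metrics.k8s.io API group via discovery.
    // Sketch under stated assumptions; not minikube's own addon verification.
    package main

    import (
        "fmt"
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19264-3908/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        groups, err := client.Discovery().ServerGroups()
        if err != nil {
            log.Fatal(err)
        }
        for _, g := range groups.Groups {
            if g.Name == "metrics.k8s.io" {
                fmt.Println("metrics.k8s.io API group is registered")
                return
            }
        }
        fmt.Println("metrics.k8s.io API group not yet available")
    }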
	I0717 01:56:35.250360   71146 node_ready.go:53] node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:37.251933   71146 node_ready.go:53] node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:37.326685   71603 api_server.go:269] stopped: https://192.168.61.174:8443/healthz: Get "https://192.168.61.174:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 01:56:37.326726   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:39.977828   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:39.977860   71603 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:39.977877   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:40.002499   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 01:56:40.002532   71603 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 01:56:36.280516   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:38.779351   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:40.324290   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:40.329888   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:56:40.329914   71603 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:56:40.824413   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:40.831375   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 01:56:40.831407   71603 api_server.go:103] status: https://192.168.61.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 01:56:41.324677   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 01:56:41.333259   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 200:
	ok
	I0717 01:56:41.341378   71603 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 01:56:41.341426   71603 api_server.go:131] duration metric: took 40.517438405s to wait for apiserver health ...
	I0717 01:56:41.341438   71603 cni.go:84] Creating CNI manager for ""
	I0717 01:56:41.341447   71603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 01:56:41.343489   71603 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 01:56:37.476813   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:37.976779   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:38.476554   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:38.976791   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:39.476946   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:39.976044   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:40.476526   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:40.976315   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:41.476688   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:41.976203   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:39.750483   71146 node_ready.go:53] node "embed-certs-940222" has status "Ready":"False"
	I0717 01:56:40.249907   71146 node_ready.go:49] node "embed-certs-940222" has status "Ready":"True"
	I0717 01:56:40.249934   71146 node_ready.go:38] duration metric: took 7.003442258s for node "embed-certs-940222" to be "Ready" ...
	I0717 01:56:40.249945   71146 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:56:40.255811   71146 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:40.762773   71146 pod_ready.go:92] pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:40.762795   71146 pod_ready.go:81] duration metric: took 506.956885ms for pod "coredns-7db6d8ff4d-wcw97" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:40.762806   71146 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:42.768945   71146 pod_ready.go:102] pod "etcd-embed-certs-940222" in "kube-system" namespace has status "Ready":"False"
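The pod_ready.go lines above poll each system pod for a Ready condition of "True" (and, earlier, skip pods while the node itself reported Ready=False). A minimal client-go sketch of that condition check follows; the kubeconfig path is the one from this run and is assumed to point at the embed-certs-940222 context, and minikube's actual retry and skip behaviour is more involved than shown.

    // pod_ready_sketch.go: report whether a pod's Ready condition is True,
    // the condition pod_ready.go waits on above. Names and paths are placeholders.
    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Assumption: this kubeconfig's current context is the cluster under test.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19264-3908/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-940222", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("pod %s Ready=%v\n", pod.Name, podReady(pod))
    }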
	I0717 01:56:41.344846   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 01:56:41.360339   71603 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 01:56:41.385845   71603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:56:41.409812   71603 system_pods.go:59] 8 kube-system pods found
	I0717 01:56:41.409843   71603 system_pods.go:61] "coredns-5cfdc65f69-ztqz8" [7c9caec8-56b6-4faa-9410-0528f108696c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 01:56:41.409849   71603 system_pods.go:61] "etcd-no-preload-391501" [603f01a1-2b07-4d1d-be14-4da4a9f1e1b2] Running
	I0717 01:56:41.409854   71603 system_pods.go:61] "kube-apiserver-no-preload-391501" [7733c5b6-5e30-472b-920d-3849f2849f7b] Running
	I0717 01:56:41.409860   71603 system_pods.go:61] "kube-controller-manager-no-preload-391501" [c1afab7e-9b46-4940-94ec-e62ebc10f406] Running
	I0717 01:56:41.409865   71603 system_pods.go:61] "kube-proxy-zbqhw" [26056c12-35cd-4a3e-b40a-1eca055bd1e2] Running
	I0717 01:56:41.409869   71603 system_pods.go:61] "kube-scheduler-no-preload-391501" [98f81994-9d2a-45b8-9719-90e181ee5d6f] Running
	I0717 01:56:41.409877   71603 system_pods.go:61] "metrics-server-78fcd8795b-g9x96" [86a6a2c3-ae04-486d-9751-0cc801f9fbfb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:56:41.409887   71603 system_pods.go:61] "storage-provisioner" [8b938905-d8e1-4129-8426-5e31a05d38db] Running
	I0717 01:56:41.409895   71603 system_pods.go:74] duration metric: took 24.018074ms to wait for pod list to return data ...
	I0717 01:56:41.409906   71603 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:56:41.418825   71603 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:56:41.418856   71603 node_conditions.go:123] node cpu capacity is 2
	I0717 01:56:41.418868   71603 node_conditions.go:105] duration metric: took 8.953821ms to run NodePressure ...
	I0717 01:56:41.418892   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 01:56:41.713730   71603 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 01:56:41.719162   71603 retry.go:31] will retry after 180.435127ms: kubelet not initialised
	I0717 01:56:41.906299   71603 retry.go:31] will retry after 320.946038ms: kubelet not initialised
	I0717 01:56:42.232875   71603 retry.go:31] will retry after 423.072333ms: kubelet not initialised
	I0717 01:56:42.661412   71603 retry.go:31] will retry after 1.138026932s: kubelet not initialised
	I0717 01:56:43.809525   71603 retry.go:31] will retry after 1.187704503s: kubelet not initialised
	I0717 01:56:45.009815   71603 kubeadm.go:739] kubelet initialised
	I0717 01:56:45.009839   71603 kubeadm.go:740] duration metric: took 3.296082732s waiting for restarted kubelet to initialise ...
	I0717 01:56:45.009850   71603 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:56:45.021149   71603 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:40.780159   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:43.279699   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:45.280407   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:42.476301   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:42.976939   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:43.477021   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:43.976910   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:44.476766   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:44.976415   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:45.476987   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:45.976666   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:46.476735   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:46.976643   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:44.770078   71146 pod_ready.go:102] pod "etcd-embed-certs-940222" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:47.269496   71146 pod_ready.go:92] pod "etcd-embed-certs-940222" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.269524   71146 pod_ready.go:81] duration metric: took 6.506711113s for pod "etcd-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.269538   71146 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.277267   71146 pod_ready.go:92] pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.277294   71146 pod_ready.go:81] duration metric: took 7.747271ms for pod "kube-apiserver-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.277309   71146 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.286697   71146 pod_ready.go:92] pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.286715   71146 pod_ready.go:81] duration metric: took 9.397698ms for pod "kube-controller-manager-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.286723   71146 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l58xk" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.291876   71146 pod_ready.go:92] pod "kube-proxy-l58xk" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.291897   71146 pod_ready.go:81] duration metric: took 5.168432ms for pod "kube-proxy-l58xk" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.291905   71146 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.296201   71146 pod_ready.go:92] pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace has status "Ready":"True"
	I0717 01:56:47.296215   71146 pod_ready.go:81] duration metric: took 4.304055ms for pod "kube-scheduler-embed-certs-940222" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.296222   71146 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace to be "Ready" ...
	I0717 01:56:47.027495   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:49.028127   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:47.779497   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:50.279065   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:47.476576   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:47.976502   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:48.476634   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:48.976299   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:49.476069   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:49.976086   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:50.476859   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:50.976441   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:51.476217   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:51.976585   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:49.303729   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:51.802778   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:51.029194   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:53.528363   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:52.778915   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:54.780173   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:52.476652   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:52.976136   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:53.476991   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:53.976168   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:54.477049   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:54.976279   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:55.476176   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:55.976049   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:56.476464   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:56.976802   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:54.308491   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:56.802797   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:55.528547   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:57.533612   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:00.030406   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:57.278908   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:59.279393   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:56:57.476661   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:57.976021   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:58.477049   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:58.976940   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:59.476773   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:59.976397   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:00.476591   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:00.976189   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:01.476917   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:01.976263   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:56:58.806045   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:00.807112   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:02.529203   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:05.028677   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:01.779903   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:03.780163   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:02.476048   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:02.976019   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:03.476604   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:03.976602   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:04.477004   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:04.976726   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:05.476934   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:05.975985   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:06.476331   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:06.976185   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:03.302031   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:05.303601   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:07.803763   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:07.528021   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:09.528499   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:05.780204   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:08.279630   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:07.476887   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:07.975972   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:08.476034   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:08.976678   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:09.476927   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:09.477010   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:09.513328   71929 cri.go:89] found id: ""
	I0717 01:57:09.513352   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.513361   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:09.513368   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:09.513418   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:09.551203   71929 cri.go:89] found id: ""
	I0717 01:57:09.551228   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.551237   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:09.551244   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:09.551308   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:09.585321   71929 cri.go:89] found id: ""
	I0717 01:57:09.585352   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.585363   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:09.585370   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:09.585427   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:09.623977   71929 cri.go:89] found id: ""
	I0717 01:57:09.624004   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.624012   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:09.624019   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:09.624078   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:09.663338   71929 cri.go:89] found id: ""
	I0717 01:57:09.663367   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.663374   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:09.663380   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:09.663425   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:09.696381   71929 cri.go:89] found id: ""
	I0717 01:57:09.696412   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.696423   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:09.696436   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:09.696482   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:09.735892   71929 cri.go:89] found id: ""
	I0717 01:57:09.735922   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.735932   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:09.735944   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:09.736006   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:09.775878   71929 cri.go:89] found id: ""
	I0717 01:57:09.775909   71929 logs.go:276] 0 containers: []
	W0717 01:57:09.775919   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:09.775929   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:09.775942   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:09.830021   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:09.830057   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:09.844753   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:09.844783   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:09.985140   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:09.985165   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:09.985179   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:10.049946   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:10.049984   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:10.310038   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:12.805565   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:11.529122   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:14.028939   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:10.779935   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:13.278388   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:15.280027   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:12.592959   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:12.608385   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:12.608467   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:12.649900   71929 cri.go:89] found id: ""
	I0717 01:57:12.649931   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.649942   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:12.649950   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:12.650021   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:12.684915   71929 cri.go:89] found id: ""
	I0717 01:57:12.684941   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.684948   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:12.684956   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:12.685010   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:12.727718   71929 cri.go:89] found id: ""
	I0717 01:57:12.727758   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.727766   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:12.727788   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:12.727864   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:12.767212   71929 cri.go:89] found id: ""
	I0717 01:57:12.767236   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.767244   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:12.767249   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:12.767295   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:12.806301   71929 cri.go:89] found id: ""
	I0717 01:57:12.806320   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.806327   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:12.806332   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:12.806405   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:12.843118   71929 cri.go:89] found id: ""
	I0717 01:57:12.843151   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.843162   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:12.843170   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:12.843245   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:12.876671   71929 cri.go:89] found id: ""
	I0717 01:57:12.876697   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.876707   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:12.876714   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:12.876790   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:12.916201   71929 cri.go:89] found id: ""
	I0717 01:57:12.916226   71929 logs.go:276] 0 containers: []
	W0717 01:57:12.916232   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:12.916240   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:12.916250   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:12.970346   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:12.970385   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:12.985029   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:12.985053   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:13.068314   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:13.068340   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:13.068352   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:13.147862   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:13.147897   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:15.703130   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:15.717081   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:15.717160   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:15.757513   71929 cri.go:89] found id: ""
	I0717 01:57:15.757538   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.757545   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:15.757552   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:15.757599   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:15.794185   71929 cri.go:89] found id: ""
	I0717 01:57:15.794218   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.794231   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:15.794238   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:15.794300   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:15.830589   71929 cri.go:89] found id: ""
	I0717 01:57:15.830619   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.830628   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:15.830634   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:15.830694   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:15.869673   71929 cri.go:89] found id: ""
	I0717 01:57:15.869702   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.869713   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:15.869720   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:15.869782   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:15.909225   71929 cri.go:89] found id: ""
	I0717 01:57:15.909257   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.909267   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:15.909278   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:15.909343   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:15.944389   71929 cri.go:89] found id: ""
	I0717 01:57:15.944417   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.944424   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:15.944430   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:15.944490   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:15.982871   71929 cri.go:89] found id: ""
	I0717 01:57:15.982898   71929 logs.go:276] 0 containers: []
	W0717 01:57:15.982907   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:15.982915   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:15.982983   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:16.025674   71929 cri.go:89] found id: ""
	I0717 01:57:16.025701   71929 logs.go:276] 0 containers: []
	W0717 01:57:16.025711   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:16.025721   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:16.025736   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:16.111608   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:16.111627   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:16.111638   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:16.184650   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:16.184689   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:16.230647   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:16.230693   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:16.286675   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:16.286710   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:15.303141   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:17.304891   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:16.029794   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:18.529463   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:17.780034   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:20.279882   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:18.802487   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:18.817483   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:18.817562   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:18.861623   71929 cri.go:89] found id: ""
	I0717 01:57:18.861653   71929 logs.go:276] 0 containers: []
	W0717 01:57:18.861664   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:18.861671   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:18.861733   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:18.901335   71929 cri.go:89] found id: ""
	I0717 01:57:18.901359   71929 logs.go:276] 0 containers: []
	W0717 01:57:18.901367   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:18.901372   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:18.901427   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:18.936477   71929 cri.go:89] found id: ""
	I0717 01:57:18.936508   71929 logs.go:276] 0 containers: []
	W0717 01:57:18.936518   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:18.936524   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:18.936581   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:18.971056   71929 cri.go:89] found id: ""
	I0717 01:57:18.971087   71929 logs.go:276] 0 containers: []
	W0717 01:57:18.971098   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:18.971106   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:18.971157   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:19.005399   71929 cri.go:89] found id: ""
	I0717 01:57:19.005431   71929 logs.go:276] 0 containers: []
	W0717 01:57:19.005453   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:19.005460   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:19.005525   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:19.040218   71929 cri.go:89] found id: ""
	I0717 01:57:19.040242   71929 logs.go:276] 0 containers: []
	W0717 01:57:19.040250   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:19.040257   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:19.040317   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:19.073365   71929 cri.go:89] found id: ""
	I0717 01:57:19.073392   71929 logs.go:276] 0 containers: []
	W0717 01:57:19.073402   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:19.073409   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:19.073471   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:19.108670   71929 cri.go:89] found id: ""
	I0717 01:57:19.108701   71929 logs.go:276] 0 containers: []
	W0717 01:57:19.108713   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:19.108725   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:19.108743   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:19.186077   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:19.186111   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:19.232181   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:19.232214   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:19.288713   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:19.288755   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:19.303089   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:19.303115   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:19.386372   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:21.886666   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:21.900905   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:21.900966   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:21.934955   71929 cri.go:89] found id: ""
	I0717 01:57:21.934979   71929 logs.go:276] 0 containers: []
	W0717 01:57:21.934987   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:21.934993   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:21.935036   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:21.972180   71929 cri.go:89] found id: ""
	I0717 01:57:21.972203   71929 logs.go:276] 0 containers: []
	W0717 01:57:21.972211   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:21.972217   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:21.972271   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:22.010452   71929 cri.go:89] found id: ""
	I0717 01:57:22.010479   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.010487   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:22.010493   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:22.010547   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:22.045824   71929 cri.go:89] found id: ""
	I0717 01:57:22.045888   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.045902   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:22.045911   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:22.045984   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:22.084734   71929 cri.go:89] found id: ""
	I0717 01:57:22.084760   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.084769   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:22.084774   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:22.084842   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:22.119808   71929 cri.go:89] found id: ""
	I0717 01:57:22.119838   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.119846   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:22.119852   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:22.119910   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:22.157537   71929 cri.go:89] found id: ""
	I0717 01:57:22.157583   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.157610   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:22.157620   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:22.157687   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:22.196021   71929 cri.go:89] found id: ""
	I0717 01:57:22.196052   71929 logs.go:276] 0 containers: []
	W0717 01:57:22.196062   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:22.196079   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:22.196094   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:22.274350   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:22.274373   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:22.274386   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:22.364363   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:22.364401   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:19.803506   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:22.306698   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:21.028767   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:23.527943   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:24.529027   71603 pod_ready.go:92] pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.529064   71603 pod_ready.go:81] duration metric: took 39.50788355s for pod "coredns-5cfdc65f69-ztqz8" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.529078   71603 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.534655   71603 pod_ready.go:92] pod "etcd-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.534680   71603 pod_ready.go:81] duration metric: took 5.594492ms for pod "etcd-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.534691   71603 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.539602   71603 pod_ready.go:92] pod "kube-apiserver-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.539622   71603 pod_ready.go:81] duration metric: took 4.923891ms for pod "kube-apiserver-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.539631   71603 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.544475   71603 pod_ready.go:92] pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.544516   71603 pod_ready.go:81] duration metric: took 4.862078ms for pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.544532   71603 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zbqhw" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.549173   71603 pod_ready.go:92] pod "kube-proxy-zbqhw" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.549193   71603 pod_ready.go:81] duration metric: took 4.653986ms for pod "kube-proxy-zbqhw" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.549203   71603 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.925916   71603 pod_ready.go:92] pod "kube-scheduler-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 01:57:24.925944   71603 pod_ready.go:81] duration metric: took 376.73343ms for pod "kube-scheduler-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:24.925959   71603 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace to be "Ready" ...
	I0717 01:57:22.779802   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:25.280281   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:22.410052   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:22.410092   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:22.462289   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:22.462326   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:24.978560   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:24.992533   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:24.992601   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:25.027708   71929 cri.go:89] found id: ""
	I0717 01:57:25.027746   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.027754   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:25.027760   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:25.027809   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:25.066946   71929 cri.go:89] found id: ""
	I0717 01:57:25.066974   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.066985   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:25.066992   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:25.067051   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:25.107209   71929 cri.go:89] found id: ""
	I0717 01:57:25.107238   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.107248   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:25.107254   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:25.107300   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:25.141548   71929 cri.go:89] found id: ""
	I0717 01:57:25.141577   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.141587   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:25.141594   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:25.141652   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:25.175822   71929 cri.go:89] found id: ""
	I0717 01:57:25.175853   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.175861   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:25.175866   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:25.175917   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:25.215672   71929 cri.go:89] found id: ""
	I0717 01:57:25.215705   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.215718   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:25.215726   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:25.215786   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:25.260392   71929 cri.go:89] found id: ""
	I0717 01:57:25.260422   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.260434   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:25.260442   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:25.260510   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:25.309953   71929 cri.go:89] found id: ""
	I0717 01:57:25.309981   71929 logs.go:276] 0 containers: []
	W0717 01:57:25.309990   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:25.309999   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:25.310013   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:25.414204   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:25.414229   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:25.414244   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:25.501849   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:25.501883   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:25.545129   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:25.545163   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:25.599948   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:25.599984   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:24.803870   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:27.302993   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:26.932319   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:28.932999   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:27.280455   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:29.778817   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:28.115776   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:28.129710   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:28.129776   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:28.165380   71929 cri.go:89] found id: ""
	I0717 01:57:28.165409   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.165419   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:28.165425   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:28.165473   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:28.199225   71929 cri.go:89] found id: ""
	I0717 01:57:28.199251   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.199259   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:28.199264   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:28.199314   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:28.235564   71929 cri.go:89] found id: ""
	I0717 01:57:28.235585   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.235593   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:28.235598   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:28.235649   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:28.270377   71929 cri.go:89] found id: ""
	I0717 01:57:28.270409   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.270427   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:28.270435   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:28.270488   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:28.310132   71929 cri.go:89] found id: ""
	I0717 01:57:28.310156   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.310163   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:28.310168   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:28.310222   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:28.347590   71929 cri.go:89] found id: ""
	I0717 01:57:28.347619   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.347630   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:28.347638   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:28.347696   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:28.387953   71929 cri.go:89] found id: ""
	I0717 01:57:28.387988   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.388001   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:28.388010   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:28.388072   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:28.428788   71929 cri.go:89] found id: ""
	I0717 01:57:28.428811   71929 logs.go:276] 0 containers: []
	W0717 01:57:28.428818   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:28.428826   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:28.428838   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:28.487411   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:28.487465   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:28.501121   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:28.501152   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:28.576296   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:28.576320   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:28.576335   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:28.660246   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:28.660288   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:31.201238   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:31.221132   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:31.221192   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:31.279839   71929 cri.go:89] found id: ""
	I0717 01:57:31.279867   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.279876   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:31.279884   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:31.279943   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:31.359764   71929 cri.go:89] found id: ""
	I0717 01:57:31.359796   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.359807   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:31.359814   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:31.359873   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:31.397045   71929 cri.go:89] found id: ""
	I0717 01:57:31.397077   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.397087   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:31.397094   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:31.397157   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:31.441356   71929 cri.go:89] found id: ""
	I0717 01:57:31.441388   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.441397   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:31.441404   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:31.441459   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:31.484014   71929 cri.go:89] found id: ""
	I0717 01:57:31.484040   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.484053   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:31.484060   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:31.484124   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:31.520686   71929 cri.go:89] found id: ""
	I0717 01:57:31.520714   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.520725   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:31.520733   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:31.520792   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:31.557300   71929 cri.go:89] found id: ""
	I0717 01:57:31.557326   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.557334   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:31.557339   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:31.557387   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:31.597753   71929 cri.go:89] found id: ""
	I0717 01:57:31.597782   71929 logs.go:276] 0 containers: []
	W0717 01:57:31.597792   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:31.597804   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:31.597818   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:31.656796   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:31.656837   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:31.671287   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:31.671311   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:31.742752   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:31.742772   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:31.742784   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:31.828154   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:31.828186   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:29.303279   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:31.303332   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:31.434410   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:33.932319   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:31.778853   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:33.780535   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:34.368947   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:34.384323   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:34.384402   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:34.421138   71929 cri.go:89] found id: ""
	I0717 01:57:34.421171   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.421182   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:34.421190   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:34.421263   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:34.459077   71929 cri.go:89] found id: ""
	I0717 01:57:34.459105   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.459116   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:34.459123   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:34.459180   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:34.492987   71929 cri.go:89] found id: ""
	I0717 01:57:34.493016   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.493027   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:34.493038   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:34.493098   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:34.527801   71929 cri.go:89] found id: ""
	I0717 01:57:34.527827   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.527836   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:34.527841   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:34.527890   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:34.562877   71929 cri.go:89] found id: ""
	I0717 01:57:34.562904   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.562914   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:34.562921   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:34.562981   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:34.599387   71929 cri.go:89] found id: ""
	I0717 01:57:34.599409   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.599417   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:34.599423   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:34.599479   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:34.636087   71929 cri.go:89] found id: ""
	I0717 01:57:34.636118   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.636126   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:34.636132   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:34.636194   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:34.673168   71929 cri.go:89] found id: ""
	I0717 01:57:34.673196   71929 logs.go:276] 0 containers: []
	W0717 01:57:34.673206   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:34.673214   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:34.673226   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:34.712833   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:34.712864   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:34.765926   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:34.765959   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:34.780024   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:34.780049   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:34.863080   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:34.863106   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:34.863122   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:33.803621   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:36.306114   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:35.933050   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:38.432520   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:36.280143   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:38.779168   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:37.446644   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:37.463015   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:37.463090   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:37.499563   71929 cri.go:89] found id: ""
	I0717 01:57:37.499592   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.499601   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:37.499607   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:37.499663   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:37.538516   71929 cri.go:89] found id: ""
	I0717 01:57:37.538543   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.538572   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:37.538579   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:37.538638   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:37.577032   71929 cri.go:89] found id: ""
	I0717 01:57:37.577061   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.577068   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:37.577074   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:37.577129   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:37.613534   71929 cri.go:89] found id: ""
	I0717 01:57:37.613563   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.613574   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:37.613582   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:37.613646   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:37.651346   71929 cri.go:89] found id: ""
	I0717 01:57:37.651370   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.651381   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:37.651389   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:37.651451   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:37.685949   71929 cri.go:89] found id: ""
	I0717 01:57:37.685989   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.686001   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:37.686008   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:37.686068   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:37.721706   71929 cri.go:89] found id: ""
	I0717 01:57:37.721744   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.721752   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:37.721759   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:37.721812   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:37.758948   71929 cri.go:89] found id: ""
	I0717 01:57:37.758976   71929 logs.go:276] 0 containers: []
	W0717 01:57:37.758985   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:37.758994   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:37.759005   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:37.835305   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:37.835334   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:37.835349   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:37.916627   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:37.916660   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:37.956819   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:37.956851   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:38.007596   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:38.007641   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:40.522573   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:40.536850   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:40.536924   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:40.576172   71929 cri.go:89] found id: ""
	I0717 01:57:40.576200   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.576211   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:40.576218   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:40.576277   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:40.611926   71929 cri.go:89] found id: ""
	I0717 01:57:40.611958   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.611969   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:40.611976   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:40.612039   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:40.647225   71929 cri.go:89] found id: ""
	I0717 01:57:40.647251   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.647259   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:40.647265   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:40.647315   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:40.683871   71929 cri.go:89] found id: ""
	I0717 01:57:40.683902   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.683917   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:40.683925   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:40.683999   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:40.720941   71929 cri.go:89] found id: ""
	I0717 01:57:40.720971   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.720982   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:40.720989   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:40.721053   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:40.756695   71929 cri.go:89] found id: ""
	I0717 01:57:40.756728   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.756739   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:40.756746   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:40.756801   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:40.794181   71929 cri.go:89] found id: ""
	I0717 01:57:40.794214   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.794221   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:40.794226   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:40.794281   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:40.830361   71929 cri.go:89] found id: ""
	I0717 01:57:40.830396   71929 logs.go:276] 0 containers: []
	W0717 01:57:40.830407   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:40.830417   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:40.830436   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:40.844827   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:40.844849   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:40.913003   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:40.913021   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:40.913035   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:40.996314   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:40.996348   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:41.041120   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:41.041151   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:38.801850   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:40.802727   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:42.802814   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:40.934130   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:43.432799   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:40.780350   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:43.279200   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:45.279971   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:43.593226   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:43.606395   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:43.606461   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:43.646260   71929 cri.go:89] found id: ""
	I0717 01:57:43.646290   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.646302   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:43.646310   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:43.646368   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:43.681148   71929 cri.go:89] found id: ""
	I0717 01:57:43.681174   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.681182   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:43.681189   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:43.681250   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:43.716568   71929 cri.go:89] found id: ""
	I0717 01:57:43.716595   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.716606   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:43.716613   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:43.716675   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:43.750507   71929 cri.go:89] found id: ""
	I0717 01:57:43.750536   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.750558   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:43.750566   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:43.750627   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:43.787207   71929 cri.go:89] found id: ""
	I0717 01:57:43.787234   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.787244   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:43.787251   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:43.787311   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:43.822997   71929 cri.go:89] found id: ""
	I0717 01:57:43.823034   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.823045   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:43.823052   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:43.823118   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:43.860605   71929 cri.go:89] found id: ""
	I0717 01:57:43.860632   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.860640   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:43.860646   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:43.860702   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:43.897419   71929 cri.go:89] found id: ""
	I0717 01:57:43.897451   71929 logs.go:276] 0 containers: []
	W0717 01:57:43.897463   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:43.897473   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:43.897492   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:43.956361   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:43.956393   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:43.971077   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:43.971104   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:44.045234   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:44.045258   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:44.045275   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:44.122508   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:44.122544   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:46.660516   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:46.675555   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:46.675651   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:46.709264   71929 cri.go:89] found id: ""
	I0717 01:57:46.709291   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.709300   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:46.709306   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:46.709362   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:46.744865   71929 cri.go:89] found id: ""
	I0717 01:57:46.744898   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.744908   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:46.744915   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:46.744971   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:46.785837   71929 cri.go:89] found id: ""
	I0717 01:57:46.785860   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.785870   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:46.785878   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:46.785932   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:46.828801   71929 cri.go:89] found id: ""
	I0717 01:57:46.828832   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.828842   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:46.828849   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:46.828907   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:46.863122   71929 cri.go:89] found id: ""
	I0717 01:57:46.863151   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.863162   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:46.863175   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:46.863232   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:46.900705   71929 cri.go:89] found id: ""
	I0717 01:57:46.900731   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.900739   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:46.900744   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:46.900790   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:46.935774   71929 cri.go:89] found id: ""
	I0717 01:57:46.935816   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.935829   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:46.935840   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:46.935895   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:46.969274   71929 cri.go:89] found id: ""
	I0717 01:57:46.969304   71929 logs.go:276] 0 containers: []
	W0717 01:57:46.969315   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:46.969325   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:46.969339   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:47.040318   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:47.040343   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:47.040358   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:47.119920   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:47.119954   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:47.168818   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:47.168847   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:47.221983   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:47.222034   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:45.303812   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:47.304051   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:45.433020   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:47.932755   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:49.936075   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:47.780328   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:49.781850   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:49.736564   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:49.749966   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:49.750025   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:49.788294   71929 cri.go:89] found id: ""
	I0717 01:57:49.788321   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.788332   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:49.788339   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:49.788396   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:49.826406   71929 cri.go:89] found id: ""
	I0717 01:57:49.826431   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.826440   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:49.826445   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:49.826491   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:49.864978   71929 cri.go:89] found id: ""
	I0717 01:57:49.865005   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.865015   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:49.865020   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:49.865074   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:49.901238   71929 cri.go:89] found id: ""
	I0717 01:57:49.901270   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.901281   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:49.901300   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:49.901366   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:49.937035   71929 cri.go:89] found id: ""
	I0717 01:57:49.937058   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.937065   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:49.937070   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:49.937207   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:49.977793   71929 cri.go:89] found id: ""
	I0717 01:57:49.977816   71929 logs.go:276] 0 containers: []
	W0717 01:57:49.977823   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:49.977828   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:49.977873   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:50.012915   71929 cri.go:89] found id: ""
	I0717 01:57:50.012942   71929 logs.go:276] 0 containers: []
	W0717 01:57:50.012952   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:50.012959   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:50.013025   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:50.049085   71929 cri.go:89] found id: ""
	I0717 01:57:50.049115   71929 logs.go:276] 0 containers: []
	W0717 01:57:50.049127   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:50.049138   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:50.049156   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:50.087521   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:50.087549   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:50.140934   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:50.140978   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:50.156001   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:50.156033   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:50.231780   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:50.231811   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:50.231835   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:49.802916   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:51.803036   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:52.432307   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:54.432384   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:52.278585   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:54.279641   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:52.810064   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:52.823442   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:52.823508   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:52.860753   71929 cri.go:89] found id: ""
	I0717 01:57:52.860778   71929 logs.go:276] 0 containers: []
	W0717 01:57:52.860789   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:52.860797   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:52.860852   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:52.896264   71929 cri.go:89] found id: ""
	I0717 01:57:52.896289   71929 logs.go:276] 0 containers: []
	W0717 01:57:52.896297   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:52.896303   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:52.896349   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:52.932613   71929 cri.go:89] found id: ""
	I0717 01:57:52.932640   71929 logs.go:276] 0 containers: []
	W0717 01:57:52.932649   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:52.932657   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:52.932722   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:52.969691   71929 cri.go:89] found id: ""
	I0717 01:57:52.969720   71929 logs.go:276] 0 containers: []
	W0717 01:57:52.969728   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:52.969734   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:52.969788   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:53.007039   71929 cri.go:89] found id: ""
	I0717 01:57:53.007067   71929 logs.go:276] 0 containers: []
	W0717 01:57:53.007075   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:53.007081   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:53.007135   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:53.047736   71929 cri.go:89] found id: ""
	I0717 01:57:53.047762   71929 logs.go:276] 0 containers: []
	W0717 01:57:53.047772   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:53.047778   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:53.047838   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:53.083192   71929 cri.go:89] found id: ""
	I0717 01:57:53.083216   71929 logs.go:276] 0 containers: []
	W0717 01:57:53.083225   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:53.083230   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:53.083276   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:53.118509   71929 cri.go:89] found id: ""
	I0717 01:57:53.118536   71929 logs.go:276] 0 containers: []
	W0717 01:57:53.118545   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:53.118564   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:53.118589   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:53.203003   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:53.203039   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:53.244602   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:53.244627   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:53.295180   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:53.295216   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:53.310777   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:53.310805   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:53.389412   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:55.890450   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:55.903768   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:55.903843   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:55.944148   71929 cri.go:89] found id: ""
	I0717 01:57:55.944171   71929 logs.go:276] 0 containers: []
	W0717 01:57:55.944179   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:55.944185   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:55.944231   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:55.979945   71929 cri.go:89] found id: ""
	I0717 01:57:55.979970   71929 logs.go:276] 0 containers: []
	W0717 01:57:55.979980   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:55.979987   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:55.980045   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:56.019057   71929 cri.go:89] found id: ""
	I0717 01:57:56.019089   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.019100   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:56.019107   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:56.019162   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:56.054343   71929 cri.go:89] found id: ""
	I0717 01:57:56.054369   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.054378   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:56.054383   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:56.054434   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:56.091150   71929 cri.go:89] found id: ""
	I0717 01:57:56.091179   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.091189   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:56.091197   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:56.091256   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:56.127502   71929 cri.go:89] found id: ""
	I0717 01:57:56.127528   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.127538   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:56.127547   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:56.127602   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:56.167935   71929 cri.go:89] found id: ""
	I0717 01:57:56.167961   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.167972   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:56.167979   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:56.168048   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:56.209501   71929 cri.go:89] found id: ""
	I0717 01:57:56.209527   71929 logs.go:276] 0 containers: []
	W0717 01:57:56.209537   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:56.209547   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:56.209561   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:56.257989   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:56.258023   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:56.272491   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:56.272519   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:56.361622   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:56.361653   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:56.361668   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:56.442953   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:56.442992   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:57:54.302376   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:56.303297   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:56.933123   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:58.933242   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:56.280399   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:58.779285   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:57:58.983914   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:57:58.997215   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:57:58.997292   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:57:59.032937   71929 cri.go:89] found id: ""
	I0717 01:57:59.032964   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.032980   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:57:59.032996   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:57:59.033057   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:57:59.067790   71929 cri.go:89] found id: ""
	I0717 01:57:59.067811   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.067819   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:57:59.067825   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:57:59.067881   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:57:59.107659   71929 cri.go:89] found id: ""
	I0717 01:57:59.107689   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.107699   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:57:59.107705   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:57:59.107754   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:57:59.150134   71929 cri.go:89] found id: ""
	I0717 01:57:59.150158   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.150168   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:57:59.150175   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:57:59.150235   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:57:59.192351   71929 cri.go:89] found id: ""
	I0717 01:57:59.192381   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.192391   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:57:59.192398   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:57:59.192460   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:57:59.228177   71929 cri.go:89] found id: ""
	I0717 01:57:59.228202   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.228209   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:57:59.228215   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:57:59.228261   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:57:59.267016   71929 cri.go:89] found id: ""
	I0717 01:57:59.267043   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.267052   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:57:59.267058   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:57:59.267109   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:57:59.302235   71929 cri.go:89] found id: ""
	I0717 01:57:59.302257   71929 logs.go:276] 0 containers: []
	W0717 01:57:59.302263   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:57:59.302273   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:57:59.302285   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:59.368453   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:57:59.368492   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:57:59.383375   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:57:59.383399   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:57:59.454946   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:57:59.454975   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:57:59.454992   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:57:59.539576   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:57:59.539609   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:02.085516   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:02.099848   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:02.099909   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:02.136835   71929 cri.go:89] found id: ""
	I0717 01:58:02.136859   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.136867   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:02.136872   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:02.136928   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:02.175304   71929 cri.go:89] found id: ""
	I0717 01:58:02.175331   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.175338   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:02.175344   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:02.175389   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:02.210922   71929 cri.go:89] found id: ""
	I0717 01:58:02.210947   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.210955   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:02.210961   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:02.211018   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:02.246952   71929 cri.go:89] found id: ""
	I0717 01:58:02.246983   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.246992   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:02.246999   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:02.247053   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:02.284857   71929 cri.go:89] found id: ""
	I0717 01:58:02.284883   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.284892   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:02.284897   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:02.284944   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:02.322941   71929 cri.go:89] found id: ""
	I0717 01:58:02.322978   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.322999   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:02.323007   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:02.323065   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:02.357904   71929 cri.go:89] found id: ""
	I0717 01:58:02.357932   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.357943   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:02.357950   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:02.358012   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:02.392291   71929 cri.go:89] found id: ""
	I0717 01:58:02.392315   71929 logs.go:276] 0 containers: []
	W0717 01:58:02.392322   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:02.392331   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:02.392346   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:57:58.802622   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:01.303663   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:01.433212   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:03.433962   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:00.779479   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:02.779619   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:05.279590   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:02.447670   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:02.447704   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:02.462259   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:02.462284   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:02.534304   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:02.534332   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:02.534347   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:02.612757   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:02.612799   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:05.153573   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:05.166702   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:05.166775   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:05.205213   71929 cri.go:89] found id: ""
	I0717 01:58:05.205238   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.205247   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:05.205252   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:05.205305   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:05.242021   71929 cri.go:89] found id: ""
	I0717 01:58:05.242048   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.242057   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:05.242063   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:05.242118   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:05.281862   71929 cri.go:89] found id: ""
	I0717 01:58:05.281889   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.281900   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:05.281908   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:05.281967   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:05.318125   71929 cri.go:89] found id: ""
	I0717 01:58:05.318157   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.318169   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:05.318177   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:05.318244   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:05.352470   71929 cri.go:89] found id: ""
	I0717 01:58:05.352504   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.352516   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:05.352524   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:05.352595   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:05.386692   71929 cri.go:89] found id: ""
	I0717 01:58:05.386722   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.386733   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:05.386741   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:05.386803   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:05.426676   71929 cri.go:89] found id: ""
	I0717 01:58:05.426731   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.426744   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:05.426751   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:05.426811   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:05.467974   71929 cri.go:89] found id: ""
	I0717 01:58:05.468000   71929 logs.go:276] 0 containers: []
	W0717 01:58:05.468010   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:05.468020   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:05.468036   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:05.506769   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:05.506797   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:05.561745   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:05.561782   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:05.576743   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:05.576775   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:05.652856   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:05.652887   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:05.652903   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:03.304109   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:05.803632   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:05.434411   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:07.931796   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:09.932902   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:07.779196   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:09.779591   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:08.244185   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:08.257343   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:08.257420   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:08.297136   71929 cri.go:89] found id: ""
	I0717 01:58:08.297163   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.297174   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:08.297181   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:08.297237   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:08.336099   71929 cri.go:89] found id: ""
	I0717 01:58:08.336121   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.336129   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:08.336135   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:08.336185   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:08.369668   71929 cri.go:89] found id: ""
	I0717 01:58:08.369690   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.369698   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:08.369706   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:08.369756   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:08.405140   71929 cri.go:89] found id: ""
	I0717 01:58:08.405171   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.405179   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:08.405186   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:08.405249   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:08.446296   71929 cri.go:89] found id: ""
	I0717 01:58:08.446319   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.446326   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:08.446331   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:08.446377   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:08.483004   71929 cri.go:89] found id: ""
	I0717 01:58:08.483042   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.483062   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:08.483070   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:08.483139   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:08.520668   71929 cri.go:89] found id: ""
	I0717 01:58:08.520699   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.520710   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:08.520717   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:08.520776   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:08.554711   71929 cri.go:89] found id: ""
	I0717 01:58:08.554734   71929 logs.go:276] 0 containers: []
	W0717 01:58:08.554744   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:08.554752   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:08.554763   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:08.606972   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:08.607004   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:08.621102   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:08.621134   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:08.690424   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:08.690443   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:08.690454   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:08.775151   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:08.775193   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:11.318471   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:11.331875   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:11.331954   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:11.375766   71929 cri.go:89] found id: ""
	I0717 01:58:11.375787   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.375795   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:11.375801   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:11.375863   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:11.417043   71929 cri.go:89] found id: ""
	I0717 01:58:11.417080   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.417103   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:11.417111   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:11.417169   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:11.459462   71929 cri.go:89] found id: ""
	I0717 01:58:11.459487   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.459495   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:11.459500   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:11.459551   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:11.516500   71929 cri.go:89] found id: ""
	I0717 01:58:11.516525   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.516533   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:11.516539   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:11.516590   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:11.573916   71929 cri.go:89] found id: ""
	I0717 01:58:11.573961   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.575159   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:11.575201   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:11.575275   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:11.619446   71929 cri.go:89] found id: ""
	I0717 01:58:11.619477   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.619489   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:11.619497   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:11.619558   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:11.654766   71929 cri.go:89] found id: ""
	I0717 01:58:11.654793   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.654802   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:11.654807   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:11.654859   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:11.690306   71929 cri.go:89] found id: ""
	I0717 01:58:11.690335   71929 logs.go:276] 0 containers: []
	W0717 01:58:11.690346   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:11.690354   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:11.690366   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:11.744470   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:11.744516   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:11.758824   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:11.758856   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:11.841028   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:11.841058   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:11.841076   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:11.923299   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:11.923351   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:08.303010   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:10.303678   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:12.803090   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:11.933148   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:14.433109   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:12.280292   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:14.281580   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:14.466666   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:14.479676   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:14.479740   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:14.517890   71929 cri.go:89] found id: ""
	I0717 01:58:14.517919   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.517931   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:14.517938   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:14.517998   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:14.552891   71929 cri.go:89] found id: ""
	I0717 01:58:14.552918   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.552926   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:14.552931   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:14.552992   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:14.593571   71929 cri.go:89] found id: ""
	I0717 01:58:14.593596   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.593604   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:14.593609   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:14.593662   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:14.628869   71929 cri.go:89] found id: ""
	I0717 01:58:14.628897   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.628907   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:14.628913   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:14.628972   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:14.663558   71929 cri.go:89] found id: ""
	I0717 01:58:14.663586   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.663593   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:14.663599   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:14.663644   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:14.700788   71929 cri.go:89] found id: ""
	I0717 01:58:14.700824   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.700834   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:14.700843   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:14.700903   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:14.737975   71929 cri.go:89] found id: ""
	I0717 01:58:14.738014   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.738025   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:14.738032   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:14.738091   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:14.775419   71929 cri.go:89] found id: ""
	I0717 01:58:14.775443   71929 logs.go:276] 0 containers: []
	W0717 01:58:14.775453   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:14.775465   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:14.775479   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:14.817635   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:14.817661   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:14.870667   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:14.870705   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:14.885208   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:14.885235   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:14.962286   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:14.962318   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:14.962334   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:14.803624   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:17.303944   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:16.434108   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:18.934577   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:16.779538   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:18.780694   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:17.537546   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:17.550258   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:17.550322   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:17.586251   71929 cri.go:89] found id: ""
	I0717 01:58:17.586278   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.586286   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:17.586292   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:17.586348   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:17.620903   71929 cri.go:89] found id: ""
	I0717 01:58:17.620927   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.620935   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:17.620941   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:17.620992   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:17.659292   71929 cri.go:89] found id: ""
	I0717 01:58:17.659319   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.659328   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:17.659334   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:17.659384   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:17.695603   71929 cri.go:89] found id: ""
	I0717 01:58:17.695632   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.695642   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:17.695650   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:17.695711   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:17.731943   71929 cri.go:89] found id: ""
	I0717 01:58:17.731970   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.731978   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:17.731984   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:17.732041   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:17.767257   71929 cri.go:89] found id: ""
	I0717 01:58:17.767284   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.767293   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:17.767299   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:17.767357   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:17.802455   71929 cri.go:89] found id: ""
	I0717 01:58:17.802495   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.802508   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:17.802516   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:17.802602   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:17.839321   71929 cri.go:89] found id: ""
	I0717 01:58:17.839351   71929 logs.go:276] 0 containers: []
	W0717 01:58:17.839362   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:17.839374   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:17.839391   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:17.912269   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:17.912295   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:17.912311   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:17.990005   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:17.990038   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:18.029933   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:18.029960   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:18.081941   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:18.081977   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:20.597325   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:20.611835   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:20.611901   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:20.647899   71929 cri.go:89] found id: ""
	I0717 01:58:20.647922   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.647931   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:20.647936   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:20.647984   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:20.683783   71929 cri.go:89] found id: ""
	I0717 01:58:20.683816   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.683827   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:20.683834   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:20.683892   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:20.721803   71929 cri.go:89] found id: ""
	I0717 01:58:20.721833   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.721844   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:20.721851   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:20.721910   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:20.756148   71929 cri.go:89] found id: ""
	I0717 01:58:20.756177   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.756189   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:20.756196   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:20.756259   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:20.795976   71929 cri.go:89] found id: ""
	I0717 01:58:20.796014   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.796028   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:20.796036   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:20.796095   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:20.833775   71929 cri.go:89] found id: ""
	I0717 01:58:20.833805   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.833816   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:20.833824   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:20.833891   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:20.869138   71929 cri.go:89] found id: ""
	I0717 01:58:20.869163   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.869173   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:20.869180   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:20.869237   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:20.904865   71929 cri.go:89] found id: ""
	I0717 01:58:20.904893   71929 logs.go:276] 0 containers: []
	W0717 01:58:20.904901   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:20.904910   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:20.904920   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:20.947268   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:20.947294   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:20.998541   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:20.998582   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:21.013797   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:21.013828   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:21.085101   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:21.085127   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:21.085141   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:19.804949   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:22.304273   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:21.436176   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:23.933548   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:21.279177   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:23.279599   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:25.279899   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:23.667361   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:23.681768   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:23.681828   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:23.717721   71929 cri.go:89] found id: ""
	I0717 01:58:23.717748   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.717757   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:23.717763   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:23.717827   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:23.752699   71929 cri.go:89] found id: ""
	I0717 01:58:23.752728   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.752738   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:23.752745   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:23.752809   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:23.790914   71929 cri.go:89] found id: ""
	I0717 01:58:23.790944   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.790955   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:23.790962   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:23.791021   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:23.827253   71929 cri.go:89] found id: ""
	I0717 01:58:23.827276   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.827285   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:23.827338   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:23.827392   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:23.864466   71929 cri.go:89] found id: ""
	I0717 01:58:23.864510   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.864520   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:23.864527   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:23.864577   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:23.900734   71929 cri.go:89] found id: ""
	I0717 01:58:23.900775   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.900786   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:23.900794   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:23.900855   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:23.937212   71929 cri.go:89] found id: ""
	I0717 01:58:23.937236   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.937243   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:23.937249   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:23.937304   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:23.973730   71929 cri.go:89] found id: ""
	I0717 01:58:23.973755   71929 logs.go:276] 0 containers: []
	W0717 01:58:23.973764   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:23.973774   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:23.973786   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:24.026122   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:24.026163   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:24.040755   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:24.040784   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:24.112224   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:24.112254   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:24.112277   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:24.195247   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:24.195281   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:26.738042   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:26.751545   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:26.751602   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:26.786778   71929 cri.go:89] found id: ""
	I0717 01:58:26.786813   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.786824   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:26.786831   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:26.786889   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:26.828776   71929 cri.go:89] found id: ""
	I0717 01:58:26.828806   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.828818   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:26.828825   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:26.828887   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:26.868439   71929 cri.go:89] found id: ""
	I0717 01:58:26.868468   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.868479   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:26.868486   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:26.868546   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:26.900249   71929 cri.go:89] found id: ""
	I0717 01:58:26.900282   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.900292   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:26.900297   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:26.900344   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:26.933763   71929 cri.go:89] found id: ""
	I0717 01:58:26.933798   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.933808   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:26.933816   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:26.933882   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:26.968681   71929 cri.go:89] found id: ""
	I0717 01:58:26.968712   71929 logs.go:276] 0 containers: []
	W0717 01:58:26.968722   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:26.968729   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:26.968788   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:27.002081   71929 cri.go:89] found id: ""
	I0717 01:58:27.002113   71929 logs.go:276] 0 containers: []
	W0717 01:58:27.002128   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:27.002135   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:27.002196   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:27.035138   71929 cri.go:89] found id: ""
	I0717 01:58:27.035161   71929 logs.go:276] 0 containers: []
	W0717 01:58:27.035170   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:27.035177   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:27.035189   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:27.091207   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:27.091244   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:27.105765   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:27.105793   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:27.175533   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:27.175563   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:27.175580   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:27.260903   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:27.260951   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:24.802002   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:26.803330   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:26.432259   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:28.433226   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:27.280206   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:29.781139   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:29.802451   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:29.816503   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:29.816573   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:29.854887   71929 cri.go:89] found id: ""
	I0717 01:58:29.854921   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.854931   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:29.854936   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:29.854983   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:29.887529   71929 cri.go:89] found id: ""
	I0717 01:58:29.887559   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.887570   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:29.887577   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:29.887638   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:29.924995   71929 cri.go:89] found id: ""
	I0717 01:58:29.925020   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.925028   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:29.925034   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:29.925091   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:29.960064   71929 cri.go:89] found id: ""
	I0717 01:58:29.960092   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.960104   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:29.960111   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:29.960178   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:29.995408   71929 cri.go:89] found id: ""
	I0717 01:58:29.995431   71929 logs.go:276] 0 containers: []
	W0717 01:58:29.995438   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:29.995443   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:29.995494   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:30.028219   71929 cri.go:89] found id: ""
	I0717 01:58:30.028247   71929 logs.go:276] 0 containers: []
	W0717 01:58:30.028254   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:30.028260   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:30.028309   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:30.062529   71929 cri.go:89] found id: ""
	I0717 01:58:30.062576   71929 logs.go:276] 0 containers: []
	W0717 01:58:30.062589   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:30.062597   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:30.062664   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:30.095854   71929 cri.go:89] found id: ""
	I0717 01:58:30.095882   71929 logs.go:276] 0 containers: []
	W0717 01:58:30.095893   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:30.095904   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:30.095919   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:30.148083   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:30.148114   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:30.161861   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:30.161892   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:30.236474   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:30.236503   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:30.236519   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:30.319691   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:30.319720   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:28.804656   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:31.302637   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:30.932659   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:32.934225   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:32.279141   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:34.279312   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:32.867821   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:32.881480   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:32.881541   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:32.918289   71929 cri.go:89] found id: ""
	I0717 01:58:32.918316   71929 logs.go:276] 0 containers: []
	W0717 01:58:32.918327   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:32.918335   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:32.918396   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:32.955383   71929 cri.go:89] found id: ""
	I0717 01:58:32.955417   71929 logs.go:276] 0 containers: []
	W0717 01:58:32.955426   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:32.955433   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:32.955498   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:32.990432   71929 cri.go:89] found id: ""
	I0717 01:58:32.990460   71929 logs.go:276] 0 containers: []
	W0717 01:58:32.990467   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:32.990472   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:32.990531   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:33.034653   71929 cri.go:89] found id: ""
	I0717 01:58:33.034685   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.034697   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:33.034703   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:33.034763   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:33.077875   71929 cri.go:89] found id: ""
	I0717 01:58:33.077911   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.077919   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:33.077926   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:33.077988   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:33.114800   71929 cri.go:89] found id: ""
	I0717 01:58:33.114840   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.114852   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:33.114864   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:33.114946   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:33.151095   71929 cri.go:89] found id: ""
	I0717 01:58:33.151229   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.151242   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:33.151249   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:33.151324   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:33.190100   71929 cri.go:89] found id: ""
	I0717 01:58:33.190128   71929 logs.go:276] 0 containers: []
	W0717 01:58:33.190138   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:33.190149   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:33.190163   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:33.271195   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:33.271231   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:33.317539   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:33.317569   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:33.370188   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:33.370224   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:33.385016   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:33.385045   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:33.460017   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:35.960499   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:35.974504   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:35.974583   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:36.008652   71929 cri.go:89] found id: ""
	I0717 01:58:36.008696   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.008704   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:36.008710   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:36.008770   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:36.044068   71929 cri.go:89] found id: ""
	I0717 01:58:36.044097   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.044106   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:36.044113   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:36.044174   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:36.083572   71929 cri.go:89] found id: ""
	I0717 01:58:36.083602   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.083610   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:36.083616   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:36.083682   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:36.116716   71929 cri.go:89] found id: ""
	I0717 01:58:36.116744   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.116753   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:36.116761   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:36.116820   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:36.156042   71929 cri.go:89] found id: ""
	I0717 01:58:36.156069   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.156080   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:36.156087   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:36.156148   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:36.192005   71929 cri.go:89] found id: ""
	I0717 01:58:36.192033   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.192045   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:36.192055   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:36.192116   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:36.228720   71929 cri.go:89] found id: ""
	I0717 01:58:36.228751   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.228763   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:36.228769   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:36.228817   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:36.263835   71929 cri.go:89] found id: ""
	I0717 01:58:36.263862   71929 logs.go:276] 0 containers: []
	W0717 01:58:36.263872   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:36.263882   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:36.263897   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:36.278545   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:36.278609   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:36.361182   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:36.361208   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:36.361225   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:36.447797   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:36.447832   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:36.492167   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:36.492196   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:33.304750   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:35.803867   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:35.432659   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:37.433360   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:39.433481   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:36.282525   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:38.779592   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:39.045613   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:39.058615   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:39.058688   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:39.094625   71929 cri.go:89] found id: ""
	I0717 01:58:39.094672   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.094684   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:39.094692   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:39.094755   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:39.132856   71929 cri.go:89] found id: ""
	I0717 01:58:39.132887   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.132898   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:39.132905   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:39.132966   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:39.171017   71929 cri.go:89] found id: ""
	I0717 01:58:39.171037   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.171044   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:39.171051   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:39.171112   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:39.210146   71929 cri.go:89] found id: ""
	I0717 01:58:39.210176   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.210186   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:39.210193   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:39.210269   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:39.244307   71929 cri.go:89] found id: ""
	I0717 01:58:39.244332   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.244342   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:39.244349   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:39.244411   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:39.279649   71929 cri.go:89] found id: ""
	I0717 01:58:39.279675   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.279682   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:39.279688   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:39.279755   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:39.317699   71929 cri.go:89] found id: ""
	I0717 01:58:39.317726   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.317735   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:39.317742   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:39.317789   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:39.352319   71929 cri.go:89] found id: ""
	I0717 01:58:39.352351   71929 logs.go:276] 0 containers: []
	W0717 01:58:39.352365   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:39.352377   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:39.352392   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:39.404153   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:39.404188   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:39.419796   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:39.419828   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:39.495463   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:39.495485   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:39.495499   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:39.576742   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:39.576795   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:42.132481   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:42.145588   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:42.145658   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:42.181231   71929 cri.go:89] found id: ""
	I0717 01:58:42.181257   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.181265   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:42.181270   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:42.181321   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:42.216876   71929 cri.go:89] found id: ""
	I0717 01:58:42.216905   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.216917   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:42.216923   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:42.216984   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:42.256918   71929 cri.go:89] found id: ""
	I0717 01:58:42.256948   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.256959   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:42.256967   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:42.257022   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:42.291930   71929 cri.go:89] found id: ""
	I0717 01:58:42.291957   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.291964   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:42.291975   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:42.292035   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:42.329927   71929 cri.go:89] found id: ""
	I0717 01:58:42.329954   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.329964   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:42.329970   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:42.330014   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:42.364041   71929 cri.go:89] found id: ""
	I0717 01:58:42.364072   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.364085   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:42.364093   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:42.364150   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:38.302060   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:40.302711   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:42.303560   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:41.437100   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:43.932845   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:40.780109   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:43.280118   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:42.400751   71929 cri.go:89] found id: ""
	I0717 01:58:42.400775   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.400784   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:42.400790   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:42.400840   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:42.438200   71929 cri.go:89] found id: ""
	I0717 01:58:42.438228   71929 logs.go:276] 0 containers: []
	W0717 01:58:42.438240   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:42.438251   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:42.438265   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:42.455268   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:42.455303   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:42.537344   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:42.537368   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:42.537381   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:42.618487   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:42.618522   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:42.661273   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:42.661299   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:45.212631   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:45.226247   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:45.226330   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:45.263067   71929 cri.go:89] found id: ""
	I0717 01:58:45.263098   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.263110   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:45.263117   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:45.263177   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:45.299025   71929 cri.go:89] found id: ""
	I0717 01:58:45.299056   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.299067   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:45.299074   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:45.299137   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:45.346828   71929 cri.go:89] found id: ""
	I0717 01:58:45.346858   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.346868   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:45.346876   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:45.346938   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:45.390879   71929 cri.go:89] found id: ""
	I0717 01:58:45.390905   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.390913   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:45.390918   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:45.390966   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:45.426794   71929 cri.go:89] found id: ""
	I0717 01:58:45.426823   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.426834   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:45.426841   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:45.426902   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:45.463834   71929 cri.go:89] found id: ""
	I0717 01:58:45.463863   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.463873   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:45.463880   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:45.463942   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:45.500660   71929 cri.go:89] found id: ""
	I0717 01:58:45.500689   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.500701   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:45.500708   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:45.500766   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:45.537332   71929 cri.go:89] found id: ""
	I0717 01:58:45.537356   71929 logs.go:276] 0 containers: []
	W0717 01:58:45.537364   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:45.537373   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:45.537388   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:45.551194   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:45.551222   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:45.623863   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:45.623892   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:45.623906   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:45.699740   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:45.699782   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:45.739580   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:45.739613   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:44.803138   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:47.302471   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:46.434311   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:48.933004   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:45.779778   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:48.279595   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:48.300789   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:48.315608   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:48.315667   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:48.353050   71929 cri.go:89] found id: ""
	I0717 01:58:48.353076   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.353084   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:48.353089   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:48.353133   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:48.394789   71929 cri.go:89] found id: ""
	I0717 01:58:48.394817   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.394829   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:48.394837   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:48.394900   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:48.433430   71929 cri.go:89] found id: ""
	I0717 01:58:48.433457   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.433468   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:48.433475   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:48.433530   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:48.467215   71929 cri.go:89] found id: ""
	I0717 01:58:48.467243   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.467254   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:48.467262   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:48.467318   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:48.501087   71929 cri.go:89] found id: ""
	I0717 01:58:48.501120   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.501131   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:48.501138   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:48.501204   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:48.538648   71929 cri.go:89] found id: ""
	I0717 01:58:48.538683   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.538696   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:48.538706   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:48.538762   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:48.573006   71929 cri.go:89] found id: ""
	I0717 01:58:48.573030   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.573040   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:48.573047   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:48.573106   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:48.608779   71929 cri.go:89] found id: ""
	I0717 01:58:48.608803   71929 logs.go:276] 0 containers: []
	W0717 01:58:48.608813   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:48.608824   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:48.608837   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:48.659250   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:48.659290   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:48.673418   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:48.673449   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:48.748175   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:48.748196   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:48.748207   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:48.824238   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:48.824274   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:51.367155   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:51.382458   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:51.382527   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:51.424005   71929 cri.go:89] found id: ""
	I0717 01:58:51.424040   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.424051   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:51.424059   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:51.424117   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:51.463318   71929 cri.go:89] found id: ""
	I0717 01:58:51.463348   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.463357   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:51.463363   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:51.463414   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:51.502261   71929 cri.go:89] found id: ""
	I0717 01:58:51.502290   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.502301   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:51.502309   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:51.502362   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:51.536277   71929 cri.go:89] found id: ""
	I0717 01:58:51.536308   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.536319   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:51.536327   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:51.536392   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:51.580598   71929 cri.go:89] found id: ""
	I0717 01:58:51.580629   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.580640   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:51.580648   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:51.580726   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:51.618666   71929 cri.go:89] found id: ""
	I0717 01:58:51.618690   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.618697   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:51.618702   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:51.618747   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:51.654742   71929 cri.go:89] found id: ""
	I0717 01:58:51.654777   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.654790   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:51.654799   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:51.654863   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:51.698006   71929 cri.go:89] found id: ""
	I0717 01:58:51.698034   71929 logs.go:276] 0 containers: []
	W0717 01:58:51.698043   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:51.698051   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:51.698062   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:51.754812   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:51.754852   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:51.771887   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:51.771919   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:51.859627   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:51.859657   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:51.859675   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:51.946633   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:51.946673   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:49.302540   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:51.803884   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:51.433981   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:53.933306   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:50.781428   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:53.279780   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:54.494188   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:54.509111   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:54.509190   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:54.546424   71929 cri.go:89] found id: ""
	I0717 01:58:54.546454   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.546464   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:54.546471   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:54.546532   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:54.586811   71929 cri.go:89] found id: ""
	I0717 01:58:54.586841   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.586853   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:54.586860   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:54.586918   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:54.627350   71929 cri.go:89] found id: ""
	I0717 01:58:54.627375   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.627383   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:54.627388   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:54.627438   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:54.665901   71929 cri.go:89] found id: ""
	I0717 01:58:54.665941   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.665954   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:54.665974   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:54.666041   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:54.702921   71929 cri.go:89] found id: ""
	I0717 01:58:54.702948   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.702958   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:54.702965   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:54.703027   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:54.737378   71929 cri.go:89] found id: ""
	I0717 01:58:54.737406   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.737414   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:54.737421   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:54.737469   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:54.771924   71929 cri.go:89] found id: ""
	I0717 01:58:54.771954   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.771964   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:54.771971   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:54.772055   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:54.812939   71929 cri.go:89] found id: ""
	I0717 01:58:54.812972   71929 logs.go:276] 0 containers: []
	W0717 01:58:54.812983   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:54.812995   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:54.813010   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:54.862979   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:54.863013   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:54.877467   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:54.877504   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:54.953924   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:54.953950   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:54.953963   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:55.032019   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:55.032052   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:54.302727   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:56.311656   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:55.933968   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:58.432611   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:55.778263   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:57.781311   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:00.278937   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:58:57.573130   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:58:57.591689   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:58:57.591762   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:58:57.626444   71929 cri.go:89] found id: ""
	I0717 01:58:57.626469   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.626479   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:58:57.626486   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:58:57.626570   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:58:57.661280   71929 cri.go:89] found id: ""
	I0717 01:58:57.661305   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.661314   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:58:57.661321   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:58:57.661376   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:58:57.695678   71929 cri.go:89] found id: ""
	I0717 01:58:57.695703   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.695711   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:58:57.695717   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:58:57.695762   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:58:57.729705   71929 cri.go:89] found id: ""
	I0717 01:58:57.729734   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.729742   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:58:57.729748   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:58:57.729804   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:58:57.763338   71929 cri.go:89] found id: ""
	I0717 01:58:57.763365   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.763373   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:58:57.763387   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:58:57.763433   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:58:57.800576   71929 cri.go:89] found id: ""
	I0717 01:58:57.800600   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.800608   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:58:57.800615   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:58:57.800701   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:58:57.842401   71929 cri.go:89] found id: ""
	I0717 01:58:57.842428   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.842439   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:58:57.842446   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:58:57.842503   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:58:57.880355   71929 cri.go:89] found id: ""
	I0717 01:58:57.880379   71929 logs.go:276] 0 containers: []
	W0717 01:58:57.880387   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:58:57.880395   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:58:57.880412   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:58:57.938215   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:58:57.938252   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:58:57.952835   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:58:57.952876   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:58:58.027203   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:58:58.027231   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:58:58.027246   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:58:58.108442   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:58:58.108483   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:00.648580   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:00.662596   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:00.662667   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:00.696315   71929 cri.go:89] found id: ""
	I0717 01:59:00.696342   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.696351   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:00.696356   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:00.696411   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:00.732117   71929 cri.go:89] found id: ""
	I0717 01:59:00.732147   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.732158   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:00.732164   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:00.732212   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:00.768747   71929 cri.go:89] found id: ""
	I0717 01:59:00.768779   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.768790   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:00.768797   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:00.768856   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:00.807557   71929 cri.go:89] found id: ""
	I0717 01:59:00.807585   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.807592   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:00.807598   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:00.807651   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:00.844127   71929 cri.go:89] found id: ""
	I0717 01:59:00.844152   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.844161   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:00.844166   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:00.844222   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:00.879565   71929 cri.go:89] found id: ""
	I0717 01:59:00.879590   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.879597   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:00.879613   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:00.879684   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:00.917352   71929 cri.go:89] found id: ""
	I0717 01:59:00.917379   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.917387   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:00.917393   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:00.917440   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:00.952603   71929 cri.go:89] found id: ""
	I0717 01:59:00.952630   71929 logs.go:276] 0 containers: []
	W0717 01:59:00.952637   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:00.952647   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:00.952688   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:01.007203   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:01.007242   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:01.021476   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:01.021512   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:01.102283   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:01.102306   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:01.102320   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:01.175736   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:01.175771   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:58:58.803034   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:00.803718   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:00.932781   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:03.433188   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:02.281269   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:04.779257   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:03.717612   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:03.732446   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:03.732511   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:03.767485   71929 cri.go:89] found id: ""
	I0717 01:59:03.767519   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.767533   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:03.767542   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:03.767607   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:03.803961   71929 cri.go:89] found id: ""
	I0717 01:59:03.803989   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.804000   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:03.804007   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:03.804074   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:03.842734   71929 cri.go:89] found id: ""
	I0717 01:59:03.842768   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.842780   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:03.842788   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:03.842915   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:03.883571   71929 cri.go:89] found id: ""
	I0717 01:59:03.883598   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.883608   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:03.883616   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:03.883682   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:03.922037   71929 cri.go:89] found id: ""
	I0717 01:59:03.922065   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.922076   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:03.922084   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:03.922143   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:03.961135   71929 cri.go:89] found id: ""
	I0717 01:59:03.961165   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.961176   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:03.961183   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:03.961244   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:03.995542   71929 cri.go:89] found id: ""
	I0717 01:59:03.995570   71929 logs.go:276] 0 containers: []
	W0717 01:59:03.995580   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:03.995589   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:03.995647   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:04.030142   71929 cri.go:89] found id: ""
	I0717 01:59:04.030170   71929 logs.go:276] 0 containers: []
	W0717 01:59:04.030178   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:04.030187   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:04.030198   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:04.110329   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:04.110366   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:04.152194   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:04.152224   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:04.204012   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:04.204048   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:04.218261   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:04.218291   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:04.290786   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:06.791166   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:06.806662   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:06.806722   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:06.841447   71929 cri.go:89] found id: ""
	I0717 01:59:06.841476   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.841486   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:06.841494   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:06.841554   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:06.879920   71929 cri.go:89] found id: ""
	I0717 01:59:06.879956   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.879971   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:06.879976   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:06.880033   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:06.914436   71929 cri.go:89] found id: ""
	I0717 01:59:06.914465   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.914476   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:06.914484   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:06.914566   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:06.952098   71929 cri.go:89] found id: ""
	I0717 01:59:06.952127   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.952135   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:06.952141   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:06.952187   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:06.988054   71929 cri.go:89] found id: ""
	I0717 01:59:06.988085   71929 logs.go:276] 0 containers: []
	W0717 01:59:06.988096   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:06.988103   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:06.988168   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:07.026633   71929 cri.go:89] found id: ""
	I0717 01:59:07.026658   71929 logs.go:276] 0 containers: []
	W0717 01:59:07.026670   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:07.026676   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:07.026732   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:07.064433   71929 cri.go:89] found id: ""
	I0717 01:59:07.064454   71929 logs.go:276] 0 containers: []
	W0717 01:59:07.064463   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:07.064468   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:07.064545   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:07.108352   71929 cri.go:89] found id: ""
	I0717 01:59:07.108385   71929 logs.go:276] 0 containers: []
	W0717 01:59:07.108396   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:07.108410   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:07.108428   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:07.163554   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:07.163593   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:07.177221   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:07.177249   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:07.249712   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:07.249735   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:07.249748   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:07.333011   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:07.333044   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:03.303048   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:05.304001   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:07.314317   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:05.932370   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:07.933031   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:09.933728   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:06.780342   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:09.279683   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:09.873187   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:09.887579   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:09.887658   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:09.923675   71929 cri.go:89] found id: ""
	I0717 01:59:09.923706   71929 logs.go:276] 0 containers: []
	W0717 01:59:09.923716   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:09.923724   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:09.923789   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:09.961248   71929 cri.go:89] found id: ""
	I0717 01:59:09.961278   71929 logs.go:276] 0 containers: []
	W0717 01:59:09.961288   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:09.961296   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:09.961354   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:10.000069   71929 cri.go:89] found id: ""
	I0717 01:59:10.000094   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.000101   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:10.000107   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:10.000157   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:10.036784   71929 cri.go:89] found id: ""
	I0717 01:59:10.036808   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.036815   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:10.036820   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:10.036869   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:10.072746   71929 cri.go:89] found id: ""
	I0717 01:59:10.072778   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.072789   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:10.072796   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:10.072856   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:10.109520   71929 cri.go:89] found id: ""
	I0717 01:59:10.109544   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.109552   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:10.109557   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:10.109608   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:10.142521   71929 cri.go:89] found id: ""
	I0717 01:59:10.142565   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.142576   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:10.142584   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:10.142647   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:10.175772   71929 cri.go:89] found id: ""
	I0717 01:59:10.175800   71929 logs.go:276] 0 containers: []
	W0717 01:59:10.175812   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:10.175823   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:10.175837   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:10.213534   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:10.213561   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:10.266449   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:10.266485   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:10.282204   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:10.282234   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:10.353974   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:10.354004   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:10.354017   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:09.802047   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:11.802200   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:12.433722   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:14.932285   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:11.780394   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:13.781691   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:12.936509   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:12.951547   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:12.951616   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:12.987833   71929 cri.go:89] found id: ""
	I0717 01:59:12.987860   71929 logs.go:276] 0 containers: []
	W0717 01:59:12.987868   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:12.987873   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:12.987922   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:13.026500   71929 cri.go:89] found id: ""
	I0717 01:59:13.026529   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.026539   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:13.026546   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:13.026625   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:13.061631   71929 cri.go:89] found id: ""
	I0717 01:59:13.061664   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.061674   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:13.061682   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:13.061745   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:13.099449   71929 cri.go:89] found id: ""
	I0717 01:59:13.099476   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.099487   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:13.099494   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:13.099565   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:13.137271   71929 cri.go:89] found id: ""
	I0717 01:59:13.137299   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.137309   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:13.137317   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:13.137384   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:13.174432   71929 cri.go:89] found id: ""
	I0717 01:59:13.174462   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.174472   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:13.174478   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:13.174539   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:13.212820   71929 cri.go:89] found id: ""
	I0717 01:59:13.212845   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.212855   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:13.212865   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:13.212930   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:13.245961   71929 cri.go:89] found id: ""
	I0717 01:59:13.245993   71929 logs.go:276] 0 containers: []
	W0717 01:59:13.246004   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:13.246014   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:13.246028   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:13.284801   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:13.284828   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:13.338476   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:13.338511   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:13.352751   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:13.352777   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:13.434001   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:13.434035   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:13.434050   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:16.022525   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:16.036863   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:16.036941   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:16.074370   71929 cri.go:89] found id: ""
	I0717 01:59:16.074398   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.074409   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:16.074416   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:16.074476   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:16.112239   71929 cri.go:89] found id: ""
	I0717 01:59:16.112267   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.112276   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:16.112281   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:16.112329   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:16.147398   71929 cri.go:89] found id: ""
	I0717 01:59:16.147422   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.147429   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:16.147435   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:16.147490   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:16.182112   71929 cri.go:89] found id: ""
	I0717 01:59:16.182141   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.182149   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:16.182155   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:16.182203   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:16.219134   71929 cri.go:89] found id: ""
	I0717 01:59:16.219163   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.219174   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:16.219182   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:16.219238   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:16.255892   71929 cri.go:89] found id: ""
	I0717 01:59:16.255924   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.255934   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:16.255943   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:16.256003   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:16.291202   71929 cri.go:89] found id: ""
	I0717 01:59:16.291228   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.291238   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:16.291245   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:16.291304   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:16.330748   71929 cri.go:89] found id: ""
	I0717 01:59:16.330779   71929 logs.go:276] 0 containers: []
	W0717 01:59:16.330790   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:16.330801   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:16.330815   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:16.344628   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:16.344668   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:16.415735   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:16.415761   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:16.415775   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:16.499411   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:16.499449   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:16.541244   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:16.541270   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:13.802477   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:16.311229   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:16.933493   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:18.934299   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:16.279421   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:18.778998   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:19.095060   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:19.107920   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:19.107976   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:19.143446   71929 cri.go:89] found id: ""
	I0717 01:59:19.143476   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.143485   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:19.143490   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:19.143550   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:19.179216   71929 cri.go:89] found id: ""
	I0717 01:59:19.179247   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.179259   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:19.179266   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:19.179317   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:19.212468   71929 cri.go:89] found id: ""
	I0717 01:59:19.212498   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.212508   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:19.212516   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:19.212574   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:19.245019   71929 cri.go:89] found id: ""
	I0717 01:59:19.245047   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.245058   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:19.245065   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:19.245123   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:19.278430   71929 cri.go:89] found id: ""
	I0717 01:59:19.278457   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.278467   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:19.278474   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:19.278530   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:19.317685   71929 cri.go:89] found id: ""
	I0717 01:59:19.317714   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.317722   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:19.317729   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:19.317783   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:19.352938   71929 cri.go:89] found id: ""
	I0717 01:59:19.352974   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.352986   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:19.353000   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:19.353052   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:19.387238   71929 cri.go:89] found id: ""
	I0717 01:59:19.387272   71929 logs.go:276] 0 containers: []
	W0717 01:59:19.387283   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:19.387295   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:19.387314   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:19.440138   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:19.440171   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:19.456372   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:19.456402   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:19.527881   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:19.527906   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:19.527921   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:19.611903   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:19.611937   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:22.160422   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:22.172802   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:22.172862   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:22.209283   71929 cri.go:89] found id: ""
	I0717 01:59:22.209315   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.209327   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:22.209335   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:22.209396   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:22.243927   71929 cri.go:89] found id: ""
	I0717 01:59:22.243955   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.243965   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:22.243972   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:22.244022   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:22.276730   71929 cri.go:89] found id: ""
	I0717 01:59:22.276754   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.276761   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:22.276767   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:22.276814   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:22.319378   71929 cri.go:89] found id: ""
	I0717 01:59:22.319407   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.319418   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:22.319425   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:22.319482   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:22.358272   71929 cri.go:89] found id: ""
	I0717 01:59:22.358298   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.358307   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:22.358312   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:22.358362   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:22.395358   71929 cri.go:89] found id: ""
	I0717 01:59:22.395393   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.395405   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:22.395414   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:22.395477   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:18.802881   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:21.303532   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:21.433636   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:23.932345   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:21.279596   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:23.279700   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:25.280649   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:22.435158   71929 cri.go:89] found id: ""
	I0717 01:59:22.435184   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.435194   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:22.435201   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:22.435248   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:22.471553   71929 cri.go:89] found id: ""
	I0717 01:59:22.471588   71929 logs.go:276] 0 containers: []
	W0717 01:59:22.471595   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:22.471604   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:22.471616   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:22.523133   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:22.523169   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:22.539212   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:22.539246   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:22.615707   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:22.615729   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:22.615744   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:22.696758   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:22.696795   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:25.238496   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:25.252882   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:25.252946   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:25.290173   71929 cri.go:89] found id: ""
	I0717 01:59:25.290197   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.290205   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:25.290210   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:25.290263   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:25.325926   71929 cri.go:89] found id: ""
	I0717 01:59:25.325968   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.325979   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:25.325985   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:25.326032   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:25.358310   71929 cri.go:89] found id: ""
	I0717 01:59:25.358362   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.358371   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:25.358377   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:25.358426   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:25.393575   71929 cri.go:89] found id: ""
	I0717 01:59:25.393605   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.393615   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:25.393622   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:25.393684   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:25.429357   71929 cri.go:89] found id: ""
	I0717 01:59:25.429448   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.429466   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:25.429474   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:25.429546   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:25.466992   71929 cri.go:89] found id: ""
	I0717 01:59:25.467020   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.467028   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:25.467034   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:25.467080   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:25.503545   71929 cri.go:89] found id: ""
	I0717 01:59:25.503575   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.503587   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:25.503594   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:25.503643   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:25.542957   71929 cri.go:89] found id: ""
	I0717 01:59:25.542983   71929 logs.go:276] 0 containers: []
	W0717 01:59:25.542993   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:25.543003   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:25.543015   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:25.598813   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:25.598852   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:25.618060   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:25.618098   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:25.690079   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:25.690105   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:25.690119   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:25.765956   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:25.765994   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:23.803366   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:25.804525   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:25.932447   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:27.933276   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:29.933461   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:27.286160   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:29.781318   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:28.311715   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:28.325493   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:28.325554   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:28.365783   71929 cri.go:89] found id: ""
	I0717 01:59:28.365810   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.365821   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:28.365829   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:28.365885   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:28.401847   71929 cri.go:89] found id: ""
	I0717 01:59:28.401875   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.401883   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:28.401895   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:28.401954   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:28.442236   71929 cri.go:89] found id: ""
	I0717 01:59:28.442261   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.442272   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:28.442278   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:28.442340   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:28.476832   71929 cri.go:89] found id: ""
	I0717 01:59:28.476857   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.476866   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:28.476873   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:28.476928   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:28.512040   71929 cri.go:89] found id: ""
	I0717 01:59:28.512068   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.512075   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:28.512081   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:28.512136   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:28.547516   71929 cri.go:89] found id: ""
	I0717 01:59:28.547547   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.547558   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:28.547566   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:28.547625   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:28.580380   71929 cri.go:89] found id: ""
	I0717 01:59:28.580406   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.580417   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:28.580427   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:28.580485   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:28.616029   71929 cri.go:89] found id: ""
	I0717 01:59:28.616059   71929 logs.go:276] 0 containers: []
	W0717 01:59:28.616069   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:28.616080   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:28.616095   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:28.670188   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:28.670230   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:28.687315   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:28.687355   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:28.763591   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:28.763612   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:28.763627   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:28.848925   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:28.848959   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:31.388294   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:31.404748   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:31.404814   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:31.446437   71929 cri.go:89] found id: ""
	I0717 01:59:31.446468   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.446478   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:31.446484   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:31.446531   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:31.487797   71929 cri.go:89] found id: ""
	I0717 01:59:31.487828   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.487840   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:31.487847   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:31.487895   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:31.525327   71929 cri.go:89] found id: ""
	I0717 01:59:31.525354   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.525368   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:31.525375   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:31.525436   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:31.564106   71929 cri.go:89] found id: ""
	I0717 01:59:31.564154   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.564166   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:31.564173   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:31.564234   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:31.603345   71929 cri.go:89] found id: ""
	I0717 01:59:31.603374   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.603385   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:31.603393   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:31.603456   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:31.641727   71929 cri.go:89] found id: ""
	I0717 01:59:31.641753   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.641769   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:31.641776   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:31.641837   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:31.680825   71929 cri.go:89] found id: ""
	I0717 01:59:31.680856   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.680866   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:31.680873   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:31.680930   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:31.714325   71929 cri.go:89] found id: ""
	I0717 01:59:31.714348   71929 logs.go:276] 0 containers: []
	W0717 01:59:31.714355   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:31.714363   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:31.714374   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:31.765899   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:31.765934   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:31.781417   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:31.781447   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:31.857586   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:31.857607   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:31.857622   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:31.937171   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:31.937197   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:28.304014   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:30.802684   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:32.803604   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:31.933945   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:34.435259   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:31.785641   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:34.279814   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:34.478176   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:34.492153   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:34.492223   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:34.526959   71929 cri.go:89] found id: ""
	I0717 01:59:34.526984   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.526998   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:34.527006   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:34.527064   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:34.564485   71929 cri.go:89] found id: ""
	I0717 01:59:34.564534   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.564546   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:34.564591   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:34.564706   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:34.604611   71929 cri.go:89] found id: ""
	I0717 01:59:34.604637   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.604649   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:34.604657   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:34.604718   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:34.640851   71929 cri.go:89] found id: ""
	I0717 01:59:34.640882   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.640892   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:34.640897   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:34.640956   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:34.675828   71929 cri.go:89] found id: ""
	I0717 01:59:34.675856   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.675863   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:34.675869   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:34.675918   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:34.710468   71929 cri.go:89] found id: ""
	I0717 01:59:34.710496   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.710506   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:34.710514   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:34.710595   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:34.749218   71929 cri.go:89] found id: ""
	I0717 01:59:34.749249   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.749260   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:34.749267   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:34.749328   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:34.784934   71929 cri.go:89] found id: ""
	I0717 01:59:34.784969   71929 logs.go:276] 0 containers: []
	W0717 01:59:34.784979   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:34.784990   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:34.785006   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:34.799836   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:34.799870   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:34.870218   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:34.870239   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:34.870254   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:34.948782   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:34.948817   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:34.992295   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:34.992324   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:34.803649   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:37.304530   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:36.933199   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:39.432504   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:36.280185   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:38.280499   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:37.545759   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:37.559648   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:37.559724   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:37.596642   71929 cri.go:89] found id: ""
	I0717 01:59:37.596696   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.596707   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:37.596715   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:37.596770   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:37.637251   71929 cri.go:89] found id: ""
	I0717 01:59:37.637283   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.637312   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:37.637318   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:37.637372   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:37.672811   71929 cri.go:89] found id: ""
	I0717 01:59:37.672839   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.672847   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:37.672852   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:37.672909   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:37.706864   71929 cri.go:89] found id: ""
	I0717 01:59:37.706904   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.706916   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:37.706923   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:37.706975   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:37.747539   71929 cri.go:89] found id: ""
	I0717 01:59:37.747567   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.747576   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:37.747581   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:37.747630   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:37.785229   71929 cri.go:89] found id: ""
	I0717 01:59:37.785251   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.785260   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:37.785268   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:37.785333   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:37.840428   71929 cri.go:89] found id: ""
	I0717 01:59:37.840460   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.840471   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:37.840477   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:37.840533   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:37.876888   71929 cri.go:89] found id: ""
	I0717 01:59:37.876916   71929 logs.go:276] 0 containers: []
	W0717 01:59:37.876924   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:37.876932   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:37.876942   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:37.926161   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:37.926197   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:37.940857   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:37.940885   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:38.019210   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:38.019232   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:38.019245   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:38.112428   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:38.112471   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:40.657215   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:40.670824   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:40.670900   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:40.704008   71929 cri.go:89] found id: ""
	I0717 01:59:40.704030   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.704040   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:40.704048   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:40.704102   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:40.739544   71929 cri.go:89] found id: ""
	I0717 01:59:40.739576   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.739587   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:40.739595   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:40.739664   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:40.773132   71929 cri.go:89] found id: ""
	I0717 01:59:40.773159   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.773169   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:40.773177   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:40.773239   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:40.810162   71929 cri.go:89] found id: ""
	I0717 01:59:40.810183   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.810193   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:40.810200   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:40.810256   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:40.844797   71929 cri.go:89] found id: ""
	I0717 01:59:40.844829   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.844840   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:40.844847   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:40.844918   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:40.884444   71929 cri.go:89] found id: ""
	I0717 01:59:40.884468   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.884476   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:40.884482   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:40.884544   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:40.919413   71929 cri.go:89] found id: ""
	I0717 01:59:40.919437   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.919445   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:40.919451   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:40.919505   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:40.961870   71929 cri.go:89] found id: ""
	I0717 01:59:40.961894   71929 logs.go:276] 0 containers: []
	W0717 01:59:40.961902   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:40.961910   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:40.961921   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:41.010600   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:41.010638   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:41.025557   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:41.025589   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:41.100100   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:41.100123   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:41.100135   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:41.185809   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:41.185850   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:39.802297   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:41.802803   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:41.432998   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:43.433412   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:40.779796   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:42.781981   71522 pod_ready.go:102] pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:43.279014   71522 pod_ready.go:81] duration metric: took 4m0.006085275s for pod "metrics-server-569cc877fc-gcjkt" in "kube-system" namespace to be "Ready" ...
	E0717 01:59:43.279043   71522 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 01:59:43.279053   71522 pod_ready.go:38] duration metric: took 4m2.008175999s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 01:59:43.279073   71522 api_server.go:52] waiting for apiserver process to appear ...
	I0717 01:59:43.279105   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:43.279162   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:43.327674   71522 cri.go:89] found id: "3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:43.327725   71522 cri.go:89] found id: ""
	I0717 01:59:43.327734   71522 logs.go:276] 1 containers: [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82]
	I0717 01:59:43.327801   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.332247   71522 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:43.332303   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:43.371598   71522 cri.go:89] found id: "5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:43.371627   71522 cri.go:89] found id: ""
	I0717 01:59:43.371635   71522 logs.go:276] 1 containers: [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9]
	I0717 01:59:43.371683   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.377203   71522 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:43.377265   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:43.416351   71522 cri.go:89] found id: "92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:43.416374   71522 cri.go:89] found id: "4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:43.416380   71522 cri.go:89] found id: ""
	I0717 01:59:43.416389   71522 logs.go:276] 2 containers: [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013]
	I0717 01:59:43.416448   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.420909   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.425228   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:43.425278   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:43.472117   71522 cri.go:89] found id: "1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:43.472139   71522 cri.go:89] found id: ""
	I0717 01:59:43.472147   71522 logs.go:276] 1 containers: [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8]
	I0717 01:59:43.472194   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.476632   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:43.476698   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:43.517337   71522 cri.go:89] found id: "6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:43.517360   71522 cri.go:89] found id: ""
	I0717 01:59:43.517369   71522 logs.go:276] 1 containers: [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a]
	I0717 01:59:43.517430   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.522437   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:43.522519   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:43.564511   71522 cri.go:89] found id: "e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:43.564530   71522 cri.go:89] found id: ""
	I0717 01:59:43.564537   71522 logs.go:276] 1 containers: [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3]
	I0717 01:59:43.564595   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.570357   71522 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:43.570440   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:43.615389   71522 cri.go:89] found id: ""
	I0717 01:59:43.615418   71522 logs.go:276] 0 containers: []
	W0717 01:59:43.615427   71522 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:43.615433   71522 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:59:43.615543   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:59:43.652739   71522 cri.go:89] found id: "e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:43.652764   71522 cri.go:89] found id: "abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:43.652769   71522 cri.go:89] found id: ""
	I0717 01:59:43.652777   71522 logs.go:276] 2 containers: [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299]
	I0717 01:59:43.652835   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.657323   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:43.661682   71522 logs.go:123] Gathering logs for kube-controller-manager [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3] ...
	I0717 01:59:43.661702   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:43.714396   71522 logs.go:123] Gathering logs for container status ...
	I0717 01:59:43.714434   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:43.761072   71522 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:43.761110   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:43.825934   71522 logs.go:123] Gathering logs for coredns [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7] ...
	I0717 01:59:43.825963   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:43.871287   71522 logs.go:123] Gathering logs for coredns [4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013] ...
	I0717 01:59:43.871316   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:43.907488   71522 logs.go:123] Gathering logs for kube-scheduler [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8] ...
	I0717 01:59:43.907517   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:43.949876   71522 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:43.949903   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:59:44.093084   71522 logs.go:123] Gathering logs for kube-apiserver [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82] ...
	I0717 01:59:44.093116   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:44.153161   71522 logs.go:123] Gathering logs for kube-proxy [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a] ...
	I0717 01:59:44.153206   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:44.197219   71522 logs.go:123] Gathering logs for storage-provisioner [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77] ...
	I0717 01:59:44.197249   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:44.242441   71522 logs.go:123] Gathering logs for storage-provisioner [abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299] ...
	I0717 01:59:44.242478   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:44.288622   71522 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:44.288646   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:44.839680   71522 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:44.839712   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:44.854119   71522 logs.go:123] Gathering logs for etcd [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9] ...
	I0717 01:59:44.854145   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:43.725542   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:43.739304   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:43.739379   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:43.776754   71929 cri.go:89] found id: ""
	I0717 01:59:43.776783   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.776795   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:43.776802   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:43.776863   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:43.819729   71929 cri.go:89] found id: ""
	I0717 01:59:43.819756   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.819767   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:43.819774   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:43.819828   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:43.860283   71929 cri.go:89] found id: ""
	I0717 01:59:43.860311   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.860322   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:43.860329   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:43.860391   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:43.898684   71929 cri.go:89] found id: ""
	I0717 01:59:43.898712   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.898722   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:43.898729   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:43.898788   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:43.942996   71929 cri.go:89] found id: ""
	I0717 01:59:43.943019   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.943026   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:43.943031   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:43.943077   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:43.981799   71929 cri.go:89] found id: ""
	I0717 01:59:43.981828   71929 logs.go:276] 0 containers: []
	W0717 01:59:43.981839   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:43.981846   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:43.981903   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:44.018222   71929 cri.go:89] found id: ""
	I0717 01:59:44.018252   71929 logs.go:276] 0 containers: []
	W0717 01:59:44.018262   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:44.018267   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:44.018326   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:44.056264   71929 cri.go:89] found id: ""
	I0717 01:59:44.056293   71929 logs.go:276] 0 containers: []
	W0717 01:59:44.056304   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:44.056315   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:44.056334   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:44.172061   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:44.172108   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:44.219597   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:44.219627   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:44.272299   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:44.272334   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:44.287811   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:44.287848   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:44.379183   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:46.879529   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:46.893142   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:46.893207   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:46.929073   71929 cri.go:89] found id: ""
	I0717 01:59:46.929101   71929 logs.go:276] 0 containers: []
	W0717 01:59:46.929113   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:46.929121   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:46.929173   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:46.963697   71929 cri.go:89] found id: ""
	I0717 01:59:46.963725   71929 logs.go:276] 0 containers: []
	W0717 01:59:46.963733   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:46.963739   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:46.963798   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:47.000697   71929 cri.go:89] found id: ""
	I0717 01:59:47.000730   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.000747   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:47.000752   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:47.000804   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:47.037270   71929 cri.go:89] found id: ""
	I0717 01:59:47.037304   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.037316   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:47.037323   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:47.037382   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:47.072210   71929 cri.go:89] found id: ""
	I0717 01:59:47.072238   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.072249   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:47.072256   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:47.072321   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:47.108404   71929 cri.go:89] found id: ""
	I0717 01:59:47.108432   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.108443   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:47.108451   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:47.108535   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:47.146122   71929 cri.go:89] found id: ""
	I0717 01:59:47.146151   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.146162   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:47.146169   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:47.146225   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:47.187418   71929 cri.go:89] found id: ""
	I0717 01:59:47.187446   71929 logs.go:276] 0 containers: []
	W0717 01:59:47.187455   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:47.187466   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:47.187481   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:47.201023   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:47.201053   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:47.269851   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:47.269878   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:47.269894   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:47.356417   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:47.356456   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:43.803326   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:46.302939   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:45.433688   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:47.933271   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:49.934222   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:47.403005   71522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:47.420984   71522 api_server.go:72] duration metric: took 4m13.369710312s to wait for apiserver process to appear ...
	I0717 01:59:47.421011   71522 api_server.go:88] waiting for apiserver healthz status ...
	I0717 01:59:47.421065   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:47.421128   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:47.465800   71522 cri.go:89] found id: "3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:47.465830   71522 cri.go:89] found id: ""
	I0717 01:59:47.465838   71522 logs.go:276] 1 containers: [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82]
	I0717 01:59:47.465890   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.470561   71522 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:47.470628   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:47.513302   71522 cri.go:89] found id: "5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:47.513321   71522 cri.go:89] found id: ""
	I0717 01:59:47.513328   71522 logs.go:276] 1 containers: [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9]
	I0717 01:59:47.513373   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.517668   71522 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:47.517720   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:47.563466   71522 cri.go:89] found id: "92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:47.563491   71522 cri.go:89] found id: "4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:47.563495   71522 cri.go:89] found id: ""
	I0717 01:59:47.563502   71522 logs.go:276] 2 containers: [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013]
	I0717 01:59:47.563563   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.568058   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.572381   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:47.572432   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:47.618919   71522 cri.go:89] found id: "1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:47.618944   71522 cri.go:89] found id: ""
	I0717 01:59:47.618953   71522 logs.go:276] 1 containers: [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8]
	I0717 01:59:47.619014   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.623475   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:47.623525   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:47.662294   71522 cri.go:89] found id: "6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:47.662321   71522 cri.go:89] found id: ""
	I0717 01:59:47.662329   71522 logs.go:276] 1 containers: [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a]
	I0717 01:59:47.662384   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.666740   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:47.666806   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:47.708962   71522 cri.go:89] found id: "e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:47.708990   71522 cri.go:89] found id: ""
	I0717 01:59:47.708999   71522 logs.go:276] 1 containers: [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3]
	I0717 01:59:47.709058   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.713551   71522 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:47.713628   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:47.750766   71522 cri.go:89] found id: ""
	I0717 01:59:47.750797   71522 logs.go:276] 0 containers: []
	W0717 01:59:47.750807   71522 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:47.750814   71522 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:59:47.750878   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:59:47.786664   71522 cri.go:89] found id: "e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:47.786687   71522 cri.go:89] found id: "abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:47.786692   71522 cri.go:89] found id: ""
	I0717 01:59:47.786699   71522 logs.go:276] 2 containers: [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299]
	I0717 01:59:47.786761   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.791460   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:47.795553   71522 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:47.795576   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:48.298229   71522 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:48.298271   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:48.313542   71522 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:48.313573   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:59:48.429625   71522 logs.go:123] Gathering logs for coredns [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7] ...
	I0717 01:59:48.429663   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:48.475651   71522 logs.go:123] Gathering logs for kube-proxy [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a] ...
	I0717 01:59:48.475677   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:48.514075   71522 logs.go:123] Gathering logs for coredns [4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013] ...
	I0717 01:59:48.514101   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:48.550152   71522 logs.go:123] Gathering logs for kube-scheduler [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8] ...
	I0717 01:59:48.550182   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:48.592743   71522 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:48.592771   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:48.652433   71522 logs.go:123] Gathering logs for storage-provisioner [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77] ...
	I0717 01:59:48.652464   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:48.699763   71522 logs.go:123] Gathering logs for storage-provisioner [abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299] ...
	I0717 01:59:48.699796   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:48.737467   71522 logs.go:123] Gathering logs for kube-apiserver [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82] ...
	I0717 01:59:48.737504   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:48.788389   71522 logs.go:123] Gathering logs for etcd [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9] ...
	I0717 01:59:48.788425   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:48.842323   71522 logs.go:123] Gathering logs for kube-controller-manager [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3] ...
	I0717 01:59:48.842357   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:48.900716   71522 logs.go:123] Gathering logs for container status ...
	I0717 01:59:48.900746   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:47.397763   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:47.397791   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:49.954670   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:49.968840   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:49.968898   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:50.003598   71929 cri.go:89] found id: ""
	I0717 01:59:50.003635   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.003646   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:50.003654   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:50.003714   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:50.040494   71929 cri.go:89] found id: ""
	I0717 01:59:50.040546   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.040558   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:50.040564   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:50.040624   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:50.074921   71929 cri.go:89] found id: ""
	I0717 01:59:50.074950   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.074959   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:50.074965   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:50.075015   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:50.117002   71929 cri.go:89] found id: ""
	I0717 01:59:50.117030   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.117041   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:50.117049   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:50.117106   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:50.163026   71929 cri.go:89] found id: ""
	I0717 01:59:50.163052   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.163063   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:50.163071   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:50.163129   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:50.197709   71929 cri.go:89] found id: ""
	I0717 01:59:50.197738   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.197749   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:50.197757   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:50.197838   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:50.237776   71929 cri.go:89] found id: ""
	I0717 01:59:50.237808   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.237819   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:50.237827   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:50.237886   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:50.275147   71929 cri.go:89] found id: ""
	I0717 01:59:50.275179   71929 logs.go:276] 0 containers: []
	W0717 01:59:50.275189   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:50.275201   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:50.275215   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:50.329025   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:50.329057   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:50.342745   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:50.342777   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:50.417792   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:50.417817   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:50.417829   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:50.495288   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:50.495322   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:48.306102   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:50.804255   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:52.433248   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:54.931595   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:51.447495   71522 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8444/healthz ...
	I0717 01:59:51.452186   71522 api_server.go:279] https://192.168.39.170:8444/healthz returned 200:
	ok
	I0717 01:59:51.453112   71522 api_server.go:141] control plane version: v1.30.2
	I0717 01:59:51.453137   71522 api_server.go:131] duration metric: took 4.032118004s to wait for apiserver health ...
	I0717 01:59:51.453146   71522 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 01:59:51.453170   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:51.453215   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:51.491272   71522 cri.go:89] found id: "3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:51.491297   71522 cri.go:89] found id: ""
	I0717 01:59:51.491305   71522 logs.go:276] 1 containers: [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82]
	I0717 01:59:51.491365   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.495747   71522 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:51.495795   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:51.538807   71522 cri.go:89] found id: "5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:51.538830   71522 cri.go:89] found id: ""
	I0717 01:59:51.538838   71522 logs.go:276] 1 containers: [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9]
	I0717 01:59:51.538891   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.543454   71522 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:51.543512   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:51.586258   71522 cri.go:89] found id: "92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:51.586292   71522 cri.go:89] found id: "4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:51.586296   71522 cri.go:89] found id: ""
	I0717 01:59:51.586306   71522 logs.go:276] 2 containers: [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013]
	I0717 01:59:51.586360   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.590446   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.594867   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:51.594936   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:51.636079   71522 cri.go:89] found id: "1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:51.636101   71522 cri.go:89] found id: ""
	I0717 01:59:51.636108   71522 logs.go:276] 1 containers: [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8]
	I0717 01:59:51.636159   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.640225   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:51.640283   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:51.676395   71522 cri.go:89] found id: "6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:51.676422   71522 cri.go:89] found id: ""
	I0717 01:59:51.676432   71522 logs.go:276] 1 containers: [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a]
	I0717 01:59:51.676496   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.680974   71522 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:51.681043   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:51.720449   71522 cri.go:89] found id: "e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:51.720476   71522 cri.go:89] found id: ""
	I0717 01:59:51.720483   71522 logs.go:276] 1 containers: [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3]
	I0717 01:59:51.720527   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.724704   71522 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:51.724779   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:51.762892   71522 cri.go:89] found id: ""
	I0717 01:59:51.762923   71522 logs.go:276] 0 containers: []
	W0717 01:59:51.762932   71522 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:51.762939   71522 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 01:59:51.762986   71522 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 01:59:51.803675   71522 cri.go:89] found id: "e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:51.803702   71522 cri.go:89] found id: "abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:51.803707   71522 cri.go:89] found id: ""
	I0717 01:59:51.803714   71522 logs.go:276] 2 containers: [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299]
	I0717 01:59:51.803807   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.808188   71522 ssh_runner.go:195] Run: which crictl
	I0717 01:59:51.812046   71522 logs.go:123] Gathering logs for container status ...
	I0717 01:59:51.812065   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:51.855800   71522 logs.go:123] Gathering logs for kube-controller-manager [e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3] ...
	I0717 01:59:51.855832   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e6b826ba735619507b3e0446767f71099e0c247dc2adbfe8cec4ac30f87fbdf3"
	I0717 01:59:51.917804   71522 logs.go:123] Gathering logs for storage-provisioner [e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77] ...
	I0717 01:59:51.917833   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7c80efcec351174f9cd80e5186587d85f73fcb542accedefacfbe62a08cee77"
	I0717 01:59:51.958797   71522 logs.go:123] Gathering logs for storage-provisioner [abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299] ...
	I0717 01:59:51.958827   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abd3156233dd7990a5f8634a190216cd56fbd3f5347f24dd3617e146fe11b299"
	I0717 01:59:51.997003   71522 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:51.997034   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 01:59:52.118345   71522 logs.go:123] Gathering logs for kube-apiserver [3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82] ...
	I0717 01:59:52.118381   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d43ec5825cbc54240432a393a8be8eb7700eab1d9234831b702c1e953b8ba82"
	I0717 01:59:52.174308   71522 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:52.174344   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:52.578823   71522 logs.go:123] Gathering logs for coredns [92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7] ...
	I0717 01:59:52.578857   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92644b17d028a0082ee960a99f529930ab9eba5a07f776f4151349342e845ba7"
	I0717 01:59:52.619962   71522 logs.go:123] Gathering logs for coredns [4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013] ...
	I0717 01:59:52.619994   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d44ae996265f0f6d3303343658b4947d31b666e7bb77f8d4d1f877ea6a59013"
	I0717 01:59:52.667564   71522 logs.go:123] Gathering logs for kube-proxy [6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a] ...
	I0717 01:59:52.667593   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6945ab02cbf2a28fde618938e77d2d788b0f9420e62a823d54fe9773cef35e5a"
	I0717 01:59:52.714716   71522 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:52.714747   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:52.774123   71522 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:52.774171   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:52.788399   71522 logs.go:123] Gathering logs for etcd [5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9] ...
	I0717 01:59:52.788432   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5430044adf294121ada5eb44c1fe71b4ea5344be989efe07077d26a0afde6fb9"
	I0717 01:59:52.839796   71522 logs.go:123] Gathering logs for kube-scheduler [1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8] ...
	I0717 01:59:52.839828   71522 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a749b1143a7ae2544b94610c98b302a76fe3a928e3ee812091a647c3450b0f8"
	I0717 01:59:55.388404   71522 system_pods.go:59] 9 kube-system pods found
	I0717 01:59:55.388441   71522 system_pods.go:61] "coredns-7db6d8ff4d-9w26c" [530f4d52-5fdc-47c4-8919-44430bf71e05] Running
	I0717 01:59:55.388448   71522 system_pods.go:61] "coredns-7db6d8ff4d-js7sn" [fe3951c5-d98d-4221-b71c-fc4f548b31d8] Running
	I0717 01:59:55.388453   71522 system_pods.go:61] "etcd-default-k8s-diff-port-738184" [a08737dd-a140-4c63-bf0f-9d3527d49de0] Running
	I0717 01:59:55.388458   71522 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-738184" [b24a4ca2-48f9-4603-b6e4-e3fb1ca58e40] Running
	I0717 01:59:55.388465   71522 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-738184" [dd62e27b-a3f1-4da6-b968-6be40743a8fd] Running
	I0717 01:59:55.388469   71522 system_pods.go:61] "kube-proxy-c4n94" [97eee4e8-4f36-412f-9064-57515ab6e932] Running
	I0717 01:59:55.388473   71522 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-738184" [eb87eec9-7533-4704-bd5a-7075d9d8c2f5] Running
	I0717 01:59:55.388484   71522 system_pods.go:61] "metrics-server-569cc877fc-gcjkt" [1859140e-a901-43c2-8c04-b4f8eb63e774] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:59:55.388491   71522 system_pods.go:61] "storage-provisioner" [b36904ec-ef3f-4aee-9276-fe1285e10876] Running
	I0717 01:59:55.388499   71522 system_pods.go:74] duration metric: took 3.93534618s to wait for pod list to return data ...
	I0717 01:59:55.388509   71522 default_sa.go:34] waiting for default service account to be created ...
	I0717 01:59:55.390798   71522 default_sa.go:45] found service account: "default"
	I0717 01:59:55.390829   71522 default_sa.go:55] duration metric: took 2.313714ms for default service account to be created ...
	I0717 01:59:55.390840   71522 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 01:59:55.399028   71522 system_pods.go:86] 9 kube-system pods found
	I0717 01:59:55.399049   71522 system_pods.go:89] "coredns-7db6d8ff4d-9w26c" [530f4d52-5fdc-47c4-8919-44430bf71e05] Running
	I0717 01:59:55.399054   71522 system_pods.go:89] "coredns-7db6d8ff4d-js7sn" [fe3951c5-d98d-4221-b71c-fc4f548b31d8] Running
	I0717 01:59:55.399059   71522 system_pods.go:89] "etcd-default-k8s-diff-port-738184" [a08737dd-a140-4c63-bf0f-9d3527d49de0] Running
	I0717 01:59:55.399063   71522 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-738184" [b24a4ca2-48f9-4603-b6e4-e3fb1ca58e40] Running
	I0717 01:59:55.399068   71522 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-738184" [dd62e27b-a3f1-4da6-b968-6be40743a8fd] Running
	I0717 01:59:55.399072   71522 system_pods.go:89] "kube-proxy-c4n94" [97eee4e8-4f36-412f-9064-57515ab6e932] Running
	I0717 01:59:55.399076   71522 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-738184" [eb87eec9-7533-4704-bd5a-7075d9d8c2f5] Running
	I0717 01:59:55.399083   71522 system_pods.go:89] "metrics-server-569cc877fc-gcjkt" [1859140e-a901-43c2-8c04-b4f8eb63e774] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 01:59:55.399090   71522 system_pods.go:89] "storage-provisioner" [b36904ec-ef3f-4aee-9276-fe1285e10876] Running
	I0717 01:59:55.399099   71522 system_pods.go:126] duration metric: took 8.253468ms to wait for k8s-apps to be running ...
	I0717 01:59:55.399108   71522 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 01:59:55.399152   71522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:59:55.417081   71522 system_svc.go:56] duration metric: took 17.965716ms WaitForService to wait for kubelet
	I0717 01:59:55.417109   71522 kubeadm.go:582] duration metric: took 4m21.36584166s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 01:59:55.417130   71522 node_conditions.go:102] verifying NodePressure condition ...
	I0717 01:59:55.420078   71522 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 01:59:55.420099   71522 node_conditions.go:123] node cpu capacity is 2
	I0717 01:59:55.420109   71522 node_conditions.go:105] duration metric: took 2.974324ms to run NodePressure ...
	I0717 01:59:55.420119   71522 start.go:241] waiting for startup goroutines ...
	I0717 01:59:55.420126   71522 start.go:246] waiting for cluster config update ...
	I0717 01:59:55.420136   71522 start.go:255] writing updated cluster config ...
	I0717 01:59:55.420406   71522 ssh_runner.go:195] Run: rm -f paused
	I0717 01:59:55.470793   71522 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 01:59:55.472960   71522 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-738184" cluster and "default" namespace by default
	I0717 01:59:53.036151   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:53.049820   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:53.049879   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:53.087144   71929 cri.go:89] found id: ""
	I0717 01:59:53.087175   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.087189   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:53.087195   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:53.087253   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:53.123135   71929 cri.go:89] found id: ""
	I0717 01:59:53.123164   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.123175   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:53.123191   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:53.123254   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:53.157887   71929 cri.go:89] found id: ""
	I0717 01:59:53.157912   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.157922   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:53.157927   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:53.158004   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:53.201002   71929 cri.go:89] found id: ""
	I0717 01:59:53.201033   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.201045   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:53.201054   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:53.201115   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:53.236159   71929 cri.go:89] found id: ""
	I0717 01:59:53.236188   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.236198   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:53.236204   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:53.236258   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:53.277585   71929 cri.go:89] found id: ""
	I0717 01:59:53.277616   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.277627   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:53.277634   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:53.277694   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:53.322722   71929 cri.go:89] found id: ""
	I0717 01:59:53.322747   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.322758   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:53.322765   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:53.322824   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:53.364112   71929 cri.go:89] found id: ""
	I0717 01:59:53.364138   71929 logs.go:276] 0 containers: []
	W0717 01:59:53.364149   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:53.364159   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:53.364172   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:53.418701   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:53.418739   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:53.435004   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:53.435030   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:53.511254   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:53.511274   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:53.511287   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:53.587967   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:53.588003   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:56.130773   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:56.144742   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:56.144811   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:56.180267   71929 cri.go:89] found id: ""
	I0717 01:59:56.180295   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.180306   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:56.180313   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:56.180373   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:56.217223   71929 cri.go:89] found id: ""
	I0717 01:59:56.217252   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.217263   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:56.217269   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:56.217334   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:56.251714   71929 cri.go:89] found id: ""
	I0717 01:59:56.251738   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.251745   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:56.251752   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:56.251805   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:56.292557   71929 cri.go:89] found id: ""
	I0717 01:59:56.292589   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.292597   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:56.292603   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:56.292653   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:56.332463   71929 cri.go:89] found id: ""
	I0717 01:59:56.332491   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.332501   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:56.332508   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:56.332562   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:56.372155   71929 cri.go:89] found id: ""
	I0717 01:59:56.372180   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.372189   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:56.372197   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:56.372255   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:56.415768   71929 cri.go:89] found id: ""
	I0717 01:59:56.415794   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.415806   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:56.415813   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:56.415871   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:56.456920   71929 cri.go:89] found id: ""
	I0717 01:59:56.456951   71929 logs.go:276] 0 containers: []
	W0717 01:59:56.456959   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:56.456968   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:56.456978   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:56.508932   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:56.508965   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:56.522496   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:56.522531   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:56.596839   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 01:59:56.596857   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:56.596870   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:56.679237   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:56.679271   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:53.303565   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:55.803725   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:57.806129   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:56.933245   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:59.432536   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 01:59:59.220084   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:59:59.233108   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 01:59:59.233182   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 01:59:59.266796   71929 cri.go:89] found id: ""
	I0717 01:59:59.266827   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.266838   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 01:59:59.266845   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 01:59:59.266909   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 01:59:59.297992   71929 cri.go:89] found id: ""
	I0717 01:59:59.298017   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.298026   71929 logs.go:278] No container was found matching "etcd"
	I0717 01:59:59.298032   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 01:59:59.298087   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 01:59:59.331953   71929 cri.go:89] found id: ""
	I0717 01:59:59.331982   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.331993   71929 logs.go:278] No container was found matching "coredns"
	I0717 01:59:59.331999   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 01:59:59.332069   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 01:59:59.368912   71929 cri.go:89] found id: ""
	I0717 01:59:59.368939   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.368948   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 01:59:59.368954   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 01:59:59.369002   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 01:59:59.402886   71929 cri.go:89] found id: ""
	I0717 01:59:59.402911   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.402920   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 01:59:59.402926   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 01:59:59.402982   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 01:59:59.441227   71929 cri.go:89] found id: ""
	I0717 01:59:59.441249   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.441257   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 01:59:59.441263   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 01:59:59.441322   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 01:59:59.479154   71929 cri.go:89] found id: ""
	I0717 01:59:59.479191   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.479213   71929 logs.go:278] No container was found matching "kindnet"
	I0717 01:59:59.479222   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 01:59:59.479286   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 01:59:59.516259   71929 cri.go:89] found id: ""
	I0717 01:59:59.516299   71929 logs.go:276] 0 containers: []
	W0717 01:59:59.516309   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 01:59:59.516319   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 01:59:59.516332   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 01:59:59.596352   71929 logs.go:123] Gathering logs for container status ...
	I0717 01:59:59.596385   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 01:59:59.639712   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 01:59:59.639744   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 01:59:59.691399   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 01:59:59.691444   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 01:59:59.706618   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 01:59:59.706648   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 01:59:59.778875   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:00:02.279246   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:02.293212   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:02.293284   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:02.330759   71929 cri.go:89] found id: ""
	I0717 02:00:02.330786   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.330795   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 02:00:02.330800   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:02.330848   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:02.366257   71929 cri.go:89] found id: ""
	I0717 02:00:02.366287   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.366298   71929 logs.go:278] No container was found matching "etcd"
	I0717 02:00:02.366305   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:02.366368   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:00.303868   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:02.311063   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:01.432671   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:03.433059   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:02.404321   71929 cri.go:89] found id: ""
	I0717 02:00:02.404348   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.404358   71929 logs.go:278] No container was found matching "coredns"
	I0717 02:00:02.404364   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:02.404432   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:02.444297   71929 cri.go:89] found id: ""
	I0717 02:00:02.444326   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.444342   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 02:00:02.444349   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:02.444406   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:02.478433   71929 cri.go:89] found id: ""
	I0717 02:00:02.478466   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.478477   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 02:00:02.478483   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:02.478530   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:02.515519   71929 cri.go:89] found id: ""
	I0717 02:00:02.515551   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.515560   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 02:00:02.515566   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:02.515618   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:02.551006   71929 cri.go:89] found id: ""
	I0717 02:00:02.551030   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.551038   71929 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:02.551044   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 02:00:02.551110   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 02:00:02.588312   71929 cri.go:89] found id: ""
	I0717 02:00:02.588345   71929 logs.go:276] 0 containers: []
	W0717 02:00:02.588356   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 02:00:02.588367   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:02.588381   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:02.641900   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:02.641932   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:02.656851   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:02.656896   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 02:00:02.728286   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:00:02.728315   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:02.728327   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:02.806807   71929 logs.go:123] Gathering logs for container status ...
	I0717 02:00:02.806847   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:05.355196   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:05.369148   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:05.369231   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:05.405012   71929 cri.go:89] found id: ""
	I0717 02:00:05.405045   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.405057   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 02:00:05.405068   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:05.405132   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:05.450524   71929 cri.go:89] found id: ""
	I0717 02:00:05.450564   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.450575   71929 logs.go:278] No container was found matching "etcd"
	I0717 02:00:05.450582   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:05.450637   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:05.487503   71929 cri.go:89] found id: ""
	I0717 02:00:05.487533   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.487544   71929 logs.go:278] No container was found matching "coredns"
	I0717 02:00:05.487553   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:05.487634   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:05.522607   71929 cri.go:89] found id: ""
	I0717 02:00:05.522635   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.522650   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 02:00:05.522656   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:05.522703   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:05.558091   71929 cri.go:89] found id: ""
	I0717 02:00:05.558120   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.558131   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 02:00:05.558138   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:05.558192   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:05.594540   71929 cri.go:89] found id: ""
	I0717 02:00:05.594587   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.594598   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 02:00:05.594605   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:05.594668   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:05.631783   71929 cri.go:89] found id: ""
	I0717 02:00:05.631807   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.631818   71929 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:05.631825   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 02:00:05.631886   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 02:00:05.667494   71929 cri.go:89] found id: ""
	I0717 02:00:05.667523   71929 logs.go:276] 0 containers: []
	W0717 02:00:05.667532   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 02:00:05.667543   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:05.667559   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:05.681348   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:05.681373   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 02:00:05.747143   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:00:05.747165   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:05.747176   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:05.829639   71929 logs.go:123] Gathering logs for container status ...
	I0717 02:00:05.829674   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:05.881984   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:05.882013   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:04.803913   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:07.302068   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:05.434869   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:07.435174   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:09.931879   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:08.435873   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:08.449840   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:08.449901   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:08.489613   71929 cri.go:89] found id: ""
	I0717 02:00:08.489663   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.489675   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 02:00:08.489684   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:08.489751   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:08.526604   71929 cri.go:89] found id: ""
	I0717 02:00:08.526635   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.526645   71929 logs.go:278] No container was found matching "etcd"
	I0717 02:00:08.526660   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:08.526717   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:08.563202   71929 cri.go:89] found id: ""
	I0717 02:00:08.563227   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.563234   71929 logs.go:278] No container was found matching "coredns"
	I0717 02:00:08.563240   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:08.563299   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:08.598336   71929 cri.go:89] found id: ""
	I0717 02:00:08.598365   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.598376   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 02:00:08.598383   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:08.598441   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:08.632626   71929 cri.go:89] found id: ""
	I0717 02:00:08.632660   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.632671   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 02:00:08.632678   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:08.632739   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:08.667951   71929 cri.go:89] found id: ""
	I0717 02:00:08.667977   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.667993   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 02:00:08.668001   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:08.668059   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:08.702106   71929 cri.go:89] found id: ""
	I0717 02:00:08.702135   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.702146   71929 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:08.702153   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 02:00:08.702212   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 02:00:08.733469   71929 cri.go:89] found id: ""
	I0717 02:00:08.733491   71929 logs.go:276] 0 containers: []
	W0717 02:00:08.733499   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 02:00:08.733508   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:08.733518   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:08.787930   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:08.787966   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:08.802761   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:08.802795   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 02:00:08.878115   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:00:08.878138   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:08.878149   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:08.962509   71929 logs.go:123] Gathering logs for container status ...
	I0717 02:00:08.962543   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:11.503151   71929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:11.518019   71929 kubeadm.go:597] duration metric: took 4m3.576613508s to restartPrimaryControlPlane
	W0717 02:00:11.518087   71929 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 02:00:11.518113   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 02:00:11.970514   71929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:00:11.986794   71929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 02:00:11.997382   71929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 02:00:12.006789   71929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 02:00:12.006816   71929 kubeadm.go:157] found existing configuration files:
	
	I0717 02:00:12.006867   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 02:00:12.015864   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 02:00:12.015921   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 02:00:12.025239   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 02:00:12.034315   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 02:00:12.034373   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 02:00:12.043533   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 02:00:12.052344   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 02:00:12.052393   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 02:00:12.061290   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 02:00:12.070311   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 02:00:12.070375   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 02:00:12.080404   71929 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 02:00:12.318084   71929 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 02:00:09.303502   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:11.803893   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:11.933539   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:14.433949   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:13.804007   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:16.303079   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:16.932416   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:18.932721   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:18.303306   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:20.306811   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:22.803374   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:21.433157   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:23.433283   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:24.805822   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:27.301985   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:25.931740   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:27.934346   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:29.302199   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:31.302607   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:30.433033   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:32.434743   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:34.933166   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:33.802140   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:35.803338   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:36.933672   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:39.432879   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:38.302050   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:40.803322   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:41.932491   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:44.436201   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:43.302028   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:45.801979   71146 pod_ready.go:102] pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:47.303644   71146 pod_ready.go:81] duration metric: took 4m0.007411484s for pod "metrics-server-569cc877fc-rhp7b" in "kube-system" namespace to be "Ready" ...
	E0717 02:00:47.303668   71146 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 02:00:47.303678   71146 pod_ready.go:38] duration metric: took 4m7.053721739s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 02:00:47.303694   71146 api_server.go:52] waiting for apiserver process to appear ...
	I0717 02:00:47.303725   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:47.303791   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:47.365247   71146 cri.go:89] found id: "ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:47.365272   71146 cri.go:89] found id: ""
	I0717 02:00:47.365279   71146 logs.go:276] 1 containers: [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509]
	I0717 02:00:47.365339   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.370201   71146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:47.370268   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:47.416627   71146 cri.go:89] found id: "b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:47.416652   71146 cri.go:89] found id: ""
	I0717 02:00:47.416663   71146 logs.go:276] 1 containers: [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787]
	I0717 02:00:47.416731   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.421295   71146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:47.421454   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:47.463532   71146 cri.go:89] found id: "110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:47.463556   71146 cri.go:89] found id: ""
	I0717 02:00:47.463564   71146 logs.go:276] 1 containers: [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783]
	I0717 02:00:47.463626   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.468291   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:47.468414   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:47.504328   71146 cri.go:89] found id: "211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:47.504354   71146 cri.go:89] found id: ""
	I0717 02:00:47.504362   71146 logs.go:276] 1 containers: [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745]
	I0717 02:00:47.504445   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.508821   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:47.508880   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:47.550970   71146 cri.go:89] found id: "0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:47.550996   71146 cri.go:89] found id: ""
	I0717 02:00:47.551006   71146 logs.go:276] 1 containers: [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de]
	I0717 02:00:47.551069   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.555974   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:47.556045   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:47.609884   71146 cri.go:89] found id: "5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:47.609903   71146 cri.go:89] found id: ""
	I0717 02:00:47.609910   71146 logs.go:276] 1 containers: [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060]
	I0717 02:00:47.609968   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.615544   71146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:47.615603   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:47.653071   71146 cri.go:89] found id: ""
	I0717 02:00:47.653099   71146 logs.go:276] 0 containers: []
	W0717 02:00:47.653110   71146 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:47.653117   71146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 02:00:47.653163   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 02:00:47.690462   71146 cri.go:89] found id: "7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:47.690485   71146 cri.go:89] found id: "51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:47.690490   71146 cri.go:89] found id: ""
	I0717 02:00:47.690498   71146 logs.go:276] 2 containers: [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20]
	I0717 02:00:47.690545   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.695196   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:47.699099   71146 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:47.699117   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 02:00:47.816750   71146 logs.go:123] Gathering logs for kube-apiserver [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509] ...
	I0717 02:00:47.816782   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:46.932764   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:49.432402   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:47.869306   71146 logs.go:123] Gathering logs for coredns [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783] ...
	I0717 02:00:47.869341   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:47.906717   71146 logs.go:123] Gathering logs for kube-proxy [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de] ...
	I0717 02:00:47.906755   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:47.944125   71146 logs.go:123] Gathering logs for storage-provisioner [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465] ...
	I0717 02:00:47.944152   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:47.978632   71146 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:47.978664   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:48.482628   71146 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:48.482660   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:48.538252   71146 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:48.538300   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:48.553011   71146 logs.go:123] Gathering logs for kube-controller-manager [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060] ...
	I0717 02:00:48.553038   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:48.607632   71146 logs.go:123] Gathering logs for storage-provisioner [51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20] ...
	I0717 02:00:48.607666   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:48.646122   71146 logs.go:123] Gathering logs for container status ...
	I0717 02:00:48.646151   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:48.689948   71146 logs.go:123] Gathering logs for etcd [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787] ...
	I0717 02:00:48.689980   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:48.738285   71146 logs.go:123] Gathering logs for kube-scheduler [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745] ...
	I0717 02:00:48.738334   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
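	(The log-gathering pass above can be repeated by hand against the same guest. The commands below are the ones minikube itself runs, visible verbatim in the lines above; only the container ID is left as a placeholder, since IDs differ per run. This is a sketch that assumes shell access to the node, e.g. via "minikube ssh".)

	  # list all kube-apiserver containers known to CRI-O (IDs only)
	  sudo crictl ps -a --quiet --name=kube-apiserver
	  # tail the last 400 log lines of one container (ID taken from the command above)
	  sudo /usr/bin/crictl logs --tail 400 <container-id>
	  # kubelet and CRI-O service logs, last 400 lines each
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400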
	I0717 02:00:51.290996   71146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:00:51.308850   71146 api_server.go:72] duration metric: took 4m18.27461618s to wait for apiserver process to appear ...
	I0717 02:00:51.308873   71146 api_server.go:88] waiting for apiserver healthz status ...
	I0717 02:00:51.308907   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:51.308967   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:51.350827   71146 cri.go:89] found id: "ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:51.350857   71146 cri.go:89] found id: ""
	I0717 02:00:51.350866   71146 logs.go:276] 1 containers: [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509]
	I0717 02:00:51.350930   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.355308   71146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:51.355369   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:51.393804   71146 cri.go:89] found id: "b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:51.393831   71146 cri.go:89] found id: ""
	I0717 02:00:51.393840   71146 logs.go:276] 1 containers: [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787]
	I0717 02:00:51.393897   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.398144   71146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:51.398201   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:51.437974   71146 cri.go:89] found id: "110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:51.437991   71146 cri.go:89] found id: ""
	I0717 02:00:51.437998   71146 logs.go:276] 1 containers: [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783]
	I0717 02:00:51.438044   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.442318   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:51.442382   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:51.478462   71146 cri.go:89] found id: "211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:51.478481   71146 cri.go:89] found id: ""
	I0717 02:00:51.478489   71146 logs.go:276] 1 containers: [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745]
	I0717 02:00:51.478534   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.482624   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:51.482672   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:51.526089   71146 cri.go:89] found id: "0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:51.526114   71146 cri.go:89] found id: ""
	I0717 02:00:51.526123   71146 logs.go:276] 1 containers: [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de]
	I0717 02:00:51.526170   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.530855   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:51.530923   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:51.568875   71146 cri.go:89] found id: "5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:51.568899   71146 cri.go:89] found id: ""
	I0717 02:00:51.568908   71146 logs.go:276] 1 containers: [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060]
	I0717 02:00:51.568972   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.573300   71146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:51.573369   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:51.615775   71146 cri.go:89] found id: ""
	I0717 02:00:51.615800   71146 logs.go:276] 0 containers: []
	W0717 02:00:51.615809   71146 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:51.615815   71146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 02:00:51.615876   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 02:00:51.658100   71146 cri.go:89] found id: "7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:51.658124   71146 cri.go:89] found id: "51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:51.658130   71146 cri.go:89] found id: ""
	I0717 02:00:51.658138   71146 logs.go:276] 2 containers: [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20]
	I0717 02:00:51.658183   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.663030   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:51.667348   71146 logs.go:123] Gathering logs for kube-apiserver [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509] ...
	I0717 02:00:51.667372   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:51.715502   71146 logs.go:123] Gathering logs for etcd [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787] ...
	I0717 02:00:51.715534   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:51.763431   71146 logs.go:123] Gathering logs for kube-proxy [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de] ...
	I0717 02:00:51.763457   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:51.805523   71146 logs.go:123] Gathering logs for kube-controller-manager [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060] ...
	I0717 02:00:51.805553   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:51.859660   71146 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:51.859692   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 02:00:51.963831   71146 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:51.963858   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:51.978152   71146 logs.go:123] Gathering logs for coredns [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783] ...
	I0717 02:00:51.978179   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:52.023897   71146 logs.go:123] Gathering logs for kube-scheduler [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745] ...
	I0717 02:00:52.023926   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:52.062193   71146 logs.go:123] Gathering logs for storage-provisioner [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465] ...
	I0717 02:00:52.062218   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:52.098487   71146 logs.go:123] Gathering logs for storage-provisioner [51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20] ...
	I0717 02:00:52.098518   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:52.135733   71146 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:52.135758   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:52.562245   71146 logs.go:123] Gathering logs for container status ...
	I0717 02:00:52.562279   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:52.624258   71146 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:52.624288   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:51.434060   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:53.933730   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:55.176270   71146 api_server.go:253] Checking apiserver healthz at https://192.168.72.225:8443/healthz ...
	I0717 02:00:55.180760   71146 api_server.go:279] https://192.168.72.225:8443/healthz returned 200:
	ok
	I0717 02:00:55.181928   71146 api_server.go:141] control plane version: v1.30.2
	I0717 02:00:55.181947   71146 api_server.go:131] duration metric: took 3.873068874s to wait for apiserver health ...
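	(The healthz probe above hits the apiserver endpoint directly and got "200: ok". A rough manual equivalent, as a sketch only: the IP and port come from this run, and -k skips certificate verification, whereas minikube's own check trusts the cluster CA instead.)

	  curl -k https://192.168.72.225:8443/healthz
	  # a healthy apiserver answers with: ok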
	I0717 02:00:55.181955   71146 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 02:00:55.181975   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:00:55.182017   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:00:55.218028   71146 cri.go:89] found id: "ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:55.218059   71146 cri.go:89] found id: ""
	I0717 02:00:55.218068   71146 logs.go:276] 1 containers: [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509]
	I0717 02:00:55.218125   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.222841   71146 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:00:55.222911   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:00:55.265613   71146 cri.go:89] found id: "b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:55.265638   71146 cri.go:89] found id: ""
	I0717 02:00:55.265647   71146 logs.go:276] 1 containers: [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787]
	I0717 02:00:55.265699   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.269866   71146 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:00:55.269923   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:00:55.306363   71146 cri.go:89] found id: "110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:55.306390   71146 cri.go:89] found id: ""
	I0717 02:00:55.306400   71146 logs.go:276] 1 containers: [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783]
	I0717 02:00:55.306457   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.310843   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:00:55.310901   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:00:55.354417   71146 cri.go:89] found id: "211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:55.354439   71146 cri.go:89] found id: ""
	I0717 02:00:55.354449   71146 logs.go:276] 1 containers: [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745]
	I0717 02:00:55.354503   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.358988   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:00:55.359038   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:00:55.396457   71146 cri.go:89] found id: "0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:55.396480   71146 cri.go:89] found id: ""
	I0717 02:00:55.396488   71146 logs.go:276] 1 containers: [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de]
	I0717 02:00:55.396532   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.401185   71146 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:00:55.401244   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:00:55.438249   71146 cri.go:89] found id: "5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:55.438276   71146 cri.go:89] found id: ""
	I0717 02:00:55.438286   71146 logs.go:276] 1 containers: [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060]
	I0717 02:00:55.438344   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.442967   71146 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:00:55.443048   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:00:55.484173   71146 cri.go:89] found id: ""
	I0717 02:00:55.484197   71146 logs.go:276] 0 containers: []
	W0717 02:00:55.484205   71146 logs.go:278] No container was found matching "kindnet"
	I0717 02:00:55.484210   71146 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0717 02:00:55.484288   71146 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0717 02:00:55.525757   71146 cri.go:89] found id: "7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:55.525780   71146 cri.go:89] found id: "51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:55.525784   71146 cri.go:89] found id: ""
	I0717 02:00:55.525790   71146 logs.go:276] 2 containers: [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20]
	I0717 02:00:55.525842   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.530253   71146 ssh_runner.go:195] Run: which crictl
	I0717 02:00:55.534253   71146 logs.go:123] Gathering logs for etcd [b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787] ...
	I0717 02:00:55.534275   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1af0adb58a0bbd372d054995b8681bac59146b4ffe23d2af39a0898a9263787"
	I0717 02:00:55.578993   71146 logs.go:123] Gathering logs for kube-proxy [0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de] ...
	I0717 02:00:55.579018   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0012a63297ec64abbc52817a94e89d42b8c460dbe26c6deae8202e7fec0638de"
	I0717 02:00:55.622746   71146 logs.go:123] Gathering logs for storage-provisioner [7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465] ...
	I0717 02:00:55.622771   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7fac56f23fdf5aeceb000d329e4601626591d78f55cdb67a9b05eab925bf2465"
	I0717 02:00:55.660900   71146 logs.go:123] Gathering logs for storage-provisioner [51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20] ...
	I0717 02:00:55.660931   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51a6cb79762ec82977d01882a194d4ab336357e4038361e59d388d5546817b20"
	I0717 02:00:55.709803   71146 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:00:55.709833   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:00:56.092339   71146 logs.go:123] Gathering logs for kube-scheduler [211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745] ...
	I0717 02:00:56.092390   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 211063fd97af0e2e93c47f020780ac346b79c126a409756f80aa2e169c9f8745"
	I0717 02:00:56.130951   71146 logs.go:123] Gathering logs for kube-controller-manager [5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060] ...
	I0717 02:00:56.130976   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e124648f9a3736765b09673aba901545d876196158e17b227d395cc5980f060"
	I0717 02:00:56.186113   71146 logs.go:123] Gathering logs for container status ...
	I0717 02:00:56.186152   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:00:56.229794   71146 logs.go:123] Gathering logs for kubelet ...
	I0717 02:00:56.229839   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:00:56.285798   71146 logs.go:123] Gathering logs for dmesg ...
	I0717 02:00:56.285845   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:00:56.300391   71146 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:00:56.300421   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 02:00:56.425621   71146 logs.go:123] Gathering logs for kube-apiserver [ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509] ...
	I0717 02:00:56.425653   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffa398702fb316fa762ef5b0ac70758e22f02c5e7c5a285556c738035b8ea509"
	I0717 02:00:56.478853   71146 logs.go:123] Gathering logs for coredns [110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783] ...
	I0717 02:00:56.478882   71146 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 110368a2f3e57a3acf8074e972f322b8e7f1fda3f440ce8e4d6f331de8cdb783"
	I0717 02:00:59.026000   71146 system_pods.go:59] 8 kube-system pods found
	I0717 02:00:59.026028   71146 system_pods.go:61] "coredns-7db6d8ff4d-wcw97" [0dd50538-f54d-43f1-bd8a-b9d3131c13f7] Running
	I0717 02:00:59.026033   71146 system_pods.go:61] "etcd-embed-certs-940222" [80a0a87b-4b27-4940-b86b-6fda4a9c5168] Running
	I0717 02:00:59.026036   71146 system_pods.go:61] "kube-apiserver-embed-certs-940222" [566417b4-efef-46b2-8826-e15b8559e35f] Running
	I0717 02:00:59.026039   71146 system_pods.go:61] "kube-controller-manager-embed-certs-940222" [0ec7f574-5ec3-401c-85a9-b9a9f6b7b979] Running
	I0717 02:00:59.026042   71146 system_pods.go:61] "kube-proxy-l58xk" [feae4e89-4900-4399-bd06-7d179280667d] Running
	I0717 02:00:59.026045   71146 system_pods.go:61] "kube-scheduler-embed-certs-940222" [d0e73061-6d3b-41fc-b2ed-e8c45e204d6a] Running
	I0717 02:00:59.026051   71146 system_pods.go:61] "metrics-server-569cc877fc-rhp7b" [07ffb1fa-240e-4c40-9ce4-93a1b51e179b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 02:00:59.026054   71146 system_pods.go:61] "storage-provisioner" [35aab5a5-6e1b-4572-aabe-a73fb1632252] Running
	I0717 02:00:59.026062   71146 system_pods.go:74] duration metric: took 3.844102201s to wait for pod list to return data ...
	I0717 02:00:59.026069   71146 default_sa.go:34] waiting for default service account to be created ...
	I0717 02:00:59.028810   71146 default_sa.go:45] found service account: "default"
	I0717 02:00:59.028831   71146 default_sa.go:55] duration metric: took 2.756364ms for default service account to be created ...
	I0717 02:00:59.028838   71146 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 02:00:59.036427   71146 system_pods.go:86] 8 kube-system pods found
	I0717 02:00:59.036457   71146 system_pods.go:89] "coredns-7db6d8ff4d-wcw97" [0dd50538-f54d-43f1-bd8a-b9d3131c13f7] Running
	I0717 02:00:59.036466   71146 system_pods.go:89] "etcd-embed-certs-940222" [80a0a87b-4b27-4940-b86b-6fda4a9c5168] Running
	I0717 02:00:59.036474   71146 system_pods.go:89] "kube-apiserver-embed-certs-940222" [566417b4-efef-46b2-8826-e15b8559e35f] Running
	I0717 02:00:59.036482   71146 system_pods.go:89] "kube-controller-manager-embed-certs-940222" [0ec7f574-5ec3-401c-85a9-b9a9f6b7b979] Running
	I0717 02:00:59.036489   71146 system_pods.go:89] "kube-proxy-l58xk" [feae4e89-4900-4399-bd06-7d179280667d] Running
	I0717 02:00:59.036499   71146 system_pods.go:89] "kube-scheduler-embed-certs-940222" [d0e73061-6d3b-41fc-b2ed-e8c45e204d6a] Running
	I0717 02:00:59.036509   71146 system_pods.go:89] "metrics-server-569cc877fc-rhp7b" [07ffb1fa-240e-4c40-9ce4-93a1b51e179b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 02:00:59.036519   71146 system_pods.go:89] "storage-provisioner" [35aab5a5-6e1b-4572-aabe-a73fb1632252] Running
	I0717 02:00:59.036532   71146 system_pods.go:126] duration metric: took 7.688074ms to wait for k8s-apps to be running ...
	I0717 02:00:59.036542   71146 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 02:00:59.036594   71146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:00:59.052023   71146 system_svc.go:56] duration metric: took 15.474441ms WaitForService to wait for kubelet
	I0717 02:00:59.052049   71146 kubeadm.go:582] duration metric: took 4m26.017816269s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 02:00:59.052073   71146 node_conditions.go:102] verifying NodePressure condition ...
	I0717 02:00:59.054763   71146 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 02:00:59.054784   71146 node_conditions.go:123] node cpu capacity is 2
	I0717 02:00:59.054795   71146 node_conditions.go:105] duration metric: took 2.714349ms to run NodePressure ...
	I0717 02:00:59.054805   71146 start.go:241] waiting for startup goroutines ...
	I0717 02:00:59.054811   71146 start.go:246] waiting for cluster config update ...
	I0717 02:00:59.054824   71146 start.go:255] writing updated cluster config ...
	I0717 02:00:59.055069   71146 ssh_runner.go:195] Run: rm -f paused
	I0717 02:00:59.101243   71146 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 02:00:59.103341   71146 out.go:177] * Done! kubectl is now configured to use "embed-certs-940222" cluster and "default" namespace by default
	I0717 02:00:56.432853   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:00:58.433589   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:00.932978   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:02.933289   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:05.433003   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:07.433470   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:09.433795   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:11.933112   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:14.433274   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:16.932102   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:18.932904   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:20.933023   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:23.433153   71603 pod_ready.go:102] pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace has status "Ready":"False"
	I0717 02:01:24.926132   71603 pod_ready.go:81] duration metric: took 4m0.000155151s for pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace to be "Ready" ...
	E0717 02:01:24.926165   71603 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-g9x96" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 02:01:24.926185   71603 pod_ready.go:38] duration metric: took 4m39.916322674s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 02:01:24.926214   71603 kubeadm.go:597] duration metric: took 5m27.432375382s to restartPrimaryControlPlane
	W0717 02:01:24.926303   71603 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 02:01:24.926339   71603 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 02:01:51.790820   71603 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.86445583s)
	I0717 02:01:51.790901   71603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:01:51.812968   71603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 02:01:51.835689   71603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 02:01:51.848832   71603 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 02:01:51.848859   71603 kubeadm.go:157] found existing configuration files:
	
	I0717 02:01:51.848911   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 02:01:51.876554   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 02:01:51.876620   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 02:01:51.891580   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 02:01:51.901279   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 02:01:51.901328   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 02:01:51.910994   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 02:01:51.920959   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 02:01:51.921020   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 02:01:51.931039   71603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 02:01:51.940496   71603 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 02:01:51.940549   71603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 02:01:51.950455   71603 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 02:01:51.999712   71603 kubeadm.go:310] W0717 02:01:51.966911    3034 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 02:01:52.000573   71603 kubeadm.go:310] W0717 02:01:51.967749    3034 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 02:01:52.132406   71603 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 02:02:01.065590   71603 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0717 02:02:01.065670   71603 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 02:02:01.065761   71603 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 02:02:01.065909   71603 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 02:02:01.066049   71603 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0717 02:02:01.066124   71603 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 02:02:01.067867   71603 out.go:204]   - Generating certificates and keys ...
	I0717 02:02:01.067966   71603 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 02:02:01.068043   71603 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 02:02:01.068139   71603 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 02:02:01.068210   71603 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 02:02:01.068310   71603 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 02:02:01.068391   71603 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 02:02:01.068471   71603 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 02:02:01.068523   71603 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 02:02:01.068585   71603 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 02:02:01.068650   71603 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 02:02:01.068683   71603 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 02:02:01.068752   71603 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 02:02:01.068822   71603 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 02:02:01.068906   71603 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 02:02:01.068970   71603 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 02:02:01.069057   71603 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 02:02:01.069157   71603 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 02:02:01.069271   71603 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 02:02:01.069369   71603 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 02:02:01.070772   71603 out.go:204]   - Booting up control plane ...
	I0717 02:02:01.070883   71603 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 02:02:01.070981   71603 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 02:02:01.071088   71603 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 02:02:01.071206   71603 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 02:02:01.071311   71603 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 02:02:01.071365   71603 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 02:02:01.071497   71603 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 02:02:01.071557   71603 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 02:02:01.071608   71603 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.044041ms
	I0717 02:02:01.071663   71603 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 02:02:01.071725   71603 kubeadm.go:310] [api-check] The API server is healthy after 5.501034024s
	I0717 02:02:01.071823   71603 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 02:02:01.071926   71603 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 02:02:01.071975   71603 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 02:02:01.072168   71603 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-391501 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 02:02:01.072238   71603 kubeadm.go:310] [bootstrap-token] Using token: jhnlja.0tmcz1ce1lkie6op
	I0717 02:02:01.073965   71603 out.go:204]   - Configuring RBAC rules ...
	I0717 02:02:01.074091   71603 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 02:02:01.074223   71603 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 02:02:01.074390   71603 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 02:02:01.074597   71603 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 02:02:01.074766   71603 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 02:02:01.074887   71603 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 02:02:01.075058   71603 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 02:02:01.075126   71603 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 02:02:01.075195   71603 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 02:02:01.075204   71603 kubeadm.go:310] 
	I0717 02:02:01.075255   71603 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 02:02:01.075262   71603 kubeadm.go:310] 
	I0717 02:02:01.075372   71603 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 02:02:01.075386   71603 kubeadm.go:310] 
	I0717 02:02:01.075419   71603 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 02:02:01.075498   71603 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 02:02:01.075582   71603 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 02:02:01.075604   71603 kubeadm.go:310] 
	I0717 02:02:01.075687   71603 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 02:02:01.075697   71603 kubeadm.go:310] 
	I0717 02:02:01.075759   71603 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 02:02:01.075771   71603 kubeadm.go:310] 
	I0717 02:02:01.075834   71603 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 02:02:01.075936   71603 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 02:02:01.076034   71603 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 02:02:01.076043   71603 kubeadm.go:310] 
	I0717 02:02:01.076142   71603 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 02:02:01.076248   71603 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 02:02:01.076256   71603 kubeadm.go:310] 
	I0717 02:02:01.076379   71603 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jhnlja.0tmcz1ce1lkie6op \
	I0717 02:02:01.076541   71603 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c106fa53fe39228066a4d6d0a3d1523262a277fcc4b4de2aef480ed92843f134 \
	I0717 02:02:01.076582   71603 kubeadm.go:310] 	--control-plane 
	I0717 02:02:01.076600   71603 kubeadm.go:310] 
	I0717 02:02:01.076708   71603 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 02:02:01.076719   71603 kubeadm.go:310] 
	I0717 02:02:01.076819   71603 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jhnlja.0tmcz1ce1lkie6op \
	I0717 02:02:01.076955   71603 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c106fa53fe39228066a4d6d0a3d1523262a277fcc4b4de2aef480ed92843f134 
	I0717 02:02:01.076972   71603 cni.go:84] Creating CNI manager for ""
	I0717 02:02:01.076981   71603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 02:02:01.078801   71603 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 02:02:01.080151   71603 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 02:02:01.093210   71603 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
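	(The 496-byte conflist copied above is generated by minikube's bridge CNI manager; its exact contents are not reproduced in this log. Purely as an assumed sketch of what a bridge conflist of this kind looks like, not the file from this run, it could be written to the node like so:)

	  sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      {
	        "type": "bridge",
	        "bridge": "bridge",
	        "isDefaultGateway": true,
	        "ipMasq": true,
	        "hairpinMode": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	      },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }
	  EOF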
	I0717 02:02:01.116656   71603 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 02:02:01.116712   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:01.116756   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-391501 minikube.k8s.io/updated_at=2024_07_17T02_02_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185 minikube.k8s.io/name=no-preload-391501 minikube.k8s.io/primary=true
	I0717 02:02:01.314407   71603 ops.go:34] apiserver oom_adj: -16
	I0717 02:02:01.314467   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:01.814693   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:02.315439   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:02.814676   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:03.314734   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:03.814702   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:04.315450   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:04.815112   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:05.315144   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:05.814712   71603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 02:02:05.921356   71603 kubeadm.go:1113] duration metric: took 4.80469441s to wait for elevateKubeSystemPrivileges
	I0717 02:02:05.921398   71603 kubeadm.go:394] duration metric: took 6m8.48278775s to StartCluster
	I0717 02:02:05.921420   71603 settings.go:142] acquiring lock: {Name:mk284e98550e98148329d8d8958a44beebf5635c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 02:02:05.921508   71603 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 02:02:05.923844   71603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/kubeconfig: {Name:mkbd352a7a061ddbaff97c6e3cec9014a1becb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 02:02:05.924156   71603 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.174 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 02:02:05.924254   71603 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 02:02:05.924328   71603 addons.go:69] Setting storage-provisioner=true in profile "no-preload-391501"
	I0717 02:02:05.924357   71603 addons.go:234] Setting addon storage-provisioner=true in "no-preload-391501"
	I0717 02:02:05.924355   71603 addons.go:69] Setting default-storageclass=true in profile "no-preload-391501"
	I0717 02:02:05.924364   71603 addons.go:69] Setting metrics-server=true in profile "no-preload-391501"
	I0717 02:02:05.924391   71603 config.go:182] Loaded profile config "no-preload-391501": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 02:02:05.924398   71603 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-391501"
	I0717 02:02:05.924404   71603 addons.go:234] Setting addon metrics-server=true in "no-preload-391501"
	W0717 02:02:05.924414   71603 addons.go:243] addon metrics-server should already be in state true
	W0717 02:02:05.924368   71603 addons.go:243] addon storage-provisioner should already be in state true
	I0717 02:02:05.924447   71603 host.go:66] Checking if "no-preload-391501" exists ...
	I0717 02:02:05.924460   71603 host.go:66] Checking if "no-preload-391501" exists ...
	I0717 02:02:05.924801   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.924827   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.924834   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.924850   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.924874   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.924912   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.926034   71603 out.go:177] * Verifying Kubernetes components...
	I0717 02:02:05.927316   71603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 02:02:05.941502   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43181
	I0717 02:02:05.941716   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37257
	I0717 02:02:05.941969   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.942299   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.942492   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.942509   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.942873   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.942902   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.942933   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.943175   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 02:02:05.943250   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.943555   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43291
	I0717 02:02:05.943829   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.943862   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.943996   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.944648   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.944672   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.945037   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.945577   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.945613   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.947058   71603 addons.go:234] Setting addon default-storageclass=true in "no-preload-391501"
	W0717 02:02:05.947076   71603 addons.go:243] addon default-storageclass should already be in state true
	I0717 02:02:05.947103   71603 host.go:66] Checking if "no-preload-391501" exists ...
	I0717 02:02:05.947419   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.947447   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.960183   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44589
	I0717 02:02:05.960662   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.961220   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.961249   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.961532   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.961777   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 02:02:05.962531   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40785
	I0717 02:02:05.963063   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.964115   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 02:02:05.964120   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.964146   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.965195   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.965777   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42381
	I0717 02:02:05.965802   71603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:02:05.965845   71603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:02:05.966114   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.966615   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.966635   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.966706   71603 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 02:02:05.967037   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.967228   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 02:02:05.968069   71603 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 02:02:05.968101   71603 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 02:02:05.968121   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 02:02:05.969421   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 02:02:05.971055   71603 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 02:02:05.972019   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.972494   71603 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 02:02:05.972515   71603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 02:02:05.972533   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 02:02:05.972622   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 02:02:05.972646   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.973122   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 02:02:05.973289   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 02:02:05.973415   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 02:02:05.973638   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 02:02:05.975702   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.976091   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 02:02:05.976110   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.976376   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 02:02:05.976553   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 02:02:05.976717   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 02:02:05.976866   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
	I0717 02:02:05.983061   71603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44967
	I0717 02:02:05.983397   71603 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:02:05.983851   71603 main.go:141] libmachine: Using API Version  1
	I0717 02:02:05.983867   71603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:02:05.984150   71603 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:02:05.984319   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetState
	I0717 02:02:05.985757   71603 main.go:141] libmachine: (no-preload-391501) Calling .DriverName
	I0717 02:02:05.985973   71603 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 02:02:05.985985   71603 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 02:02:05.986000   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHHostname
	I0717 02:02:05.989238   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.989627   71603 main.go:141] libmachine: (no-preload-391501) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:6b:1b", ip: ""} in network mk-no-preload-391501: {Iface:virbr2 ExpiryTime:2024-07-17 02:55:30 +0000 UTC Type:0 Mac:52:54:00:e6:6b:1b Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:no-preload-391501 Clientid:01:52:54:00:e6:6b:1b}
	I0717 02:02:05.989647   71603 main.go:141] libmachine: (no-preload-391501) DBG | domain no-preload-391501 has defined IP address 192.168.61.174 and MAC address 52:54:00:e6:6b:1b in network mk-no-preload-391501
	I0717 02:02:05.989890   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHPort
	I0717 02:02:05.990056   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHKeyPath
	I0717 02:02:05.990212   71603 main.go:141] libmachine: (no-preload-391501) Calling .GetSSHUsername
	I0717 02:02:05.990412   71603 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa Username:docker}
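	[editor's note] The sshutil lines above show the pattern used here: dial the guest VM on port 22 with the per-machine private key and the "docker" user. A minimal Go sketch of that step follows; the helper name is hypothetical and minikube's own sshutil package adds retries and error wrapping beyond this (assumption).

	```go
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// newSSHClient connects to the guest with the key path and user shown in the
	// log above. Illustrative only; not minikube's implementation.
	func newSSHClient(ip, keyPath, user string) (*ssh.Client, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return nil, err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		}
		return ssh.Dial("tcp", fmt.Sprintf("%s:22", ip), cfg)
	}

	func main() {
		client, err := newSSHClient("192.168.61.174",
			"/home/jenkins/minikube-integration/19264-3908/.minikube/machines/no-preload-391501/id_rsa",
			"docker")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer client.Close()
	}
	```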
	I0717 02:02:06.238449   71603 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 02:02:06.272217   71603 node_ready.go:35] waiting up to 6m0s for node "no-preload-391501" to be "Ready" ...
	I0717 02:02:06.281012   71603 node_ready.go:49] node "no-preload-391501" has status "Ready":"True"
	I0717 02:02:06.281031   71603 node_ready.go:38] duration metric: took 8.787329ms for node "no-preload-391501" to be "Ready" ...
	I0717 02:02:06.281040   71603 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 02:02:06.297250   71603 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:06.386971   71603 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 02:02:06.386995   71603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 02:02:06.439822   71603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 02:02:06.460362   71603 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 02:02:06.460391   71603 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 02:02:06.468640   71603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 02:02:06.551454   71603 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 02:02:06.551482   71603 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 02:02:06.727518   71603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 02:02:07.338701   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.338778   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.338874   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.338900   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.339119   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.339217   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.339230   71603 main.go:141] libmachine: (no-preload-391501) DBG | Closing plugin on server side
	I0717 02:02:07.339273   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.339291   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.339301   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.339314   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.339240   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.339386   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.339575   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.339592   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.339648   71603 main.go:141] libmachine: (no-preload-391501) DBG | Closing plugin on server side
	I0717 02:02:07.339711   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.339736   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.357948   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.357966   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.358197   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.358212   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.694612   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.694690   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.695028   71603 main.go:141] libmachine: (no-preload-391501) DBG | Closing plugin on server side
	I0717 02:02:07.695109   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.695122   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.695148   71603 main.go:141] libmachine: Making call to close driver server
	I0717 02:02:07.695160   71603 main.go:141] libmachine: (no-preload-391501) Calling .Close
	I0717 02:02:07.695404   71603 main.go:141] libmachine: Successfully made call to close driver server
	I0717 02:02:07.695421   71603 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 02:02:07.695432   71603 addons.go:475] Verifying addon metrics-server=true in "no-preload-391501"
	I0717 02:02:07.698298   71603 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
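	[editor's note] The addon flow above copies each manifest into /etc/kubernetes/addons/ on the guest and then applies it with the in-VM kubectl against /var/lib/minikube/kubeconfig. A minimal Go sketch of that apply step is below; the function is illustrative (paths are taken from the log, the helper itself is not minikube's code), and minikube actually runs this command over SSH.

	```go
	package main

	import (
		"fmt"
		"os/exec"
	)

	// applyAddonManifests mirrors the command recorded in the log: kubectl is
	// invoked with the in-VM kubeconfig so the manifests land in the new cluster.
	func applyAddonManifests(kubectlPath, kubeconfig string, manifests ...string) error {
		args := []string{"env", "KUBECONFIG=" + kubeconfig, kubectlPath, "apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		out, err := exec.Command("sudo", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
		}
		return nil
	}

	func main() {
		_ = applyAddonManifests(
			"/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl",
			"/var/lib/minikube/kubeconfig",
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		)
	}
	```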
	I0717 02:02:08.622411   71929 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 02:02:08.622531   71929 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 02:02:08.624111   71929 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 02:02:08.624168   71929 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 02:02:08.624265   71929 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 02:02:08.624391   71929 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 02:02:08.624526   71929 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 02:02:08.624604   71929 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 02:02:08.626394   71929 out.go:204]   - Generating certificates and keys ...
	I0717 02:02:08.626478   71929 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 02:02:08.626574   71929 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 02:02:08.626657   71929 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 02:02:08.626735   71929 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 02:02:08.626830   71929 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 02:02:08.626909   71929 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 02:02:08.627001   71929 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 02:02:08.627095   71929 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 02:02:08.627203   71929 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 02:02:08.627325   71929 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 02:02:08.627392   71929 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 02:02:08.627469   71929 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 02:02:08.627573   71929 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 02:02:08.627663   71929 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 02:02:08.627753   71929 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 02:02:08.627836   71929 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 02:02:08.627997   71929 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 02:02:08.628107   71929 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 02:02:08.628179   71929 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 02:02:08.628272   71929 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 02:02:08.630262   71929 out.go:204]   - Booting up control plane ...
	I0717 02:02:08.630372   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 02:02:08.630477   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 02:02:08.630594   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 02:02:08.630729   71929 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 02:02:08.630960   71929 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 02:02:08.631020   71929 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 02:02:08.631099   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.631293   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.631394   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.631648   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.631748   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.631925   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.632050   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.632253   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.632327   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:08.632528   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:08.632546   71929 kubeadm.go:310] 
	I0717 02:02:08.632611   71929 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 02:02:08.632671   71929 kubeadm.go:310] 		timed out waiting for the condition
	I0717 02:02:08.632689   71929 kubeadm.go:310] 
	I0717 02:02:08.632729   71929 kubeadm.go:310] 	This error is likely caused by:
	I0717 02:02:08.632772   71929 kubeadm.go:310] 		- The kubelet is not running
	I0717 02:02:08.632902   71929 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 02:02:08.632914   71929 kubeadm.go:310] 
	I0717 02:02:08.633001   71929 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 02:02:08.633030   71929 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 02:02:08.633075   71929 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 02:02:08.633092   71929 kubeadm.go:310] 
	I0717 02:02:08.633204   71929 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 02:02:08.633281   71929 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 02:02:08.633306   71929 kubeadm.go:310] 
	I0717 02:02:08.633450   71929 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 02:02:08.633535   71929 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 02:02:08.633597   71929 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 02:02:08.633668   71929 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 02:02:08.633697   71929 kubeadm.go:310] 
	W0717 02:02:08.633780   71929 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0717 02:02:08.633821   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 02:02:09.101394   71929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:02:09.119918   71929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 02:02:09.130974   71929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 02:02:09.131002   71929 kubeadm.go:157] found existing configuration files:
	
	I0717 02:02:09.131046   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 02:02:09.142720   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 02:02:09.142790   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 02:02:09.154990   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 02:02:09.166317   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 02:02:09.166379   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 02:02:09.176756   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 02:02:09.186639   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 02:02:09.186697   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 02:02:09.196778   71929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 02:02:09.206420   71929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 02:02:09.206469   71929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
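	[editor's note] The cleanup just logged follows a simple rule before retrying kubeadm init: for each kubeconfig under /etc/kubernetes, grep for the expected control-plane endpoint and remove the file when the endpoint is absent or the file is missing. A minimal Go sketch of that loop, assuming the same file list and endpoint as the log (the helper is illustrative, not minikube's implementation):

	```go
	package main

	import (
		"fmt"
		"os/exec"
	)

	// cleanStaleKubeconfigs removes any kubeconfig that does not reference the
	// expected control-plane endpoint, matching the check-then-remove steps above.
	func cleanStaleKubeconfigs(endpoint string) {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			// grep exits non-zero when the endpoint is missing or the file does not exist.
			if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
				fmt.Printf("%q not found in %s - removing\n", endpoint, f)
				_ = exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}

	func main() {
		cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
	}
	```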
	I0717 02:02:09.216325   71929 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 02:02:09.293311   71929 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 02:02:09.293457   71929 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 02:02:09.442386   71929 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 02:02:09.442594   71929 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 02:02:09.442736   71929 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 02:02:09.618387   71929 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 02:02:07.699645   71603 addons.go:510] duration metric: took 1.775390854s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 02:02:08.305410   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace has status "Ready":"False"
	I0717 02:02:09.620394   71929 out.go:204]   - Generating certificates and keys ...
	I0717 02:02:09.620496   71929 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 02:02:09.620593   71929 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 02:02:09.620691   71929 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 02:02:09.620791   71929 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 02:02:09.620909   71929 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 02:02:09.621004   71929 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 02:02:09.621117   71929 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 02:02:09.621364   71929 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 02:02:09.621778   71929 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 02:02:09.622072   71929 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 02:02:09.622135   71929 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 02:02:09.622225   71929 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 02:02:09.990964   71929 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 02:02:10.434990   71929 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 02:02:10.579785   71929 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 02:02:10.723319   71929 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 02:02:10.746923   71929 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 02:02:10.748370   71929 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 02:02:10.748460   71929 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 02:02:10.888855   71929 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 02:02:10.890727   71929 out.go:204]   - Booting up control plane ...
	I0717 02:02:10.890860   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 02:02:10.893530   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 02:02:10.894934   71929 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 02:02:10.896825   71929 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 02:02:10.899127   71929 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 02:02:10.806868   71603 pod_ready.go:102] pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace has status "Ready":"False"
	I0717 02:02:12.804727   71603 pod_ready.go:92] pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:12.804754   71603 pod_ready.go:81] duration metric: took 6.507471417s for pod "coredns-5cfdc65f69-5lstd" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:12.804763   71603 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-tn5jv" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:12.812383   71603 pod_ready.go:92] pod "coredns-5cfdc65f69-tn5jv" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:12.812408   71603 pod_ready.go:81] duration metric: took 7.638012ms for pod "coredns-5cfdc65f69-tn5jv" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:12.812420   71603 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.320241   71603 pod_ready.go:92] pod "etcd-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:13.320263   71603 pod_ready.go:81] duration metric: took 507.836128ms for pod "etcd-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.320285   71603 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.326308   71603 pod_ready.go:92] pod "kube-apiserver-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:13.326332   71603 pod_ready.go:81] duration metric: took 6.041207ms for pod "kube-apiserver-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.326341   71603 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.331310   71603 pod_ready.go:92] pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:13.331338   71603 pod_ready.go:81] duration metric: took 4.988207ms for pod "kube-controller-manager-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.331360   71603 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gl7th" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.602634   71603 pod_ready.go:92] pod "kube-proxy-gl7th" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:13.602677   71603 pod_ready.go:81] duration metric: took 271.310877ms for pod "kube-proxy-gl7th" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:13.602687   71603 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:14.002256   71603 pod_ready.go:92] pod "kube-scheduler-no-preload-391501" in "kube-system" namespace has status "Ready":"True"
	I0717 02:02:14.002282   71603 pod_ready.go:81] duration metric: took 399.588324ms for pod "kube-scheduler-no-preload-391501" in "kube-system" namespace to be "Ready" ...
	I0717 02:02:14.002290   71603 pod_ready.go:38] duration metric: took 7.721240931s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 02:02:14.002306   71603 api_server.go:52] waiting for apiserver process to appear ...
	I0717 02:02:14.002355   71603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:02:14.016981   71603 api_server.go:72] duration metric: took 8.092789001s to wait for apiserver process to appear ...
	I0717 02:02:14.017007   71603 api_server.go:88] waiting for apiserver healthz status ...
	I0717 02:02:14.017026   71603 api_server.go:253] Checking apiserver healthz at https://192.168.61.174:8443/healthz ...
	I0717 02:02:14.022008   71603 api_server.go:279] https://192.168.61.174:8443/healthz returned 200:
	ok
	I0717 02:02:14.022992   71603 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 02:02:14.023010   71603 api_server.go:131] duration metric: took 5.997297ms to wait for apiserver health ...
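	[editor's note] The healthz wait above is a plain HTTPS GET against the apiserver's /healthz endpoint that expects a 200 with body "ok". A minimal Go sketch of such a probe follows; for brevity it skips TLS verification, whereas the real client would trust the cluster CA (assumption), and the helper name is hypothetical.

	```go
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"strings"
		"time"
	)

	// checkAPIServerHealthz issues the probe recorded in the log and reports
	// whether the apiserver answered 200 "ok".
	func checkAPIServerHealthz(url string) (bool, error) {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
	}

	func main() {
		ok, err := checkAPIServerHealthz("https://192.168.61.174:8443/healthz")
		fmt.Println(ok, err)
	}
	```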
	I0717 02:02:14.023016   71603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 02:02:14.204777   71603 system_pods.go:59] 9 kube-system pods found
	I0717 02:02:14.204806   71603 system_pods.go:61] "coredns-5cfdc65f69-5lstd" [71b74210-7395-4a48-8e1b-b49fb2faea43] Running
	I0717 02:02:14.204811   71603 system_pods.go:61] "coredns-5cfdc65f69-tn5jv" [482276d3-bfe2-4538-9dfe-a2a87a02182c] Running
	I0717 02:02:14.204816   71603 system_pods.go:61] "etcd-no-preload-391501" [c13d6752-3152-45e7-b2b9-a5275a4b42c5] Running
	I0717 02:02:14.204819   71603 system_pods.go:61] "kube-apiserver-no-preload-391501" [ba1d9920-dcaa-48d2-887b-f476d874d9ea] Running
	I0717 02:02:14.204823   71603 system_pods.go:61] "kube-controller-manager-no-preload-391501" [5e1e6aec-31b9-4b7c-a59b-f39a73b2e9a3] Running
	I0717 02:02:14.204826   71603 system_pods.go:61] "kube-proxy-gl7th" [320d9fae-f5b8-47bd-afc0-88e07e23157a] Running
	I0717 02:02:14.204829   71603 system_pods.go:61] "kube-scheduler-no-preload-391501" [a091b866-df88-4b9b-8893-bc6022704680] Running
	I0717 02:02:14.204836   71603 system_pods.go:61] "metrics-server-78fcd8795b-tnrht" [af70d47e-8e45-4e5d-bceb-e01a6c1851ff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 02:02:14.204839   71603 system_pods.go:61] "storage-provisioner" [742baa9b-d48e-4be9-8c33-64d42e1ff169] Running
	I0717 02:02:14.204847   71603 system_pods.go:74] duration metric: took 181.825073ms to wait for pod list to return data ...
	I0717 02:02:14.204854   71603 default_sa.go:34] waiting for default service account to be created ...
	I0717 02:02:14.402964   71603 default_sa.go:45] found service account: "default"
	I0717 02:02:14.402992   71603 default_sa.go:55] duration metric: took 198.131224ms for default service account to be created ...
	I0717 02:02:14.403005   71603 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 02:02:14.606371   71603 system_pods.go:86] 9 kube-system pods found
	I0717 02:02:14.606408   71603 system_pods.go:89] "coredns-5cfdc65f69-5lstd" [71b74210-7395-4a48-8e1b-b49fb2faea43] Running
	I0717 02:02:14.606418   71603 system_pods.go:89] "coredns-5cfdc65f69-tn5jv" [482276d3-bfe2-4538-9dfe-a2a87a02182c] Running
	I0717 02:02:14.606424   71603 system_pods.go:89] "etcd-no-preload-391501" [c13d6752-3152-45e7-b2b9-a5275a4b42c5] Running
	I0717 02:02:14.606430   71603 system_pods.go:89] "kube-apiserver-no-preload-391501" [ba1d9920-dcaa-48d2-887b-f476d874d9ea] Running
	I0717 02:02:14.606438   71603 system_pods.go:89] "kube-controller-manager-no-preload-391501" [5e1e6aec-31b9-4b7c-a59b-f39a73b2e9a3] Running
	I0717 02:02:14.606444   71603 system_pods.go:89] "kube-proxy-gl7th" [320d9fae-f5b8-47bd-afc0-88e07e23157a] Running
	I0717 02:02:14.606450   71603 system_pods.go:89] "kube-scheduler-no-preload-391501" [a091b866-df88-4b9b-8893-bc6022704680] Running
	I0717 02:02:14.606461   71603 system_pods.go:89] "metrics-server-78fcd8795b-tnrht" [af70d47e-8e45-4e5d-bceb-e01a6c1851ff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 02:02:14.606474   71603 system_pods.go:89] "storage-provisioner" [742baa9b-d48e-4be9-8c33-64d42e1ff169] Running
	I0717 02:02:14.606486   71603 system_pods.go:126] duration metric: took 203.473728ms to wait for k8s-apps to be running ...
	I0717 02:02:14.606497   71603 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 02:02:14.606568   71603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:02:14.622178   71603 system_svc.go:56] duration metric: took 15.671962ms WaitForService to wait for kubelet
	I0717 02:02:14.622211   71603 kubeadm.go:582] duration metric: took 8.698021688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 02:02:14.622234   71603 node_conditions.go:102] verifying NodePressure condition ...
	I0717 02:02:14.802282   71603 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 02:02:14.802309   71603 node_conditions.go:123] node cpu capacity is 2
	I0717 02:02:14.802319   71603 node_conditions.go:105] duration metric: took 180.080727ms to run NodePressure ...
	I0717 02:02:14.802330   71603 start.go:241] waiting for startup goroutines ...
	I0717 02:02:14.802337   71603 start.go:246] waiting for cluster config update ...
	I0717 02:02:14.802345   71603 start.go:255] writing updated cluster config ...
	I0717 02:02:14.802613   71603 ssh_runner.go:195] Run: rm -f paused
	I0717 02:02:14.848725   71603 start.go:600] kubectl: 1.30.2, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0717 02:02:14.850965   71603 out.go:177] * Done! kubectl is now configured to use "no-preload-391501" cluster and "default" namespace by default
	I0717 02:02:50.900829   71929 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 02:02:50.901350   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:50.901626   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:02:55.902558   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:02:55.902805   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:03:05.903753   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:03:05.904033   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:03:25.905383   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:03:25.905597   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:04:05.906576   71929 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 02:04:05.906960   71929 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 02:04:05.906992   71929 kubeadm.go:310] 
	I0717 02:04:05.907049   71929 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 02:04:05.907133   71929 kubeadm.go:310] 		timed out waiting for the condition
	I0717 02:04:05.907182   71929 kubeadm.go:310] 
	I0717 02:04:05.907252   71929 kubeadm.go:310] 	This error is likely caused by:
	I0717 02:04:05.907339   71929 kubeadm.go:310] 		- The kubelet is not running
	I0717 02:04:05.907516   71929 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 02:04:05.907529   71929 kubeadm.go:310] 
	I0717 02:04:05.907661   71929 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 02:04:05.907699   71929 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 02:04:05.907743   71929 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 02:04:05.907751   71929 kubeadm.go:310] 
	I0717 02:04:05.907907   71929 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 02:04:05.908043   71929 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 02:04:05.908053   71929 kubeadm.go:310] 
	I0717 02:04:05.908221   71929 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 02:04:05.908435   71929 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 02:04:05.908619   71929 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 02:04:05.908738   71929 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 02:04:05.908788   71929 kubeadm.go:310] 
	I0717 02:04:05.909079   71929 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 02:04:05.909286   71929 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 02:04:05.909452   71929 kubeadm.go:394] duration metric: took 7m58.01930975s to StartCluster
	I0717 02:04:05.909455   71929 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 02:04:05.909494   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 02:04:05.909552   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 02:04:05.952911   71929 cri.go:89] found id: ""
	I0717 02:04:05.952937   71929 logs.go:276] 0 containers: []
	W0717 02:04:05.952949   71929 logs.go:278] No container was found matching "kube-apiserver"
	I0717 02:04:05.952957   71929 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 02:04:05.953026   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 02:04:05.988490   71929 cri.go:89] found id: ""
	I0717 02:04:05.988518   71929 logs.go:276] 0 containers: []
	W0717 02:04:05.988529   71929 logs.go:278] No container was found matching "etcd"
	I0717 02:04:05.988537   71929 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 02:04:05.988593   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 02:04:06.025228   71929 cri.go:89] found id: ""
	I0717 02:04:06.025259   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.025269   71929 logs.go:278] No container was found matching "coredns"
	I0717 02:04:06.025277   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 02:04:06.025342   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 02:04:06.060563   71929 cri.go:89] found id: ""
	I0717 02:04:06.060589   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.060599   71929 logs.go:278] No container was found matching "kube-scheduler"
	I0717 02:04:06.060604   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 02:04:06.060660   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 02:04:06.095051   71929 cri.go:89] found id: ""
	I0717 02:04:06.095079   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.095091   71929 logs.go:278] No container was found matching "kube-proxy"
	I0717 02:04:06.095099   71929 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 02:04:06.095150   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 02:04:06.131892   71929 cri.go:89] found id: ""
	I0717 02:04:06.131914   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.131921   71929 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 02:04:06.131927   71929 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 02:04:06.131973   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 02:04:06.168893   71929 cri.go:89] found id: ""
	I0717 02:04:06.168919   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.168930   71929 logs.go:278] No container was found matching "kindnet"
	I0717 02:04:06.168937   71929 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 02:04:06.168995   71929 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 02:04:06.206635   71929 cri.go:89] found id: ""
	I0717 02:04:06.206658   71929 logs.go:276] 0 containers: []
	W0717 02:04:06.206668   71929 logs.go:278] No container was found matching "kubernetes-dashboard"
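	[editor's note] The sweep above checks each control-plane component in turn with `crictl ps -a --quiet --name=<component>`; every query returns no IDs, which is why log gathering falls back to journalctl and dmesg next. A minimal Go sketch of that per-component sweep, assuming crictl is on PATH and using its default runtime endpoint (the helper is illustrative, not minikube's code):

	```go
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// findContainers runs `crictl ps -a --quiet --name=<component>` for each
	// component and collects whatever container IDs come back.
	func findContainers(components ...string) map[string][]string {
		found := make(map[string][]string)
		for _, name := range components {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				continue // crictl unavailable or runtime socket unreachable; skip this component
			}
			ids := strings.Fields(string(out))
			found[name] = ids
			fmt.Printf("%s: %d containers\n", name, len(ids))
		}
		return found
	}

	func main() {
		findContainers("kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard")
	}
	```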
	I0717 02:04:06.206679   71929 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:04:06.206693   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 02:04:06.308601   71929 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 02:04:06.308624   71929 logs.go:123] Gathering logs for CRI-O ...
	I0717 02:04:06.308637   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 02:04:06.422081   71929 logs.go:123] Gathering logs for container status ...
	I0717 02:04:06.422116   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:04:06.467466   71929 logs.go:123] Gathering logs for kubelet ...
	I0717 02:04:06.467496   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 02:04:06.521420   71929 logs.go:123] Gathering logs for dmesg ...
	I0717 02:04:06.521457   71929 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0717 02:04:06.535167   71929 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 02:04:06.535211   71929 out.go:239] * 
	W0717 02:04:06.535263   71929 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 02:04:06.535292   71929 out.go:239] * 
	W0717 02:04:06.536098   71929 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 02:04:06.539314   71929 out.go:177] 
	W0717 02:04:06.540504   71929 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 02:04:06.540557   71929 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 02:04:06.540579   71929 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 02:04:06.541888   71929 out.go:177] 
	
	
	==> CRI-O <==
	Jul 17 02:15:33 old-k8s-version-901761 crio[644]: time="2024-07-17 02:15:33.624163242Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182533624132187,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d5a69311-4c73-40d0-ba64-93358ae628fe name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:15:33 old-k8s-version-901761 crio[644]: time="2024-07-17 02:15:33.625003139Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=73ade6e6-44ad-4624-a6bb-7a99b3374a8f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:15:33 old-k8s-version-901761 crio[644]: time="2024-07-17 02:15:33.625055990Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=73ade6e6-44ad-4624-a6bb-7a99b3374a8f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:15:33 old-k8s-version-901761 crio[644]: time="2024-07-17 02:15:33.625091414Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=73ade6e6-44ad-4624-a6bb-7a99b3374a8f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:15:33 old-k8s-version-901761 crio[644]: time="2024-07-17 02:15:33.656161896Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=14fe6d1c-a76c-4e5b-9443-92d873b529f2 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:15:33 old-k8s-version-901761 crio[644]: time="2024-07-17 02:15:33.656302202Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=14fe6d1c-a76c-4e5b-9443-92d873b529f2 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:15:33 old-k8s-version-901761 crio[644]: time="2024-07-17 02:15:33.657111075Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4ee56ad0-fd2a-4916-b05b-d5380b42002d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:15:33 old-k8s-version-901761 crio[644]: time="2024-07-17 02:15:33.657572547Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182533657533583,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4ee56ad0-fd2a-4916-b05b-d5380b42002d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:15:33 old-k8s-version-901761 crio[644]: time="2024-07-17 02:15:33.658323960Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d9d9829-c1d0-447b-9ea9-a6d6cb9111a1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:15:33 old-k8s-version-901761 crio[644]: time="2024-07-17 02:15:33.658372369Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d9d9829-c1d0-447b-9ea9-a6d6cb9111a1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:15:33 old-k8s-version-901761 crio[644]: time="2024-07-17 02:15:33.658416749Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2d9d9829-c1d0-447b-9ea9-a6d6cb9111a1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:15:33 old-k8s-version-901761 crio[644]: time="2024-07-17 02:15:33.696181937Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e2d762c1-e915-4058-9246-1d0721de82d4 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:15:33 old-k8s-version-901761 crio[644]: time="2024-07-17 02:15:33.696319929Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e2d762c1-e915-4058-9246-1d0721de82d4 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:15:33 old-k8s-version-901761 crio[644]: time="2024-07-17 02:15:33.697427797Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=41903857-1de4-4da3-9b7a-bd7096c2f455 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:15:33 old-k8s-version-901761 crio[644]: time="2024-07-17 02:15:33.697817665Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182533697793747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=41903857-1de4-4da3-9b7a-bd7096c2f455 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:15:33 old-k8s-version-901761 crio[644]: time="2024-07-17 02:15:33.698301268Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=465ec1f9-a5b0-4c99-8c1d-57fff53b536a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:15:33 old-k8s-version-901761 crio[644]: time="2024-07-17 02:15:33.698367705Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=465ec1f9-a5b0-4c99-8c1d-57fff53b536a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:15:33 old-k8s-version-901761 crio[644]: time="2024-07-17 02:15:33.698401698Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=465ec1f9-a5b0-4c99-8c1d-57fff53b536a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:15:33 old-k8s-version-901761 crio[644]: time="2024-07-17 02:15:33.733487469Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7472b983-304c-44b1-9482-13b2e531e2f1 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:15:33 old-k8s-version-901761 crio[644]: time="2024-07-17 02:15:33.733604724Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7472b983-304c-44b1-9482-13b2e531e2f1 name=/runtime.v1.RuntimeService/Version
	Jul 17 02:15:33 old-k8s-version-901761 crio[644]: time="2024-07-17 02:15:33.734923007Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ce2ffd35-ffa7-42c6-9c3b-d2e706d958de name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:15:33 old-k8s-version-901761 crio[644]: time="2024-07-17 02:15:33.735342559Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721182533735320795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce2ffd35-ffa7-42c6-9c3b-d2e706d958de name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 02:15:33 old-k8s-version-901761 crio[644]: time="2024-07-17 02:15:33.736024415Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d6ca36b-7dfd-460a-a5bb-8f52878bddb2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:15:33 old-k8s-version-901761 crio[644]: time="2024-07-17 02:15:33.736090320Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d6ca36b-7dfd-460a-a5bb-8f52878bddb2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 02:15:33 old-k8s-version-901761 crio[644]: time="2024-07-17 02:15:33.736125108Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5d6ca36b-7dfd-460a-a5bb-8f52878bddb2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul17 01:55] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.060095] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.053379] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.696762] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.451232] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.600989] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.496189] systemd-fstab-generator[565]: Ignoring "noauto" option for root device
	[  +0.063928] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058024] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.198095] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.159661] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.276256] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[Jul17 01:56] systemd-fstab-generator[830]: Ignoring "noauto" option for root device
	[  +0.060021] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.876226] systemd-fstab-generator[954]: Ignoring "noauto" option for root device
	[ +12.568707] kauditd_printk_skb: 46 callbacks suppressed
	[Jul17 02:00] systemd-fstab-generator[5018]: Ignoring "noauto" option for root device
	[Jul17 02:02] systemd-fstab-generator[5295]: Ignoring "noauto" option for root device
	[  +0.065589] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 02:15:33 up 19 min,  0 users,  load average: 0.15, 0.07, 0.01
	Linux old-k8s-version-901761 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 17 02:15:31 old-k8s-version-901761 kubelet[6797]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Jul 17 02:15:31 old-k8s-version-901761 kubelet[6797]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc0001aafc0, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc00092e2d0, 0x24, 0x0, ...)
	Jul 17 02:15:31 old-k8s-version-901761 kubelet[6797]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Jul 17 02:15:31 old-k8s-version-901761 kubelet[6797]: net.(*Dialer).DialContext(0xc000c1e5a0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc00092e2d0, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 17 02:15:31 old-k8s-version-901761 kubelet[6797]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Jul 17 02:15:31 old-k8s-version-901761 kubelet[6797]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000c23b20, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc00092e2d0, 0x24, 0x60, 0x7feb802ab718, 0x118, ...)
	Jul 17 02:15:31 old-k8s-version-901761 kubelet[6797]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Jul 17 02:15:31 old-k8s-version-901761 kubelet[6797]: net/http.(*Transport).dial(0xc00018d7c0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc00092e2d0, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 17 02:15:31 old-k8s-version-901761 kubelet[6797]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Jul 17 02:15:31 old-k8s-version-901761 kubelet[6797]: net/http.(*Transport).dialConn(0xc00018d7c0, 0x4f7fe00, 0xc000052030, 0x0, 0xc00009f320, 0x5, 0xc00092e2d0, 0x24, 0x0, 0xc0004db0e0, ...)
	Jul 17 02:15:31 old-k8s-version-901761 kubelet[6797]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Jul 17 02:15:31 old-k8s-version-901761 kubelet[6797]: net/http.(*Transport).dialConnFor(0xc00018d7c0, 0xc000297970)
	Jul 17 02:15:31 old-k8s-version-901761 kubelet[6797]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Jul 17 02:15:31 old-k8s-version-901761 kubelet[6797]: created by net/http.(*Transport).queueForDial
	Jul 17 02:15:31 old-k8s-version-901761 kubelet[6797]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Jul 17 02:15:31 old-k8s-version-901761 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 17 02:15:31 old-k8s-version-901761 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 17 02:15:31 old-k8s-version-901761 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 139.
	Jul 17 02:15:31 old-k8s-version-901761 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 17 02:15:31 old-k8s-version-901761 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 17 02:15:31 old-k8s-version-901761 kubelet[6807]: I0717 02:15:31.816786    6807 server.go:416] Version: v1.20.0
	Jul 17 02:15:31 old-k8s-version-901761 kubelet[6807]: I0717 02:15:31.817133    6807 server.go:837] Client rotation is on, will bootstrap in background
	Jul 17 02:15:31 old-k8s-version-901761 kubelet[6807]: I0717 02:15:31.819082    6807 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 17 02:15:31 old-k8s-version-901761 kubelet[6807]: W0717 02:15:31.820089    6807 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 17 02:15:31 old-k8s-version-901761 kubelet[6807]: I0717 02:15:31.820205    6807 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-901761 -n old-k8s-version-901761
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-901761 -n old-k8s-version-901761: exit status 2 (230.471197ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-901761" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (141.87s)
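Note: this failure traces back to the kubelet crash-loop captured above: kubeadm's wait-control-plane phase timed out, crictl reported an empty container list, and systemd shows the kubelet restart counter at 139. A minimal troubleshooting sketch, using only the commands and the cgroup-driver suggestion already printed in this log (profile name and CRI-O socket path as shown above; a retry would keep the test's other start flags):

	# Inspect why the kubelet keeps exiting on old-k8s-version-901761
	systemctl status kubelet
	journalctl -xeu kubelet
	# Check whether CRI-O ever created any control-plane containers
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Retry with the systemd cgroup driver, as the log suggests (related issue: https://github.com/kubernetes/minikube/issues/4172)
	out/minikube-linux-amd64 start -p old-k8s-version-901761 --extra-config=kubelet.cgroup-driver=systemd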

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (26.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-386113 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p newest-cni-386113 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: exit status 80 (26.030579496s)

                                                
                                                
-- stdout --
	* [newest-cni-386113] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19264
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "newest-cni-386113" primary control-plane node in "newest-cni-386113" cluster
	* Restarting existing kvm2 VM for "newest-cni-386113" ...
	* Updating the running kvm2 "newest-cni-386113" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 02:16:33.664357   78861 out.go:291] Setting OutFile to fd 1 ...
	I0717 02:16:33.664449   78861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 02:16:33.664456   78861 out.go:304] Setting ErrFile to fd 2...
	I0717 02:16:33.664460   78861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 02:16:33.664627   78861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 02:16:33.665135   78861 out.go:298] Setting JSON to false
	I0717 02:16:33.665986   78861 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7136,"bootTime":1721175458,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 02:16:33.666038   78861 start.go:139] virtualization: kvm guest
	I0717 02:16:33.668138   78861 out.go:177] * [newest-cni-386113] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 02:16:33.669586   78861 notify.go:220] Checking for updates...
	I0717 02:16:33.669608   78861 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 02:16:33.671025   78861 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 02:16:33.672727   78861 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 02:16:33.674166   78861 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 02:16:33.675622   78861 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 02:16:33.677043   78861 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 02:16:33.678758   78861 config.go:182] Loaded profile config "newest-cni-386113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 02:16:33.679232   78861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:16:33.679275   78861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:16:33.694847   78861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33475
	I0717 02:16:33.695238   78861 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:16:33.695845   78861 main.go:141] libmachine: Using API Version  1
	I0717 02:16:33.695867   78861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:16:33.696161   78861 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:16:33.696356   78861 main.go:141] libmachine: (newest-cni-386113) Calling .DriverName
	I0717 02:16:33.696601   78861 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 02:16:33.696880   78861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:16:33.696919   78861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:16:33.711749   78861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36289
	I0717 02:16:33.712173   78861 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:16:33.712717   78861 main.go:141] libmachine: Using API Version  1
	I0717 02:16:33.712735   78861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:16:33.713205   78861 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:16:33.713446   78861 main.go:141] libmachine: (newest-cni-386113) Calling .DriverName
	I0717 02:16:33.749065   78861 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 02:16:33.750444   78861 start.go:297] selected driver: kvm2
	I0717 02:16:33.750456   78861 start.go:901] validating driver "kvm2" against &{Name:newest-cni-386113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-386113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.112 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 02:16:33.750577   78861 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 02:16:33.751254   78861 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 02:16:33.751314   78861 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19264-3908/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 02:16:33.766259   78861 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 02:16:33.766639   78861 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0717 02:16:33.766666   78861 cni.go:84] Creating CNI manager for ""
	I0717 02:16:33.766673   78861 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 02:16:33.766710   78861 start.go:340] cluster config:
	{Name:newest-cni-386113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-386113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.112 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 02:16:33.766806   78861 iso.go:125] acquiring lock: {Name:mk1e382ea32906eee794c9b90164432d2562b456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 02:16:33.768749   78861 out.go:177] * Starting "newest-cni-386113" primary control-plane node in "newest-cni-386113" cluster
	I0717 02:16:33.769983   78861 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 02:16:33.770010   78861 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0717 02:16:33.770017   78861 cache.go:56] Caching tarball of preloaded images
	I0717 02:16:33.770097   78861 preload.go:172] Found /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 02:16:33.770111   78861 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0717 02:16:33.770204   78861 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/newest-cni-386113/config.json ...
	I0717 02:16:33.770367   78861 start.go:360] acquireMachinesLock for newest-cni-386113: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 02:16:33.770407   78861 start.go:364] duration metric: took 22.027µs to acquireMachinesLock for "newest-cni-386113"
	I0717 02:16:33.770425   78861 start.go:96] Skipping create...Using existing machine configuration
	I0717 02:16:33.770433   78861 fix.go:54] fixHost starting: 
	I0717 02:16:33.770726   78861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:16:33.770771   78861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:16:33.787241   78861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34273
	I0717 02:16:33.787726   78861 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:16:33.788321   78861 main.go:141] libmachine: Using API Version  1
	I0717 02:16:33.788341   78861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:16:33.788689   78861 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:16:33.788891   78861 main.go:141] libmachine: (newest-cni-386113) Calling .DriverName
	I0717 02:16:33.789067   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetState
	I0717 02:16:33.790614   78861 fix.go:112] recreateIfNeeded on newest-cni-386113: state=Stopped err=<nil>
	I0717 02:16:33.790649   78861 main.go:141] libmachine: (newest-cni-386113) Calling .DriverName
	W0717 02:16:33.790810   78861 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 02:16:33.793055   78861 out.go:177] * Restarting existing kvm2 VM for "newest-cni-386113" ...
	I0717 02:16:33.794666   78861 main.go:141] libmachine: (newest-cni-386113) Calling .Start
	I0717 02:16:33.794840   78861 main.go:141] libmachine: (newest-cni-386113) Ensuring networks are active...
	I0717 02:16:33.795550   78861 main.go:141] libmachine: (newest-cni-386113) Ensuring network default is active
	I0717 02:16:33.795910   78861 main.go:141] libmachine: (newest-cni-386113) Ensuring network mk-newest-cni-386113 is active
	I0717 02:16:33.796307   78861 main.go:141] libmachine: (newest-cni-386113) Getting domain xml...
	I0717 02:16:33.796893   78861 main.go:141] libmachine: (newest-cni-386113) Creating domain...
	I0717 02:16:35.003495   78861 main.go:141] libmachine: (newest-cni-386113) Waiting to get IP...
	I0717 02:16:35.004405   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:35.004811   78861 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:16:35.004888   78861 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:16:35.004821   78895 retry.go:31] will retry after 246.296142ms: waiting for machine to come up
	I0717 02:16:35.252230   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:35.252787   78861 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:16:35.252828   78861 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:16:35.252747   78895 retry.go:31] will retry after 319.046324ms: waiting for machine to come up
	I0717 02:16:35.573136   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:35.573533   78861 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:16:35.573556   78861 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:16:35.573490   78895 retry.go:31] will retry after 352.340084ms: waiting for machine to come up
	I0717 02:16:35.926908   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:35.927427   78861 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:16:35.927456   78861 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:16:35.927383   78895 retry.go:31] will retry after 420.053145ms: waiting for machine to come up
	I0717 02:16:36.349018   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:36.349474   78861 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:16:36.349505   78861 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:16:36.349418   78895 retry.go:31] will retry after 474.535661ms: waiting for machine to come up
	I0717 02:16:36.825920   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:36.826521   78861 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:16:36.826544   78861 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:16:36.826463   78895 retry.go:31] will retry after 862.224729ms: waiting for machine to come up
	I0717 02:16:37.690326   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:37.690972   78861 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:16:37.690998   78861 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:16:37.690810   78895 retry.go:31] will retry after 1.119857631s: waiting for machine to come up
	I0717 02:16:38.812589   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:38.814233   78861 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:16:38.814264   78861 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:16:38.814190   78895 retry.go:31] will retry after 1.132154413s: waiting for machine to come up
	I0717 02:16:39.947906   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:39.948356   78861 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:16:39.948382   78861 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:16:39.948317   78895 retry.go:31] will retry after 1.85893584s: waiting for machine to come up
	I0717 02:16:41.809508   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:41.810006   78861 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:16:41.810034   78861 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:16:41.809950   78895 retry.go:31] will retry after 1.472485012s: waiting for machine to come up
	I0717 02:16:43.284693   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:43.285226   78861 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:16:43.285254   78861 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:16:43.285185   78895 retry.go:31] will retry after 1.846125187s: waiting for machine to come up
	I0717 02:16:45.133096   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:45.133545   78861 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:16:45.133574   78861 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:16:45.133489   78895 retry.go:31] will retry after 2.958242893s: waiting for machine to come up
	I0717 02:16:48.092988   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:48.093437   78861 main.go:141] libmachine: (newest-cni-386113) DBG | unable to find current IP address of domain newest-cni-386113 in network mk-newest-cni-386113
	I0717 02:16:48.093477   78861 main.go:141] libmachine: (newest-cni-386113) DBG | I0717 02:16:48.093392   78895 retry.go:31] will retry after 4.488434068s: waiting for machine to come up
	I0717 02:16:52.583095   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:52.583530   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has current primary IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:52.583548   78861 main.go:141] libmachine: (newest-cni-386113) Found IP for machine: 192.168.50.112
	I0717 02:16:52.583562   78861 main.go:141] libmachine: (newest-cni-386113) Reserving static IP address...
	I0717 02:16:52.584040   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "newest-cni-386113", mac: "52:54:00:b3:8c:c1", ip: "192.168.50.112"} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:52.584059   78861 main.go:141] libmachine: (newest-cni-386113) Reserved static IP address: 192.168.50.112
	I0717 02:16:52.584071   78861 main.go:141] libmachine: (newest-cni-386113) DBG | skip adding static IP to network mk-newest-cni-386113 - found existing host DHCP lease matching {name: "newest-cni-386113", mac: "52:54:00:b3:8c:c1", ip: "192.168.50.112"}
	I0717 02:16:52.584081   78861 main.go:141] libmachine: (newest-cni-386113) DBG | Getting to WaitForSSH function...
	I0717 02:16:52.584090   78861 main.go:141] libmachine: (newest-cni-386113) Waiting for SSH to be available...
	I0717 02:16:52.586246   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:52.586535   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:52.586586   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:52.586669   78861 main.go:141] libmachine: (newest-cni-386113) DBG | Using SSH client type: external
	I0717 02:16:52.586695   78861 main.go:141] libmachine: (newest-cni-386113) DBG | Using SSH private key: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/newest-cni-386113/id_rsa (-rw-------)
	I0717 02:16:52.586728   78861 main.go:141] libmachine: (newest-cni-386113) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19264-3908/.minikube/machines/newest-cni-386113/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 02:16:52.586740   78861 main.go:141] libmachine: (newest-cni-386113) DBG | About to run SSH command:
	I0717 02:16:52.586748   78861 main.go:141] libmachine: (newest-cni-386113) DBG | exit 0
	I0717 02:16:52.714731   78861 main.go:141] libmachine: (newest-cni-386113) DBG | SSH cmd err, output: <nil>: 
	I0717 02:16:52.715163   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetConfigRaw
	I0717 02:16:52.715809   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetIP
	I0717 02:16:52.718723   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:52.719123   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:52.719157   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:52.719415   78861 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/newest-cni-386113/config.json ...
	I0717 02:16:52.719618   78861 machine.go:94] provisionDockerMachine start ...
	I0717 02:16:52.719636   78861 main.go:141] libmachine: (newest-cni-386113) Calling .DriverName
	I0717 02:16:52.719850   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHHostname
	I0717 02:16:52.722376   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:52.722710   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:52.722737   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:52.722880   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHPort
	I0717 02:16:52.723032   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:52.723207   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:52.723312   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHUsername
	I0717 02:16:52.723509   78861 main.go:141] libmachine: Using SSH client type: native
	I0717 02:16:52.723798   78861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.112 22 <nil> <nil>}
	I0717 02:16:52.723816   78861 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 02:16:52.834849   78861 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 02:16:52.834874   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetMachineName
	I0717 02:16:52.835120   78861 buildroot.go:166] provisioning hostname "newest-cni-386113"
	I0717 02:16:52.835148   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetMachineName
	I0717 02:16:52.835338   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHHostname
	I0717 02:16:52.837964   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:52.838286   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:52.838320   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:52.838412   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHPort
	I0717 02:16:52.838602   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:52.838751   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:52.838915   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHUsername
	I0717 02:16:52.839087   78861 main.go:141] libmachine: Using SSH client type: native
	I0717 02:16:52.839321   78861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.112 22 <nil> <nil>}
	I0717 02:16:52.839340   78861 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-386113 && echo "newest-cni-386113" | sudo tee /etc/hostname
	I0717 02:16:52.965401   78861 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-386113
	
	I0717 02:16:52.965428   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHHostname
	I0717 02:16:52.968158   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:52.968461   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:52.968494   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:52.968636   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHPort
	I0717 02:16:52.968841   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:52.969057   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:52.969245   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHUsername
	I0717 02:16:52.969409   78861 main.go:141] libmachine: Using SSH client type: native
	I0717 02:16:52.969578   78861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.112 22 <nil> <nil>}
	I0717 02:16:52.969593   78861 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-386113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-386113/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-386113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 02:16:53.092635   78861 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 02:16:53.092661   78861 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 02:16:53.092716   78861 buildroot.go:174] setting up certificates
	I0717 02:16:53.092728   78861 provision.go:84] configureAuth start
	I0717 02:16:53.092738   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetMachineName
	I0717 02:16:53.093075   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetIP
	I0717 02:16:53.095810   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:53.096182   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:53.096207   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:53.096300   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHHostname
	I0717 02:16:53.098626   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:53.098980   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:53.099009   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:53.099217   78861 provision.go:143] copyHostCerts
	I0717 02:16:53.099310   78861 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 02:16:53.099326   78861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 02:16:53.099399   78861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 02:16:53.099528   78861 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 02:16:53.099540   78861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 02:16:53.099586   78861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 02:16:53.099699   78861 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 02:16:53.099708   78861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 02:16:53.099740   78861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 02:16:53.099822   78861 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.newest-cni-386113 san=[127.0.0.1 192.168.50.112 localhost minikube newest-cni-386113]
	I0717 02:16:53.185051   78861 provision.go:177] copyRemoteCerts
	I0717 02:16:53.185125   78861 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 02:16:53.185159   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHHostname
	I0717 02:16:53.188300   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:53.188693   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:53.188726   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:53.188840   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHPort
	I0717 02:16:53.189035   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:53.189244   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHUsername
	I0717 02:16:53.189409   78861 sshutil.go:53] new ssh client: &{IP:192.168.50.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/newest-cni-386113/id_rsa Username:docker}
	I0717 02:16:53.277553   78861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 02:16:53.303495   78861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 02:16:53.330799   78861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 02:16:53.355782   78861 provision.go:87] duration metric: took 263.042459ms to configureAuth
	I0717 02:16:53.355810   78861 buildroot.go:189] setting minikube options for container-runtime
	I0717 02:16:53.356082   78861 config.go:182] Loaded profile config "newest-cni-386113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 02:16:53.356163   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHHostname
	I0717 02:16:53.358608   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:53.358958   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:53.358987   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:53.359134   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHPort
	I0717 02:16:53.359315   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:53.359486   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:53.359618   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHUsername
	I0717 02:16:53.359777   78861 main.go:141] libmachine: Using SSH client type: native
	I0717 02:16:53.359926   78861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.112 22 <nil> <nil>}
	I0717 02:16:53.359940   78861 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 02:16:53.540166   78861 main.go:141] libmachine: SSH cmd err, output: Process exited with status 1: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0717 02:16:53.540199   78861 buildroot.go:191] Error setting container-runtime options during provisioning ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	I0717 02:16:53.540211   78861 machine.go:97] duration metric: took 820.581159ms to provisionDockerMachine
	I0717 02:16:53.540238   78861 fix.go:56] duration metric: took 19.769804127s for fixHost
	I0717 02:16:53.540246   78861 start.go:83] releasing machines lock for "newest-cni-386113", held for 19.769827497s
	W0717 02:16:53.540271   78861 start.go:714] error starting host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	W0717 02:16:53.540400   78861 out.go:239] ! StartHost failed, but will try again: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	! StartHost failed, but will try again: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0717 02:16:53.540418   78861 start.go:729] Will try again in 5 seconds ...
	I0717 02:16:58.544971   78861 start.go:360] acquireMachinesLock for newest-cni-386113: {Name:mk3d74c1dda2b8a7af17fb95b766ad68098974c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 02:16:58.545077   78861 start.go:364] duration metric: took 51.633µs to acquireMachinesLock for "newest-cni-386113"
	I0717 02:16:58.545096   78861 start.go:96] Skipping create...Using existing machine configuration
	I0717 02:16:58.545104   78861 fix.go:54] fixHost starting: 
	I0717 02:16:58.545398   78861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:16:58.545429   78861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:16:58.560097   78861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33837
	I0717 02:16:58.560605   78861 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:16:58.561105   78861 main.go:141] libmachine: Using API Version  1
	I0717 02:16:58.561121   78861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:16:58.561427   78861 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:16:58.561629   78861 main.go:141] libmachine: (newest-cni-386113) Calling .DriverName
	I0717 02:16:58.561783   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetState
	I0717 02:16:58.563502   78861 fix.go:112] recreateIfNeeded on newest-cni-386113: state=Running err=<nil>
	W0717 02:16:58.563520   78861 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 02:16:58.565319   78861 out.go:177] * Updating the running kvm2 "newest-cni-386113" VM ...
	I0717 02:16:58.566697   78861 machine.go:94] provisionDockerMachine start ...
	I0717 02:16:58.566720   78861 main.go:141] libmachine: (newest-cni-386113) Calling .DriverName
	I0717 02:16:58.566902   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHHostname
	I0717 02:16:58.569308   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:58.569691   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:58.569718   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:58.569817   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHPort
	I0717 02:16:58.569965   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:58.570117   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:58.570227   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHUsername
	I0717 02:16:58.570400   78861 main.go:141] libmachine: Using SSH client type: native
	I0717 02:16:58.570567   78861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.112 22 <nil> <nil>}
	I0717 02:16:58.570581   78861 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 02:16:58.687141   78861 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-386113
	
	I0717 02:16:58.687172   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetMachineName
	I0717 02:16:58.687397   78861 buildroot.go:166] provisioning hostname "newest-cni-386113"
	I0717 02:16:58.687421   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetMachineName
	I0717 02:16:58.687600   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHHostname
	I0717 02:16:58.689806   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:58.690088   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:58.690114   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:58.690287   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHPort
	I0717 02:16:58.690454   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:58.690586   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:58.690722   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHUsername
	I0717 02:16:58.690888   78861 main.go:141] libmachine: Using SSH client type: native
	I0717 02:16:58.691060   78861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.112 22 <nil> <nil>}
	I0717 02:16:58.691076   78861 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-386113 && echo "newest-cni-386113" | sudo tee /etc/hostname
	I0717 02:16:58.817937   78861 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-386113
	
	I0717 02:16:58.817988   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHHostname
	I0717 02:16:58.820660   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:58.820992   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:58.821015   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:58.821204   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHPort
	I0717 02:16:58.821406   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:58.821555   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:58.821666   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHUsername
	I0717 02:16:58.821804   78861 main.go:141] libmachine: Using SSH client type: native
	I0717 02:16:58.821971   78861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.112 22 <nil> <nil>}
	I0717 02:16:58.821986   78861 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-386113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-386113/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-386113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 02:16:58.939320   78861 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 02:16:58.939348   78861 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19264-3908/.minikube CaCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19264-3908/.minikube}
	I0717 02:16:58.939369   78861 buildroot.go:174] setting up certificates
	I0717 02:16:58.939380   78861 provision.go:84] configureAuth start
	I0717 02:16:58.939391   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetMachineName
	I0717 02:16:58.939618   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetIP
	I0717 02:16:58.942050   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:58.942346   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:58.942373   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:58.942503   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHHostname
	I0717 02:16:58.944829   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:58.945147   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:58.945173   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:58.945296   78861 provision.go:143] copyHostCerts
	I0717 02:16:58.945352   78861 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem, removing ...
	I0717 02:16:58.945361   78861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem
	I0717 02:16:58.945413   78861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/key.pem (1675 bytes)
	I0717 02:16:58.945491   78861 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem, removing ...
	I0717 02:16:58.945498   78861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem
	I0717 02:16:58.945517   78861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/ca.pem (1078 bytes)
	I0717 02:16:58.945563   78861 exec_runner.go:144] found /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem, removing ...
	I0717 02:16:58.945569   78861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem
	I0717 02:16:58.945586   78861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19264-3908/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19264-3908/.minikube/cert.pem (1123 bytes)
	I0717 02:16:58.945628   78861 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca-key.pem org=jenkins.newest-cni-386113 san=[127.0.0.1 192.168.50.112 localhost minikube newest-cni-386113]
	I0717 02:16:59.295441   78861 provision.go:177] copyRemoteCerts
	I0717 02:16:59.295500   78861 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 02:16:59.295538   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHHostname
	I0717 02:16:59.298262   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:59.298616   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:59.298650   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:59.298814   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHPort
	I0717 02:16:59.299041   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:59.299242   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHUsername
	I0717 02:16:59.299388   78861 sshutil.go:53] new ssh client: &{IP:192.168.50.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/newest-cni-386113/id_rsa Username:docker}
	I0717 02:16:59.385173   78861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 02:16:59.409457   78861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 02:16:59.432649   78861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19264-3908/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 02:16:59.456653   78861 provision.go:87] duration metric: took 517.259084ms to configureAuth
	I0717 02:16:59.456687   78861 buildroot.go:189] setting minikube options for container-runtime
	I0717 02:16:59.456873   78861 config.go:182] Loaded profile config "newest-cni-386113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 02:16:59.456951   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHHostname
	I0717 02:16:59.459938   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:59.460310   78861 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:16:59.460340   78861 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:16:59.460545   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHPort
	I0717 02:16:59.460743   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:59.460975   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:16:59.461135   78861 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHUsername
	I0717 02:16:59.461322   78861 main.go:141] libmachine: Using SSH client type: native
	I0717 02:16:59.461487   78861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.112 22 <nil> <nil>}
	I0717 02:16:59.461501   78861 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 02:16:59.645202   78861 main.go:141] libmachine: SSH cmd err, output: Process exited with status 1: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0717 02:16:59.645233   78861 buildroot.go:191] Error setting container-runtime options during provisioning ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	I0717 02:16:59.645243   78861 machine.go:97] duration metric: took 1.07853223s to provisionDockerMachine
	I0717 02:16:59.645273   78861 fix.go:56] duration metric: took 1.100163436s for fixHost
	I0717 02:16:59.645279   78861 start.go:83] releasing machines lock for "newest-cni-386113", held for 1.100194017s
	W0717 02:16:59.645358   78861 out.go:239] * Failed to start kvm2 VM. Running "minikube delete -p newest-cni-386113" may fix it: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	* Failed to start kvm2 VM. Running "minikube delete -p newest-cni-386113" may fix it: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0717 02:16:59.647778   78861 out.go:177] 
	W0717 02:16:59.649249   78861 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	W0717 02:16:59.649265   78861 out.go:239] * 
	* 
	W0717 02:16:59.650065   78861 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 02:16:59.652033   78861 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p newest-cni-386113 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-386113 -n newest-cni-386113
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-386113 -n newest-cni-386113: exit status 6 (224.399859ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 02:16:59.871962   79261 status.go:417] kubeconfig endpoint: get endpoint: "newest-cni-386113" does not appear in /home/jenkins/minikube-integration/19264-3908/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "newest-cni-386113" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (26.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.44s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-386113 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-386113 -n newest-cni-386113
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-386113 -n newest-cni-386113: exit status 6 (218.951918ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 02:17:00.313676   79315 status.go:417] kubeconfig endpoint: get endpoint: "newest-cni-386113" does not appear in /home/jenkins/minikube-integration/19264-3908/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "newest-cni-386113" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.44s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (1.7s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-386113 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-386113 --alsologtostderr -v=1: exit status 80 (1.255368966s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-386113 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 02:17:00.364712   79345 out.go:291] Setting OutFile to fd 1 ...
	I0717 02:17:00.364829   79345 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 02:17:00.364837   79345 out.go:304] Setting ErrFile to fd 2...
	I0717 02:17:00.364841   79345 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 02:17:00.365383   79345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 02:17:00.365763   79345 out.go:298] Setting JSON to false
	I0717 02:17:00.365807   79345 mustload.go:65] Loading cluster: newest-cni-386113
	I0717 02:17:00.366337   79345 config.go:182] Loaded profile config "newest-cni-386113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 02:17:00.366760   79345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:17:00.366802   79345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:17:00.383294   79345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36761
	I0717 02:17:00.383788   79345 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:17:00.384291   79345 main.go:141] libmachine: Using API Version  1
	I0717 02:17:00.384314   79345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:17:00.384617   79345 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:17:00.384806   79345 main.go:141] libmachine: (newest-cni-386113) Calling .GetState
	I0717 02:17:00.386490   79345 host.go:66] Checking if "newest-cni-386113" exists ...
	I0717 02:17:00.386819   79345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:17:00.386867   79345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:17:00.401287   79345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41533
	I0717 02:17:00.401676   79345 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:17:00.402130   79345 main.go:141] libmachine: Using API Version  1
	I0717 02:17:00.402150   79345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:17:00.402420   79345 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:17:00.402577   79345 main.go:141] libmachine: (newest-cni-386113) Calling .DriverName
	I0717 02:17:00.403262   79345 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.33.1-1721146474-19264/minikube-v1.33.1-1721146474-19264-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.33.1-1721146474-19264-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string:/home/jenkins:/minikube-host mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-386113 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0717 02:17:00.406208   79345 out.go:177] * Pausing node newest-cni-386113 ... 
	I0717 02:17:00.407756   79345 host.go:66] Checking if "newest-cni-386113" exists ...
	I0717 02:17:00.408018   79345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 02:17:00.408048   79345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 02:17:00.423718   79345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41903
	I0717 02:17:00.424089   79345 main.go:141] libmachine: () Calling .GetVersion
	I0717 02:17:00.424549   79345 main.go:141] libmachine: Using API Version  1
	I0717 02:17:00.424572   79345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 02:17:00.424859   79345 main.go:141] libmachine: () Calling .GetMachineName
	I0717 02:17:00.425039   79345 main.go:141] libmachine: (newest-cni-386113) Calling .DriverName
	I0717 02:17:00.425239   79345 ssh_runner.go:195] Run: systemctl --version
	I0717 02:17:00.425268   79345 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHHostname
	I0717 02:17:00.427940   79345 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:17:00.428293   79345 main.go:141] libmachine: (newest-cni-386113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:8c:c1", ip: ""} in network mk-newest-cni-386113: {Iface:virbr1 ExpiryTime:2024-07-17 03:16:44 +0000 UTC Type:0 Mac:52:54:00:b3:8c:c1 Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:newest-cni-386113 Clientid:01:52:54:00:b3:8c:c1}
	I0717 02:17:00.428330   79345 main.go:141] libmachine: (newest-cni-386113) DBG | domain newest-cni-386113 has defined IP address 192.168.50.112 and MAC address 52:54:00:b3:8c:c1 in network mk-newest-cni-386113
	I0717 02:17:00.428440   79345 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHPort
	I0717 02:17:00.428588   79345 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHKeyPath
	I0717 02:17:00.428730   79345 main.go:141] libmachine: (newest-cni-386113) Calling .GetSSHUsername
	I0717 02:17:00.428855   79345 sshutil.go:53] new ssh client: &{IP:192.168.50.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/newest-cni-386113/id_rsa Username:docker}
	I0717 02:17:00.513342   79345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:17:00.529400   79345 pause.go:51] kubelet running: false
	I0717 02:17:00.529450   79345 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0717 02:17:00.544408   79345 retry.go:31] will retry after 277.511655ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I0717 02:17:00.822955   79345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:17:00.837933   79345 pause.go:51] kubelet running: false
	I0717 02:17:00.837993   79345 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0717 02:17:00.852614   79345 retry.go:31] will retry after 291.635357ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I0717 02:17:01.145174   79345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:17:01.161877   79345 pause.go:51] kubelet running: false
	I0717 02:17:01.161942   79345 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0717 02:17:01.176713   79345 retry.go:31] will retry after 364.367755ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I0717 02:17:01.541337   79345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 02:17:01.557111   79345 pause.go:51] kubelet running: false
	I0717 02:17:01.557190   79345 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0717 02:17:01.574268   79345 out.go:177] 
	W0717 02:17:01.575733   79345 out.go:239] X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	W0717 02:17:01.575750   79345 out.go:239] * 
	* 
	W0717 02:17:01.578705   79345 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 02:17:01.580077   79345 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-linux-amd64 pause -p newest-cni-386113 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-386113 -n newest-cni-386113
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-386113 -n newest-cni-386113: exit status 6 (222.545025ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 02:17:01.789578   79382 status.go:417] kubeconfig endpoint: get endpoint: "newest-cni-386113" does not appear in /home/jenkins/minikube-integration/19264-3908/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "newest-cni-386113" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-386113 -n newest-cni-386113
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-386113 -n newest-cni-386113: exit status 6 (223.962396ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 02:17:02.015967   79412 status.go:417] kubeconfig endpoint: get endpoint: "newest-cni-386113" does not appear in /home/jenkins/minikube-integration/19264-3908/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "newest-cni-386113" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (1.70s)
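
The pause failure above reduces to a stale kubeconfig entry: `minikube status` exits 6 because the "newest-cni-386113" endpoint is no longer present in the kubeconfig, exactly as the warning captured in the post-mortem output says. A minimal manual recovery sketch, assuming the profile still exists in this workspace (these commands are illustrative and were not part of the recorded run):

	# rewrite the kubeconfig entry for the profile, then re-check host/apiserver state
	out/minikube-linux-amd64 update-context -p newest-cni-386113
	out/minikube-linux-amd64 status -p newest-cni-386113

`minikube update-context` is the fix the warning itself recommends; it refreshes the server address stored for the profile's kubeconfig context.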

                                                
                                    

Test pass (246/320)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 77.03
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.2/json-events 18.33
13 TestDownloadOnly/v1.30.2/preload-exists 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.06
18 TestDownloadOnly/v1.30.2/DeleteAll 0.13
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.31.0-beta.0/json-events 68.81
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.13
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.55
31 TestOffline 119.99
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 211.34
38 TestAddons/parallel/Registry 24.94
40 TestAddons/parallel/InspektorGadget 12.43
42 TestAddons/parallel/HelmTiller 22.24
44 TestAddons/parallel/CSI 65.62
45 TestAddons/parallel/Headlamp 19.04
46 TestAddons/parallel/CloudSpanner 6.96
47 TestAddons/parallel/LocalPath 26.2
48 TestAddons/parallel/NvidiaDevicePlugin 6.72
49 TestAddons/parallel/Yakd 6.01
53 TestAddons/serial/GCPAuth/Namespaces 0.11
55 TestCertOptions 50.9
56 TestCertExpiration 318
58 TestForceSystemdFlag 93.8
59 TestForceSystemdEnv 46.02
61 TestKVMDriverInstallOrUpdate 9.38
65 TestErrorSpam/setup 41.12
66 TestErrorSpam/start 0.33
67 TestErrorSpam/status 0.71
68 TestErrorSpam/pause 1.59
69 TestErrorSpam/unpause 1.59
70 TestErrorSpam/stop 5.27
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 55.69
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 35.86
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.07
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.36
82 TestFunctional/serial/CacheCmd/cache/add_local 2.87
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.04
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.56
87 TestFunctional/serial/CacheCmd/cache/delete 0.08
88 TestFunctional/serial/MinikubeKubectlCmd 0.1
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
90 TestFunctional/serial/ExtraConfig 59.47
91 TestFunctional/serial/ComponentHealth 0.06
92 TestFunctional/serial/LogsCmd 1.42
93 TestFunctional/serial/LogsFileCmd 1.44
94 TestFunctional/serial/InvalidService 3.53
96 TestFunctional/parallel/ConfigCmd 0.28
97 TestFunctional/parallel/DashboardCmd 22.41
98 TestFunctional/parallel/DryRun 0.28
99 TestFunctional/parallel/InternationalLanguage 0.14
100 TestFunctional/parallel/StatusCmd 0.74
104 TestFunctional/parallel/ServiceCmdConnect 66.41
105 TestFunctional/parallel/AddonsCmd 0.11
108 TestFunctional/parallel/SSHCmd 0.38
109 TestFunctional/parallel/CpCmd 1.15
110 TestFunctional/parallel/MySQL 68.74
111 TestFunctional/parallel/FileSync 0.19
112 TestFunctional/parallel/CertSync 1.18
116 TestFunctional/parallel/NodeLabels 0.07
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.39
120 TestFunctional/parallel/License 0.82
121 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
122 TestFunctional/parallel/Version/short 0.04
123 TestFunctional/parallel/Version/components 0.63
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
129 TestFunctional/parallel/ImageCommands/ImageBuild 3.92
130 TestFunctional/parallel/ImageCommands/Setup 2.88
138 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
139 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
140 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
141 TestFunctional/parallel/ProfileCmd/profile_not_create 0.26
142 TestFunctional/parallel/ProfileCmd/profile_list 0.25
143 TestFunctional/parallel/ProfileCmd/profile_json_output 0.25
144 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.23
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.84
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.05
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.55
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.43
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.77
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
151 TestFunctional/parallel/ServiceCmd/DeployApp 60.16
152 TestFunctional/parallel/MountCmd/any-port 9.46
153 TestFunctional/parallel/ServiceCmd/List 0.43
154 TestFunctional/parallel/ServiceCmd/JSONOutput 0.44
155 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
156 TestFunctional/parallel/ServiceCmd/Format 0.29
157 TestFunctional/parallel/ServiceCmd/URL 0.32
158 TestFunctional/parallel/MountCmd/specific-port 1.8
159 TestFunctional/parallel/MountCmd/VerifyCleanup 1.11
160 TestFunctional/delete_echo-server_images 0.03
161 TestFunctional/delete_my-image_image 0.01
162 TestFunctional/delete_minikube_cached_images 0.01
166 TestMultiControlPlane/serial/StartCluster 293.06
167 TestMultiControlPlane/serial/DeployApp 7.5
168 TestMultiControlPlane/serial/PingHostFromPods 1.18
169 TestMultiControlPlane/serial/AddWorkerNode 61.04
170 TestMultiControlPlane/serial/NodeLabels 0.06
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.53
172 TestMultiControlPlane/serial/CopyFile 12.39
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.45
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.38
178 TestMultiControlPlane/serial/DeleteSecondaryNode 16.95
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
181 TestMultiControlPlane/serial/RestartCluster 352.03
182 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.37
183 TestMultiControlPlane/serial/AddSecondaryNode 81.5
184 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.52
188 TestJSONOutput/start/Command 54.54
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.68
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.63
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 7.33
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.19
216 TestMainNoArgs 0.04
217 TestMinikubeProfile 84.75
220 TestMountStart/serial/StartWithMountFirst 31.06
221 TestMountStart/serial/VerifyMountFirst 0.42
222 TestMountStart/serial/StartWithMountSecond 24.68
223 TestMountStart/serial/VerifyMountSecond 0.37
224 TestMountStart/serial/DeleteFirst 0.67
225 TestMountStart/serial/VerifyMountPostDelete 0.37
226 TestMountStart/serial/Stop 1.28
227 TestMountStart/serial/RestartStopped 22.59
228 TestMountStart/serial/VerifyMountPostStop 0.37
231 TestMultiNode/serial/FreshStart2Nodes 124.65
232 TestMultiNode/serial/DeployApp2Nodes 6.82
233 TestMultiNode/serial/PingHostFrom2Pods 0.78
234 TestMultiNode/serial/AddNode 49.95
235 TestMultiNode/serial/MultiNodeLabels 0.06
236 TestMultiNode/serial/ProfileList 0.22
237 TestMultiNode/serial/CopyFile 7.11
238 TestMultiNode/serial/StopNode 2.34
239 TestMultiNode/serial/StartAfterStop 39.82
241 TestMultiNode/serial/DeleteNode 2.22
243 TestMultiNode/serial/RestartMultiNode 181.5
244 TestMultiNode/serial/ValidateNameConflict 41.26
251 TestScheduledStopUnix 110.41
255 TestRunningBinaryUpgrade 217.68
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
264 TestNoKubernetes/serial/StartWithK8s 89.46
269 TestNetworkPlugins/group/false 2.77
273 TestNoKubernetes/serial/StartWithStopK8s 41.81
274 TestNoKubernetes/serial/Start 46.59
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
276 TestNoKubernetes/serial/ProfileList 0.79
277 TestNoKubernetes/serial/Stop 1.33
278 TestNoKubernetes/serial/StartNoArgs 62.63
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.18
287 TestStoppedBinaryUpgrade/Setup 3.25
288 TestStoppedBinaryUpgrade/Upgrade 97.89
290 TestPause/serial/Start 96.26
291 TestStoppedBinaryUpgrade/MinikubeLogs 0.89
292 TestNetworkPlugins/group/auto/Start 112.37
293 TestNetworkPlugins/group/kindnet/Start 103.29
295 TestNetworkPlugins/group/auto/KubeletFlags 0.22
296 TestNetworkPlugins/group/auto/NetCatPod 11.22
297 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
298 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
299 TestNetworkPlugins/group/kindnet/NetCatPod 11.28
300 TestNetworkPlugins/group/auto/DNS 0.27
301 TestNetworkPlugins/group/auto/Localhost 0.18
302 TestNetworkPlugins/group/auto/HairPin 0.21
303 TestNetworkPlugins/group/calico/Start 87.96
304 TestNetworkPlugins/group/kindnet/DNS 0.2
305 TestNetworkPlugins/group/kindnet/Localhost 0.21
306 TestNetworkPlugins/group/kindnet/HairPin 0.17
307 TestNetworkPlugins/group/custom-flannel/Start 96
308 TestNetworkPlugins/group/enable-default-cni/Start 100.3
309 TestNetworkPlugins/group/flannel/Start 122.87
310 TestNetworkPlugins/group/calico/ControllerPod 5.09
311 TestNetworkPlugins/group/calico/KubeletFlags 0.54
312 TestNetworkPlugins/group/calico/NetCatPod 12.63
313 TestNetworkPlugins/group/calico/DNS 0.16
314 TestNetworkPlugins/group/calico/Localhost 0.12
315 TestNetworkPlugins/group/calico/HairPin 0.12
316 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
317 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.29
318 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
319 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.3
320 TestNetworkPlugins/group/custom-flannel/DNS 0.22
321 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
322 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
323 TestNetworkPlugins/group/bridge/Start 102.27
324 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
325 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
326 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
330 TestStartStop/group/no-preload/serial/FirstStart 164.78
331 TestNetworkPlugins/group/flannel/ControllerPod 6.01
332 TestNetworkPlugins/group/flannel/KubeletFlags 0.19
333 TestNetworkPlugins/group/flannel/NetCatPod 11.23
334 TestNetworkPlugins/group/flannel/DNS 0.3
335 TestNetworkPlugins/group/flannel/Localhost 0.14
336 TestNetworkPlugins/group/flannel/HairPin 0.14
338 TestStartStop/group/embed-certs/serial/FirstStart 62.95
339 TestNetworkPlugins/group/bridge/KubeletFlags 0.19
340 TestNetworkPlugins/group/bridge/NetCatPod 11.2
341 TestNetworkPlugins/group/bridge/DNS 0.17
342 TestNetworkPlugins/group/bridge/Localhost 0.13
343 TestNetworkPlugins/group/bridge/HairPin 0.16
345 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 55.39
346 TestStartStop/group/embed-certs/serial/DeployApp 10.28
347 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.01
349 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 12.29
350 TestStartStop/group/no-preload/serial/DeployApp 12.28
351 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.92
353 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.91
358 TestStartStop/group/embed-certs/serial/SecondStart 636.58
361 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 530.2
362 TestStartStop/group/no-preload/serial/SecondStart 664.97
363 TestStartStop/group/old-k8s-version/serial/Stop 1.34
364 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
375 TestStartStop/group/newest-cni/serial/FirstStart 48.36
376 TestStartStop/group/newest-cni/serial/DeployApp 0
377 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.41
378 TestStartStop/group/newest-cni/serial/Stop 7.4
379 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
381 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
382 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
x
+
TestDownloadOnly/v1.20.0/json-events (77.03s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-962960 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-962960 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (1m17.033114089s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (77.03s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-962960
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-962960: exit status 85 (55.832339ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-962960 | jenkins | v1.33.1 | 17 Jul 24 00:21 UTC |          |
	|         | -p download-only-962960        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:21:41
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:21:41.020050   11271 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:21:41.020307   11271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:21:41.020318   11271 out.go:304] Setting ErrFile to fd 2...
	I0717 00:21:41.020324   11271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:21:41.020491   11271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	W0717 00:21:41.020620   11271 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19264-3908/.minikube/config/config.json: open /home/jenkins/minikube-integration/19264-3908/.minikube/config/config.json: no such file or directory
	I0717 00:21:41.021194   11271 out.go:298] Setting JSON to true
	I0717 00:21:41.022062   11271 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":243,"bootTime":1721175458,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:21:41.022118   11271 start.go:139] virtualization: kvm guest
	I0717 00:21:41.024605   11271 out.go:97] [download-only-962960] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0717 00:21:41.024692   11271 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball: no such file or directory
	I0717 00:21:41.024728   11271 notify.go:220] Checking for updates...
	I0717 00:21:41.026067   11271 out.go:169] MINIKUBE_LOCATION=19264
	I0717 00:21:41.027373   11271 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:21:41.028780   11271 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 00:21:41.030003   11271 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 00:21:41.031159   11271 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0717 00:21:41.033605   11271 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 00:21:41.033819   11271 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:21:41.131450   11271 out.go:97] Using the kvm2 driver based on user configuration
	I0717 00:21:41.131490   11271 start.go:297] selected driver: kvm2
	I0717 00:21:41.131502   11271 start.go:901] validating driver "kvm2" against <nil>
	I0717 00:21:41.131887   11271 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:21:41.132025   11271 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19264-3908/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 00:21:41.146366   11271 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 00:21:41.146440   11271 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 00:21:41.147129   11271 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0717 00:21:41.147334   11271 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 00:21:41.147405   11271 cni.go:84] Creating CNI manager for ""
	I0717 00:21:41.147420   11271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 00:21:41.147429   11271 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 00:21:41.147503   11271 start.go:340] cluster config:
	{Name:download-only-962960 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-962960 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:21:41.147715   11271 iso.go:125] acquiring lock: {Name:mk1e382ea32906eee794c9b90164432d2562b456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:21:41.149461   11271 out.go:97] Downloading VM boot image ...
	I0717 00:21:41.149497   11271 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19264-3908/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 00:21:59.828882   11271 out.go:97] Starting "download-only-962960" primary control-plane node in "download-only-962960" cluster
	I0717 00:21:59.828908   11271 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 00:21:59.988513   11271 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 00:21:59.988556   11271 cache.go:56] Caching tarball of preloaded images
	I0717 00:21:59.988720   11271 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 00:21:59.990514   11271 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0717 00:21:59.990545   11271 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0717 00:22:00.145448   11271 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 00:22:23.040153   11271 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0717 00:22:23.040272   11271 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0717 00:22:23.945515   11271 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0717 00:22:23.945889   11271 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/download-only-962960/config.json ...
	I0717 00:22:23.945930   11271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/download-only-962960/config.json: {Name:mk84faee21403f2a8c699521d5646dd449c58e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:22:23.946103   11271 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 00:22:23.946284   11271 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-962960 host does not exist
	  To start a cluster, run: "minikube start -p download-only-962960"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
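
The non-zero `minikube logs` exit here is the outcome the test expects for a download-only profile: the run above only fetched the ISO, the preload tarball and kubectl, so no host was ever created and there are no runtime logs to collect, which is why the command falls back to the audit table shown in stdout. A minimal sketch of how one could confirm that state by hand (illustrative commands, assuming the same workspace layout):

	# the profile is registered but has no host behind it
	out/minikube-linux-amd64 profile list
	out/minikube-linux-amd64 status -p download-only-962960 || echo "no host yet, as the log advises"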

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-962960
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/json-events (18.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-030322 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-030322 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (18.328315084s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (18.33s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-030322
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-030322: exit status 85 (57.280904ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-962960 | jenkins | v1.33.1 | 17 Jul 24 00:21 UTC |                     |
	|         | -p download-only-962960        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 17 Jul 24 00:22 UTC | 17 Jul 24 00:22 UTC |
	| delete  | -p download-only-962960        | download-only-962960 | jenkins | v1.33.1 | 17 Jul 24 00:22 UTC | 17 Jul 24 00:22 UTC |
	| start   | -o=json --download-only        | download-only-030322 | jenkins | v1.33.1 | 17 Jul 24 00:22 UTC |                     |
	|         | -p download-only-030322        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:22:58
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:22:58.367245   12111 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:22:58.367354   12111 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:22:58.367358   12111 out.go:304] Setting ErrFile to fd 2...
	I0717 00:22:58.367363   12111 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:22:58.367547   12111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 00:22:58.368109   12111 out.go:298] Setting JSON to true
	I0717 00:22:58.368941   12111 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":320,"bootTime":1721175458,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:22:58.369010   12111 start.go:139] virtualization: kvm guest
	I0717 00:22:58.371205   12111 out.go:97] [download-only-030322] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 00:22:58.371340   12111 notify.go:220] Checking for updates...
	I0717 00:22:58.372855   12111 out.go:169] MINIKUBE_LOCATION=19264
	I0717 00:22:58.374404   12111 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:22:58.375792   12111 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 00:22:58.377309   12111 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 00:22:58.378580   12111 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0717 00:22:58.381416   12111 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 00:22:58.381678   12111 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:22:58.413989   12111 out.go:97] Using the kvm2 driver based on user configuration
	I0717 00:22:58.414018   12111 start.go:297] selected driver: kvm2
	I0717 00:22:58.414025   12111 start.go:901] validating driver "kvm2" against <nil>
	I0717 00:22:58.414447   12111 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:22:58.414567   12111 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19264-3908/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 00:22:58.429781   12111 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 00:22:58.429826   12111 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 00:22:58.430317   12111 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0717 00:22:58.430452   12111 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 00:22:58.430496   12111 cni.go:84] Creating CNI manager for ""
	I0717 00:22:58.430507   12111 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 00:22:58.430519   12111 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 00:22:58.430602   12111 start.go:340] cluster config:
	{Name:download-only-030322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:download-only-030322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:22:58.430699   12111 iso.go:125] acquiring lock: {Name:mk1e382ea32906eee794c9b90164432d2562b456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:22:58.432441   12111 out.go:97] Starting "download-only-030322" primary control-plane node in "download-only-030322" cluster
	I0717 00:22:58.432457   12111 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:22:58.585689   12111 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 00:22:58.585732   12111 cache.go:56] Caching tarball of preloaded images
	I0717 00:22:58.585893   12111 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:22:58.587633   12111 out.go:97] Downloading Kubernetes v1.30.2 preload ...
	I0717 00:22:58.587646   12111 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 ...
	I0717 00:22:58.742860   12111 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:cd14409e225276132db5cf7d5d75c2d2 -> /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-030322 host does not exist
	  To start a cluster, run: "minikube start -p download-only-030322"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-030322
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/json-events (68.81s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-703106 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-703106 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (1m8.812870363s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (68.81s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-703106
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-703106: exit status 85 (57.056361ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-962960 | jenkins | v1.33.1 | 17 Jul 24 00:21 UTC |                     |
	|         | -p download-only-962960             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 17 Jul 24 00:22 UTC | 17 Jul 24 00:22 UTC |
	| delete  | -p download-only-962960             | download-only-962960 | jenkins | v1.33.1 | 17 Jul 24 00:22 UTC | 17 Jul 24 00:22 UTC |
	| start   | -o=json --download-only             | download-only-030322 | jenkins | v1.33.1 | 17 Jul 24 00:22 UTC |                     |
	|         | -p download-only-030322             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 17 Jul 24 00:23 UTC | 17 Jul 24 00:23 UTC |
	| delete  | -p download-only-030322             | download-only-030322 | jenkins | v1.33.1 | 17 Jul 24 00:23 UTC | 17 Jul 24 00:23 UTC |
	| start   | -o=json --download-only             | download-only-703106 | jenkins | v1.33.1 | 17 Jul 24 00:23 UTC |                     |
	|         | -p download-only-703106             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:23:17
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:23:17.000869   12352 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:23:17.000996   12352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:23:17.001005   12352 out.go:304] Setting ErrFile to fd 2...
	I0717 00:23:17.001008   12352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:23:17.001183   12352 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 00:23:17.001693   12352 out.go:298] Setting JSON to true
	I0717 00:23:17.002459   12352 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":339,"bootTime":1721175458,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:23:17.002512   12352 start.go:139] virtualization: kvm guest
	I0717 00:23:17.004597   12352 out.go:97] [download-only-703106] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 00:23:17.004737   12352 notify.go:220] Checking for updates...
	I0717 00:23:17.006040   12352 out.go:169] MINIKUBE_LOCATION=19264
	I0717 00:23:17.007425   12352 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:23:17.008632   12352 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 00:23:17.010043   12352 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 00:23:17.011293   12352 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0717 00:23:17.013822   12352 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 00:23:17.014034   12352 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:23:17.045435   12352 out.go:97] Using the kvm2 driver based on user configuration
	I0717 00:23:17.045456   12352 start.go:297] selected driver: kvm2
	I0717 00:23:17.045466   12352 start.go:901] validating driver "kvm2" against <nil>
	I0717 00:23:17.045776   12352 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:23:17.045846   12352 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19264-3908/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 00:23:17.060052   12352 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 00:23:17.060092   12352 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 00:23:17.060586   12352 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0717 00:23:17.060730   12352 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 00:23:17.060782   12352 cni.go:84] Creating CNI manager for ""
	I0717 00:23:17.060794   12352 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 00:23:17.060801   12352 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 00:23:17.060853   12352 start.go:340] cluster config:
	{Name:download-only-703106 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-703106 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:23:17.060938   12352 iso.go:125] acquiring lock: {Name:mk1e382ea32906eee794c9b90164432d2562b456 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:23:17.062798   12352 out.go:97] Starting "download-only-703106" primary control-plane node in "download-only-703106" cluster
	I0717 00:23:17.062816   12352 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 00:23:17.216549   12352 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0717 00:23:17.216613   12352 cache.go:56] Caching tarball of preloaded images
	I0717 00:23:17.216826   12352 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 00:23:17.218736   12352 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0717 00:23:17.218750   12352 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0717 00:23:17.373055   12352 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:3743f5ddb63994a661f14e5a8d3af98c -> /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0717 00:23:35.019011   12352 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0717 00:23:35.019125   12352 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19264-3908/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0717 00:23:35.757393   12352 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0717 00:23:35.757745   12352 profile.go:143] Saving config to /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/download-only-703106/config.json ...
	I0717 00:23:35.757777   12352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/download-only-703106/config.json: {Name:mk1c83df4e7202d2ce071530921dde7dd35419c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:23:35.757952   12352 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 00:23:35.758122   12352 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19264-3908/.minikube/cache/linux/amd64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-703106 host does not exist
	  To start a cluster, run: "minikube start -p download-only-703106"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-703106
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.55s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-874768 --alsologtostderr --binary-mirror http://127.0.0.1:33007 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-874768" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-874768
--- PASS: TestBinaryMirror (0.55s)

                                                
                                    
x
+
TestOffline (119.99s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-089839 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-089839 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m59.162687793s)
helpers_test.go:175: Cleaning up "offline-crio-089839" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-089839
--- PASS: TestOffline (119.99s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-384227
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-384227: exit status 85 (49.299481ms)

-- stdout --
	* Profile "addons-384227" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-384227"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-384227
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-384227: exit status 85 (47.778251ms)

-- stdout --
	* Profile "addons-384227" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-384227"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (211.34s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-384227 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-384227 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m31.338036309s)
--- PASS: TestAddons/Setup (211.34s)

TestAddons/parallel/Registry (24.94s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 21.483902ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-wjhgl" [3387114c-1fe0-4740-98da-750978da9284] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00624982s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-n2f8j" [b4af5a32-5f55-4f42-8506-d84f33c037ee] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005709695s
addons_test.go:342: (dbg) Run:  kubectl --context addons-384227 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-384227 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-384227 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (13.100467849s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-384227 ip
2024/07/17 00:28:22 [DEBUG] GET http://192.168.39.177:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-384227 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (24.94s)

TestAddons/parallel/InspektorGadget (12.43s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-tm95k" [d4eeb85f-3b25-4d4e-8774-8cc60084a5ee] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003602628s
addons_test.go:843: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-384227
addons_test.go:843: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-384227: (6.430182448s)
--- PASS: TestAddons/parallel/InspektorGadget (12.43s)

TestAddons/parallel/HelmTiller (22.24s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.319512ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-h842v" [39eb0880-886d-42e4-b134-ac0f48c445e8] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.005031397s
addons_test.go:475: (dbg) Run:  kubectl --context addons-384227 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-384227 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (15.623809187s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-384227 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (22.24s)

TestAddons/parallel/CSI (65.62s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 7.029912ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-384227 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-384227 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [43df9ef5-4446-451b-8ba7-2dc18b6d4260] Pending
helpers_test.go:344: "task-pv-pod" [43df9ef5-4446-451b-8ba7-2dc18b6d4260] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [43df9ef5-4446-451b-8ba7-2dc18b6d4260] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.004265355s
addons_test.go:586: (dbg) Run:  kubectl --context addons-384227 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-384227 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-384227 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-384227 delete pod task-pv-pod
addons_test.go:596: (dbg) Done: kubectl --context addons-384227 delete pod task-pv-pod: (1.194941693s)
addons_test.go:602: (dbg) Run:  kubectl --context addons-384227 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-384227 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-384227 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [0f3de1d2-204e-4401-9f89-414688faf53a] Pending
helpers_test.go:344: "task-pv-pod-restore" [0f3de1d2-204e-4401-9f89-414688faf53a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [0f3de1d2-204e-4401-9f89-414688faf53a] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.066701536s
addons_test.go:628: (dbg) Run:  kubectl --context addons-384227 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-384227 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-384227 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p addons-384227 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-amd64 -p addons-384227 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.746085427s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-384227 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (65.62s)

TestAddons/parallel/Headlamp (19.04s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-384227 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-384227 --alsologtostderr -v=1: (1.038323723s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-7xwpc" [f011f7f1-0d18-4279-850b-076dcfcd6908] Pending
helpers_test.go:344: "headlamp-7867546754-7xwpc" [f011f7f1-0d18-4279-850b-076dcfcd6908] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-7xwpc" [f011f7f1-0d18-4279-850b-076dcfcd6908] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-7xwpc" [f011f7f1-0d18-4279-850b-076dcfcd6908] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 18.004080091s
--- PASS: TestAddons/parallel/Headlamp (19.04s)

TestAddons/parallel/CloudSpanner (6.96s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-k6bx4" [76d45b5d-40f5-4f3b-b263-6705e9d021d9] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.005107922s
addons_test.go:862: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-384227
--- PASS: TestAddons/parallel/CloudSpanner (6.96s)

TestAddons/parallel/LocalPath (26.2s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-384227 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-384227 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384227 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [918cfc73-5e90-4b0a-a4d4-e3aaa7664ab6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [918cfc73-5e90-4b0a-a4d4-e3aaa7664ab6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [918cfc73-5e90-4b0a-a4d4-e3aaa7664ab6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 18.004142973s
addons_test.go:992: (dbg) Run:  kubectl --context addons-384227 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-amd64 -p addons-384227 ssh "cat /opt/local-path-provisioner/pvc-d8a1bc13-63c9-4ac2-b2eb-d06e01a50e0a_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-384227 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-384227 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-amd64 -p addons-384227 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (26.20s)

TestAddons/parallel/NvidiaDevicePlugin (6.72s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-v6tmh" [cbb5bf86-4332-4b45-b6cf-4c77245158ed] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.006998838s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-384227
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.72s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-5nswx" [f7985a97-d5e5-4554-b699-b8a01e187c7e] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004242987s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-384227 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-384227 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestCertOptions (50.9s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-366095 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-366095 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (49.468749007s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-366095 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-366095 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-366095 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-366095" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-366095
--- PASS: TestCertOptions (50.90s)

TestCertExpiration (318s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-733994 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-733994 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m35.333194285s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-733994 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
E0717 01:40:17.179403   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-733994 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (41.846871143s)
helpers_test.go:175: Cleaning up "cert-expiration-733994" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-733994
--- PASS: TestCertExpiration (318.00s)

TestForceSystemdFlag (93.8s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-323390 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-323390 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m32.814910035s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-323390 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-323390" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-323390
--- PASS: TestForceSystemdFlag (93.80s)

TestForceSystemdEnv (46.02s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-195512 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-195512 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (45.247675707s)
helpers_test.go:175: Cleaning up "force-systemd-env-195512" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-195512
--- PASS: TestForceSystemdEnv (46.02s)

TestKVMDriverInstallOrUpdate (9.38s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (9.38s)

TestErrorSpam/setup (41.12s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-812302 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-812302 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-812302 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-812302 --driver=kvm2  --container-runtime=crio: (41.119077903s)
--- PASS: TestErrorSpam/setup (41.12s)

TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812302 --log_dir /tmp/nospam-812302 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812302 --log_dir /tmp/nospam-812302 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812302 --log_dir /tmp/nospam-812302 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.71s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812302 --log_dir /tmp/nospam-812302 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812302 --log_dir /tmp/nospam-812302 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812302 --log_dir /tmp/nospam-812302 status
--- PASS: TestErrorSpam/status (0.71s)

TestErrorSpam/pause (1.59s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812302 --log_dir /tmp/nospam-812302 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812302 --log_dir /tmp/nospam-812302 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812302 --log_dir /tmp/nospam-812302 pause
--- PASS: TestErrorSpam/pause (1.59s)

TestErrorSpam/unpause (1.59s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812302 --log_dir /tmp/nospam-812302 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812302 --log_dir /tmp/nospam-812302 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812302 --log_dir /tmp/nospam-812302 unpause
--- PASS: TestErrorSpam/unpause (1.59s)

TestErrorSpam/stop (5.27s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812302 --log_dir /tmp/nospam-812302 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-812302 --log_dir /tmp/nospam-812302 stop: (2.276129685s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812302 --log_dir /tmp/nospam-812302 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-812302 --log_dir /tmp/nospam-812302 stop: (1.736994148s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812302 --log_dir /tmp/nospam-812302 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-812302 --log_dir /tmp/nospam-812302 stop: (1.259882971s)
--- PASS: TestErrorSpam/stop (5.27s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19264-3908/.minikube/files/etc/test/nested/copy/11259/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (55.69s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-023523 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0717 00:37:58.379187   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
E0717 00:37:58.385017   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
E0717 00:37:58.395302   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
E0717 00:37:58.415568   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
E0717 00:37:58.455891   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
E0717 00:37:58.536209   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
E0717 00:37:58.696663   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
E0717 00:37:59.017254   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
E0717 00:37:59.658193   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
E0717 00:38:00.938991   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
E0717 00:38:03.500101   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
E0717 00:38:08.620573   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
E0717 00:38:18.861159   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-023523 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (55.686271533s)
--- PASS: TestFunctional/serial/StartWithProxy (55.69s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.86s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-023523 --alsologtostderr -v=8
E0717 00:38:39.342218   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-023523 --alsologtostderr -v=8: (35.861896133s)
functional_test.go:659: soft start took 35.862556204s for "functional-023523" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.86s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-023523 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-023523 cache add registry.k8s.io/pause:3.1: (1.035033288s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-023523 cache add registry.k8s.io/pause:3.3: (1.14112353s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-023523 cache add registry.k8s.io/pause:latest: (1.179796083s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.36s)

TestFunctional/serial/CacheCmd/cache/add_local (2.87s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-023523 /tmp/TestFunctionalserialCacheCmdcacheadd_local2102113077/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 cache add minikube-local-cache-test:functional-023523
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-023523 cache add minikube-local-cache-test:functional-023523: (2.571214702s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 cache delete minikube-local-cache-test:functional-023523
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-023523
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.87s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-023523 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (208.02977ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 kubectl -- --context functional-023523 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-023523 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (59.47s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-023523 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0717 00:39:20.302674   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-023523 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (59.470105132s)
functional_test.go:757: restart took 59.470249484s for "functional-023523" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (59.47s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-023523 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.42s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-023523 logs: (1.424535439s)
--- PASS: TestFunctional/serial/LogsCmd (1.42s)

TestFunctional/serial/LogsFileCmd (1.44s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 logs --file /tmp/TestFunctionalserialLogsFileCmd3000885154/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-023523 logs --file /tmp/TestFunctionalserialLogsFileCmd3000885154/001/logs.txt: (1.443510624s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.44s)

TestFunctional/serial/InvalidService (3.53s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-023523 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-023523
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-023523: exit status 115 (272.949019ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.2:32366 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-023523 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.53s)

TestFunctional/parallel/ConfigCmd (0.28s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-023523 config get cpus: exit status 14 (42.187257ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-023523 config get cpus: exit status 14 (38.646628ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.28s)

TestFunctional/parallel/DashboardCmd (22.41s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-023523 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-023523 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 22174: os: process already finished
E0717 00:42:58.380092   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
E0717 00:43:26.064808   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/DashboardCmd (22.41s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-023523 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-023523 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (135.426182ms)

                                                
                                                
-- stdout --
	* [functional-023523] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19264
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 00:41:26.897250   21828 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:41:26.897536   21828 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:41:26.897547   21828 out.go:304] Setting ErrFile to fd 2...
	I0717 00:41:26.897553   21828 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:41:26.897823   21828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 00:41:26.898400   21828 out.go:298] Setting JSON to false
	I0717 00:41:26.899822   21828 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1429,"bootTime":1721175458,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:41:26.899939   21828 start.go:139] virtualization: kvm guest
	I0717 00:41:26.902369   21828 out.go:177] * [functional-023523] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 00:41:26.904002   21828 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 00:41:26.904021   21828 notify.go:220] Checking for updates...
	I0717 00:41:26.905568   21828 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:41:26.907210   21828 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 00:41:26.911585   21828 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 00:41:26.913111   21828 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 00:41:26.914695   21828 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:41:26.916421   21828 config.go:182] Loaded profile config "functional-023523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:41:26.916887   21828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:41:26.916921   21828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:41:26.932576   21828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37597
	I0717 00:41:26.932919   21828 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:41:26.933418   21828 main.go:141] libmachine: Using API Version  1
	I0717 00:41:26.933450   21828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:41:26.933774   21828 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:41:26.933993   21828 main.go:141] libmachine: (functional-023523) Calling .DriverName
	I0717 00:41:26.934256   21828 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:41:26.934545   21828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:41:26.934622   21828 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:41:26.949031   21828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38493
	I0717 00:41:26.949320   21828 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:41:26.949724   21828 main.go:141] libmachine: Using API Version  1
	I0717 00:41:26.949752   21828 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:41:26.950016   21828 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:41:26.950194   21828 main.go:141] libmachine: (functional-023523) Calling .DriverName
	I0717 00:41:26.980125   21828 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 00:41:26.981344   21828 start.go:297] selected driver: kvm2
	I0717 00:41:26.981361   21828 start.go:901] validating driver "kvm2" against &{Name:functional-023523 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:functional-023523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Moun
t:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:41:26.981471   21828 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:41:26.983509   21828 out.go:177] 
	W0717 00:41:26.984682   21828 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0717 00:41:26.985869   21828 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-023523 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)
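Both outcomes above are the intended behaviour: minikube validates the requested memory during a dry run and rejects anything below its usable minimum of 1800MB. A minimal sketch of the two invocations taken from the log, assuming the same existing profile:
# Rejected: 250MB < 1800MB minimum -> exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY)
out/minikube-linux-amd64 start -p functional-023523 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio
# Accepted: no --memory override, so the existing profile's 4000MB is reused and the dry run exits 0
out/minikube-linux-amd64 start -p functional-023523 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio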

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-023523 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-023523 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (134.801458ms)

                                                
                                                
-- stdout --
	* [functional-023523] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19264
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 00:41:25.515490   21439 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:41:25.515603   21439 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:41:25.515613   21439 out.go:304] Setting ErrFile to fd 2...
	I0717 00:41:25.515620   21439 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:41:25.515900   21439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 00:41:25.516424   21439 out.go:298] Setting JSON to false
	I0717 00:41:25.517272   21439 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1427,"bootTime":1721175458,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:41:25.517325   21439 start.go:139] virtualization: kvm guest
	I0717 00:41:25.519768   21439 out.go:177] * [functional-023523] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0717 00:41:25.521696   21439 notify.go:220] Checking for updates...
	I0717 00:41:25.521735   21439 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 00:41:25.523321   21439 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:41:25.524862   21439 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 00:41:25.526280   21439 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 00:41:25.527655   21439 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 00:41:25.528924   21439 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:41:25.530635   21439 config.go:182] Loaded profile config "functional-023523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:41:25.531078   21439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:41:25.531117   21439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:41:25.546496   21439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35675
	I0717 00:41:25.546902   21439 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:41:25.547411   21439 main.go:141] libmachine: Using API Version  1
	I0717 00:41:25.547434   21439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:41:25.547722   21439 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:41:25.547893   21439 main.go:141] libmachine: (functional-023523) Calling .DriverName
	I0717 00:41:25.548130   21439 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:41:25.548399   21439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 00:41:25.548433   21439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 00:41:25.567093   21439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37965
	I0717 00:41:25.567475   21439 main.go:141] libmachine: () Calling .GetVersion
	I0717 00:41:25.567948   21439 main.go:141] libmachine: Using API Version  1
	I0717 00:41:25.567974   21439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 00:41:25.568329   21439 main.go:141] libmachine: () Calling .GetMachineName
	I0717 00:41:25.568509   21439 main.go:141] libmachine: (functional-023523) Calling .DriverName
	I0717 00:41:25.602542   21439 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0717 00:41:25.603859   21439 start.go:297] selected driver: kvm2
	I0717 00:41:25.603876   21439 start.go:901] validating driver "kvm2" against &{Name:functional-023523 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:functional-023523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Moun
t:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:41:25.604008   21439 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:41:25.606211   21439 out.go:177] 
	W0717 00:41:25.607612   21439 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0717 00:41:25.608946   21439 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.74s)
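The three calls above cover the default, Go-template and JSON output modes of `minikube status`; a minimal sketch of the same commands (the template keys are copied from the log, including its `kublet` label spelling):
out/minikube-linux-amd64 -p functional-023523 status
out/minikube-linux-amd64 -p functional-023523 status -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
out/minikube-linux-amd64 -p functional-023523 status -o json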

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (66.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-023523 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-023523 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-t5ssg" [2f67a68e-2a30-4bcc-89c7-cd3c2059ac27] Pending
helpers_test.go:344: "hello-node-connect-57b4589c47-t5ssg" [2f67a68e-2a30-4bcc-89c7-cd3c2059ac27] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-t5ssg" [2f67a68e-2a30-4bcc-89c7-cd3c2059ac27] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 1m6.004416328s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.2:30290
functional_test.go:1671: http://192.168.39.2:30290: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-t5ssg

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.2:30290
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (66.41s)
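The flow above is deploy, expose as a NodePort, resolve the URL through minikube, then hit the endpoint. A minimal sketch of the same steps, assuming the same context; the explicit `kubectl wait` and the `curl` call stand in for the harness's pod polling and HTTP check:
kubectl --context functional-023523 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-023523 expose deployment hello-node-connect --type=NodePort --port=8080
kubectl --context functional-023523 wait --for=condition=available deployment/hello-node-connect --timeout=600s
URL=$(out/minikube-linux-amd64 -p functional-023523 service hello-node-connect --url)
curl -s "$URL"    # echoserver replies with the pod hostname and request details, as shown above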

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh -n functional-023523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 cp functional-023523:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd805041304/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh -n functional-023523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh -n functional-023523 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.15s)
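The test copies a host file into the guest, copies a guest file back out, and copies into a guest directory that does not yet exist, verifying each step with `ssh "sudo cat ..."`. A minimal sketch of the first round trip, assuming the same profile; the final `diff` is an added check:
out/minikube-linux-amd64 -p functional-023523 cp testdata/cp-test.txt /home/docker/cp-test.txt
out/minikube-linux-amd64 -p functional-023523 ssh -n functional-023523 "sudo cat /home/docker/cp-test.txt"
out/minikube-linux-amd64 -p functional-023523 cp functional-023523:/home/docker/cp-test.txt /tmp/cp-test.txt
diff testdata/cp-test.txt /tmp/cp-test.txt    # no output expected: the copies should be identical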

                                                
                                    
x
+
TestFunctional/parallel/MySQL (68.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-023523 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-l95r4" [160e20ce-943b-4253-9ed5-c7eced8fd388] Pending
helpers_test.go:344: "mysql-64454c8b5c-l95r4" [160e20ce-943b-4253-9ed5-c7eced8fd388] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-l95r4" [160e20ce-943b-4253-9ed5-c7eced8fd388] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 1m5.004316869s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-023523 exec mysql-64454c8b5c-l95r4 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-023523 exec mysql-64454c8b5c-l95r4 -- mysql -ppassword -e "show databases;": exit status 1 (131.881221ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-023523 exec mysql-64454c8b5c-l95r4 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-023523 exec mysql-64454c8b5c-l95r4 -- mysql -ppassword -e "show databases;": exit status 1 (141.581876ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-023523 exec mysql-64454c8b5c-l95r4 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (68.74s)
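The two ERROR 2002 failures above are expected: the pod is Running as soon as the container starts, but mysqld needs a little longer to open its socket, so the test retries until the query succeeds. A minimal retry loop for the same check, assuming the same pod name:
until kubectl --context functional-023523 exec mysql-64454c8b5c-l95r4 -- mysql -ppassword -e "show databases;"; do
  sleep 5    # keep retrying until mysqld has opened /var/run/mysqld/mysqld.sock
done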

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/11259/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh "sudo cat /etc/test/nested/copy/11259/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)
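The synced file comes from the host side: anything placed under $MINIKUBE_HOME/files is mirrored into the guest at the same path when the cluster is started, which is the assumption behind this sketch of how /etc/test/nested/copy/11259/hosts ends up in the VM:
mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/11259"
echo "Test file for checking file sync process" > "$MINIKUBE_HOME/files/etc/test/nested/copy/11259/hosts"
out/minikube-linux-amd64 -p functional-023523 start    # the files/ tree is pushed into the guest during start
out/minikube-linux-amd64 -p functional-023523 ssh "sudo cat /etc/test/nested/copy/11259/hosts"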

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/11259.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh "sudo cat /etc/ssl/certs/11259.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/11259.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh "sudo cat /usr/share/ca-certificates/11259.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/112592.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh "sudo cat /etc/ssl/certs/112592.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/112592.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh "sudo cat /usr/share/ca-certificates/112592.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.18s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-023523 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-023523 ssh "sudo systemctl is-active docker": exit status 1 (207.483026ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-023523 ssh "sudo systemctl is-active containerd": exit status 1 (186.059983ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)
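With crio as the selected runtime, docker and containerd are expected to be stopped in the guest: `systemctl is-active` prints "inactive" and exits 3 for a stopped unit, which `minikube ssh` surfaces as the non-zero exit seen above. A minimal sketch, with a check of the active runtime added for contrast:
out/minikube-linux-amd64 -p functional-023523 ssh "sudo systemctl is-active docker"       # prints "inactive", remote exit 3
out/minikube-linux-amd64 -p functional-023523 ssh "sudo systemctl is-active containerd"   # prints "inactive", remote exit 3
out/minikube-linux-amd64 -p functional-023523 ssh "sudo systemctl is-active crio"         # prints "active", exit 0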

                                                
                                    
x
+
TestFunctional/parallel/License (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.82s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-023523 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-023523
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240513-cd2ac642
docker.io/kicbase/echo-server:functional-023523
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-023523 image ls --format short --alsologtostderr:
I0717 00:41:28.781569   22151 out.go:291] Setting OutFile to fd 1 ...
I0717 00:41:28.781694   22151 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:41:28.781705   22151 out.go:304] Setting ErrFile to fd 2...
I0717 00:41:28.781709   22151 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:41:28.781906   22151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
I0717 00:41:28.782618   22151 config.go:182] Loaded profile config "functional-023523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:41:28.782760   22151 config.go:182] Loaded profile config "functional-023523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:41:28.783192   22151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 00:41:28.783248   22151 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 00:41:28.798331   22151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38419
I0717 00:41:28.798798   22151 main.go:141] libmachine: () Calling .GetVersion
I0717 00:41:28.799341   22151 main.go:141] libmachine: Using API Version  1
I0717 00:41:28.799370   22151 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 00:41:28.799721   22151 main.go:141] libmachine: () Calling .GetMachineName
I0717 00:41:28.799945   22151 main.go:141] libmachine: (functional-023523) Calling .GetState
I0717 00:41:28.801862   22151 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 00:41:28.801908   22151 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 00:41:28.816617   22151 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37909
I0717 00:41:28.817080   22151 main.go:141] libmachine: () Calling .GetVersion
I0717 00:41:28.817640   22151 main.go:141] libmachine: Using API Version  1
I0717 00:41:28.817667   22151 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 00:41:28.817988   22151 main.go:141] libmachine: () Calling .GetMachineName
I0717 00:41:28.818164   22151 main.go:141] libmachine: (functional-023523) Calling .DriverName
I0717 00:41:28.818376   22151 ssh_runner.go:195] Run: systemctl --version
I0717 00:41:28.818395   22151 main.go:141] libmachine: (functional-023523) Calling .GetSSHHostname
I0717 00:41:28.821041   22151 main.go:141] libmachine: (functional-023523) DBG | domain functional-023523 has defined MAC address 52:54:00:c6:46:06 in network mk-functional-023523
I0717 00:41:28.821529   22151 main.go:141] libmachine: (functional-023523) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:06", ip: ""} in network mk-functional-023523: {Iface:virbr1 ExpiryTime:2024-07-17 01:37:44 +0000 UTC Type:0 Mac:52:54:00:c6:46:06 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:functional-023523 Clientid:01:52:54:00:c6:46:06}
I0717 00:41:28.821561   22151 main.go:141] libmachine: (functional-023523) DBG | domain functional-023523 has defined IP address 192.168.39.2 and MAC address 52:54:00:c6:46:06 in network mk-functional-023523
I0717 00:41:28.821718   22151 main.go:141] libmachine: (functional-023523) Calling .GetSSHPort
I0717 00:41:28.821898   22151 main.go:141] libmachine: (functional-023523) Calling .GetSSHKeyPath
I0717 00:41:28.822082   22151 main.go:141] libmachine: (functional-023523) Calling .GetSSHUsername
I0717 00:41:28.822229   22151 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/functional-023523/id_rsa Username:docker}
I0717 00:41:28.905304   22151 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 00:41:28.957869   22151 main.go:141] libmachine: Making call to close driver server
I0717 00:41:28.957883   22151 main.go:141] libmachine: (functional-023523) Calling .Close
I0717 00:41:28.958142   22151 main.go:141] libmachine: Successfully made call to close driver server
I0717 00:41:28.958160   22151 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 00:41:28.958173   22151 main.go:141] libmachine: Making call to close driver server
I0717 00:41:28.958181   22151 main.go:141] libmachine: (functional-023523) Calling .Close
I0717 00:41:28.958389   22151 main.go:141] libmachine: Successfully made call to close driver server
I0717 00:41:28.958407   22151 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 00:41:28.958476   22151 main.go:141] libmachine: (functional-023523) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
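As the stderr trace shows, `image ls` is answered by running `sudo crictl images --output json` in the guest over ssh and reformatting the result, so the same data can be inspected directly. A minimal sketch, assuming the same profile:
out/minikube-linux-amd64 -p functional-023523 image ls --format short
out/minikube-linux-amd64 -p functional-023523 ssh "sudo crictl images --output json" | head -c 400    # raw crictl payload backing the listing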

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.63s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-023523 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-controller-manager | v1.30.2            | e874818b3caac | 112MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-023523  | 91348a8203b32 | 3.33kB |
| localhost/my-image                      | functional-023523  | 493686b8b6925 | 1.47MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| docker.io/kicbase/echo-server           | functional-023523  | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/kube-proxy              | v1.30.2            | 53c535741fb44 | 86MB   |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20240513-cd2ac642 | ac1c61439df46 | 65.9MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/kube-apiserver          | v1.30.2            | 56ce0fd9fb532 | 118MB  |
| registry.k8s.io/kube-scheduler          | v1.30.2            | 7820c83aa1394 | 63.1MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-023523 image ls --format table --alsologtostderr:
I0717 00:41:33.433766   22325 out.go:291] Setting OutFile to fd 1 ...
I0717 00:41:33.433849   22325 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:41:33.433857   22325 out.go:304] Setting ErrFile to fd 2...
I0717 00:41:33.433861   22325 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:41:33.434038   22325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
I0717 00:41:33.434528   22325 config.go:182] Loaded profile config "functional-023523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:41:33.434653   22325 config.go:182] Loaded profile config "functional-023523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:41:33.435013   22325 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 00:41:33.435050   22325 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 00:41:33.449537   22325 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45041
I0717 00:41:33.449933   22325 main.go:141] libmachine: () Calling .GetVersion
I0717 00:41:33.450481   22325 main.go:141] libmachine: Using API Version  1
I0717 00:41:33.450504   22325 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 00:41:33.450819   22325 main.go:141] libmachine: () Calling .GetMachineName
I0717 00:41:33.450985   22325 main.go:141] libmachine: (functional-023523) Calling .GetState
I0717 00:41:33.452778   22325 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 00:41:33.452810   22325 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 00:41:33.466948   22325 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45895
I0717 00:41:33.467292   22325 main.go:141] libmachine: () Calling .GetVersion
I0717 00:41:33.467656   22325 main.go:141] libmachine: Using API Version  1
I0717 00:41:33.467676   22325 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 00:41:33.467964   22325 main.go:141] libmachine: () Calling .GetMachineName
I0717 00:41:33.468164   22325 main.go:141] libmachine: (functional-023523) Calling .DriverName
I0717 00:41:33.468367   22325 ssh_runner.go:195] Run: systemctl --version
I0717 00:41:33.468397   22325 main.go:141] libmachine: (functional-023523) Calling .GetSSHHostname
I0717 00:41:33.471202   22325 main.go:141] libmachine: (functional-023523) DBG | domain functional-023523 has defined MAC address 52:54:00:c6:46:06 in network mk-functional-023523
I0717 00:41:33.471535   22325 main.go:141] libmachine: (functional-023523) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:06", ip: ""} in network mk-functional-023523: {Iface:virbr1 ExpiryTime:2024-07-17 01:37:44 +0000 UTC Type:0 Mac:52:54:00:c6:46:06 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:functional-023523 Clientid:01:52:54:00:c6:46:06}
I0717 00:41:33.471568   22325 main.go:141] libmachine: (functional-023523) DBG | domain functional-023523 has defined IP address 192.168.39.2 and MAC address 52:54:00:c6:46:06 in network mk-functional-023523
I0717 00:41:33.471672   22325 main.go:141] libmachine: (functional-023523) Calling .GetSSHPort
I0717 00:41:33.471840   22325 main.go:141] libmachine: (functional-023523) Calling .GetSSHKeyPath
I0717 00:41:33.471974   22325 main.go:141] libmachine: (functional-023523) Calling .GetSSHUsername
I0717 00:41:33.472081   22325 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/functional-023523/id_rsa Username:docker}
I0717 00:41:33.553523   22325 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 00:41:33.608274   22325 main.go:141] libmachine: Making call to close driver server
I0717 00:41:33.608294   22325 main.go:141] libmachine: (functional-023523) Calling .Close
I0717 00:41:33.608576   22325 main.go:141] libmachine: (functional-023523) DBG | Closing plugin on server side
I0717 00:41:33.608620   22325 main.go:141] libmachine: Successfully made call to close driver server
I0717 00:41:33.608640   22325 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 00:41:33.608657   22325 main.go:141] libmachine: Making call to close driver server
I0717 00:41:33.608668   22325 main.go:141] libmachine: (functional-023523) Calling .Close
I0717 00:41:33.608890   22325 main.go:141] libmachine: Successfully made call to close driver server
I0717 00:41:33.608993   22325 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 00:41:33.609025   22325 main.go:141] libmachine: (functional-023523) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-023523 image ls --format json --alsologtostderr:
[{"id":"ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f","repoDigests":["docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266","docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"],"repoTags":["docker.io/kindest/kindnetd:v20240513-cd2ac642"],"size":"65908273"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f
8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","repoDigests":["registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816","registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.2"],"size":"117609954"},{"id":"7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc","registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.2"],"size":"63051080"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["regist
ry.k8s.io/pause:3.3"],"size":"686139"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-023523"],"size":"4943877"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"493686b8b69256c300218c7a0b03622c1825114fa9e71516a41a2da11b81c840","repoDigests":["localhost/my-image@sha256:fe20668042a883dcc05a2ea2426a2753b76bdda77918bdf0145939308af6d001"],"repoTags":["localhost/my-image:functional-023523"],"size":"1468600"},{"id":"e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd
775974","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e","registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.2"],"size":"112194888"},{"id":"53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","repoDigests":["registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961","registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.2"],"size":"85953433"},{"id":"91348a8203b323c742332e395e797bc8b9ec2d8b08f25af32703611c6d40f141","repoDigests":["localhost/minikube-local-cache-test@sha256:c87ddb8470e2187926109e69d13b0375cbb5e8af0922df570caf18c52d967de5"],"repoTags":["localhost/minikube-local-cache-test:functional-023523"],"size":"3330"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c718
7d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"953288f8bd87ecc4c9bd836c8bd6d22dc93f1fdb06096e23ad6138c27f8bdc0c","repoDigests":["docker.io/library/76ccb4d245fa7080f03f576ab5fefbf835f48661090880707d8f6e26a31b7e91-tmp@sha256:92d9c81b7cf8c04046d34bab889bd7b8b905a063587db1d7807f5aa103ee5b61"],"repoTags":[],"size":"1466017"},{"id":"5107333e08a87b836d48ff7528b1e84b9c867
81cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"14
62480"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-023523 image ls --format json --alsologtostderr:
I0717 00:41:33.225106   22301 out.go:291] Setting OutFile to fd 1 ...
I0717 00:41:33.225196   22301 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:41:33.225204   22301 out.go:304] Setting ErrFile to fd 2...
I0717 00:41:33.225208   22301 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:41:33.225365   22301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
I0717 00:41:33.225865   22301 config.go:182] Loaded profile config "functional-023523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:41:33.225964   22301 config.go:182] Loaded profile config "functional-023523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:41:33.226300   22301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 00:41:33.226336   22301 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 00:41:33.240796   22301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42455
I0717 00:41:33.241234   22301 main.go:141] libmachine: () Calling .GetVersion
I0717 00:41:33.241693   22301 main.go:141] libmachine: Using API Version  1
I0717 00:41:33.241719   22301 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 00:41:33.242027   22301 main.go:141] libmachine: () Calling .GetMachineName
I0717 00:41:33.242203   22301 main.go:141] libmachine: (functional-023523) Calling .GetState
I0717 00:41:33.243894   22301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 00:41:33.243942   22301 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 00:41:33.258054   22301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37191
I0717 00:41:33.258385   22301 main.go:141] libmachine: () Calling .GetVersion
I0717 00:41:33.258851   22301 main.go:141] libmachine: Using API Version  1
I0717 00:41:33.258870   22301 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 00:41:33.259205   22301 main.go:141] libmachine: () Calling .GetMachineName
I0717 00:41:33.259383   22301 main.go:141] libmachine: (functional-023523) Calling .DriverName
I0717 00:41:33.259599   22301 ssh_runner.go:195] Run: systemctl --version
I0717 00:41:33.259632   22301 main.go:141] libmachine: (functional-023523) Calling .GetSSHHostname
I0717 00:41:33.262159   22301 main.go:141] libmachine: (functional-023523) DBG | domain functional-023523 has defined MAC address 52:54:00:c6:46:06 in network mk-functional-023523
I0717 00:41:33.262532   22301 main.go:141] libmachine: (functional-023523) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:06", ip: ""} in network mk-functional-023523: {Iface:virbr1 ExpiryTime:2024-07-17 01:37:44 +0000 UTC Type:0 Mac:52:54:00:c6:46:06 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:functional-023523 Clientid:01:52:54:00:c6:46:06}
I0717 00:41:33.262571   22301 main.go:141] libmachine: (functional-023523) DBG | domain functional-023523 has defined IP address 192.168.39.2 and MAC address 52:54:00:c6:46:06 in network mk-functional-023523
I0717 00:41:33.262704   22301 main.go:141] libmachine: (functional-023523) Calling .GetSSHPort
I0717 00:41:33.262864   22301 main.go:141] libmachine: (functional-023523) Calling .GetSSHKeyPath
I0717 00:41:33.263004   22301 main.go:141] libmachine: (functional-023523) Calling .GetSSHUsername
I0717 00:41:33.263215   22301 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/functional-023523/id_rsa Username:docker}
I0717 00:41:33.341015   22301 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 00:41:33.388515   22301 main.go:141] libmachine: Making call to close driver server
I0717 00:41:33.388530   22301 main.go:141] libmachine: (functional-023523) Calling .Close
I0717 00:41:33.388808   22301 main.go:141] libmachine: Successfully made call to close driver server
I0717 00:41:33.388821   22301 main.go:141] libmachine: (functional-023523) DBG | Closing plugin on server side
I0717 00:41:33.388824   22301 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 00:41:33.388848   22301 main.go:141] libmachine: Making call to close driver server
I0717 00:41:33.388855   22301 main.go:141] libmachine: (functional-023523) Calling .Close
I0717 00:41:33.389088   22301 main.go:141] libmachine: Successfully made call to close driver server
I0717 00:41:33.389101   22301 main.go:141] libmachine: (functional-023523) DBG | Closing plugin on server side
I0717 00:41:33.389112   22301 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-023523 image ls --format yaml --alsologtostderr:
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e
- registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.2
size: "112194888"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f
repoDigests:
- docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266
- docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8
repoTags:
- docker.io/kindest/kindnetd:v20240513-cd2ac642
size: "65908273"
- id: 91348a8203b323c742332e395e797bc8b9ec2d8b08f25af32703611c6d40f141
repoDigests:
- localhost/minikube-local-cache-test@sha256:c87ddb8470e2187926109e69d13b0375cbb5e8af0922df570caf18c52d967de5
repoTags:
- localhost/minikube-local-cache-test:functional-023523
size: "3330"
- id: 7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc
- registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.2
size: "63051080"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-023523
size: "4943877"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772
repoDigests:
- registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961
- registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec
repoTags:
- registry.k8s.io/kube-proxy:v1.30.2
size: "85953433"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816
- registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.2
size: "117609954"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-023523 image ls --format yaml --alsologtostderr:
I0717 00:41:29.005117   22183 out.go:291] Setting OutFile to fd 1 ...
I0717 00:41:29.005235   22183 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:41:29.005245   22183 out.go:304] Setting ErrFile to fd 2...
I0717 00:41:29.005251   22183 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:41:29.005500   22183 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
I0717 00:41:29.006247   22183 config.go:182] Loaded profile config "functional-023523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:41:29.006409   22183 config.go:182] Loaded profile config "functional-023523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:41:29.006983   22183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 00:41:29.007041   22183 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 00:41:29.021824   22183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33883
I0717 00:41:29.022253   22183 main.go:141] libmachine: () Calling .GetVersion
I0717 00:41:29.022806   22183 main.go:141] libmachine: Using API Version  1
I0717 00:41:29.022828   22183 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 00:41:29.023101   22183 main.go:141] libmachine: () Calling .GetMachineName
I0717 00:41:29.023284   22183 main.go:141] libmachine: (functional-023523) Calling .GetState
I0717 00:41:29.025145   22183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 00:41:29.025188   22183 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 00:41:29.039822   22183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40125
I0717 00:41:29.040163   22183 main.go:141] libmachine: () Calling .GetVersion
I0717 00:41:29.040635   22183 main.go:141] libmachine: Using API Version  1
I0717 00:41:29.040664   22183 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 00:41:29.040958   22183 main.go:141] libmachine: () Calling .GetMachineName
I0717 00:41:29.041163   22183 main.go:141] libmachine: (functional-023523) Calling .DriverName
I0717 00:41:29.041394   22183 ssh_runner.go:195] Run: systemctl --version
I0717 00:41:29.041421   22183 main.go:141] libmachine: (functional-023523) Calling .GetSSHHostname
I0717 00:41:29.044271   22183 main.go:141] libmachine: (functional-023523) DBG | domain functional-023523 has defined MAC address 52:54:00:c6:46:06 in network mk-functional-023523
I0717 00:41:29.044733   22183 main.go:141] libmachine: (functional-023523) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:06", ip: ""} in network mk-functional-023523: {Iface:virbr1 ExpiryTime:2024-07-17 01:37:44 +0000 UTC Type:0 Mac:52:54:00:c6:46:06 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:functional-023523 Clientid:01:52:54:00:c6:46:06}
I0717 00:41:29.044768   22183 main.go:141] libmachine: (functional-023523) DBG | domain functional-023523 has defined IP address 192.168.39.2 and MAC address 52:54:00:c6:46:06 in network mk-functional-023523
I0717 00:41:29.044888   22183 main.go:141] libmachine: (functional-023523) Calling .GetSSHPort
I0717 00:41:29.045050   22183 main.go:141] libmachine: (functional-023523) Calling .GetSSHKeyPath
I0717 00:41:29.045213   22183 main.go:141] libmachine: (functional-023523) Calling .GetSSHUsername
I0717 00:41:29.045330   22183 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/functional-023523/id_rsa Username:docker}
I0717 00:41:29.156528   22183 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 00:41:29.254542   22183 main.go:141] libmachine: Making call to close driver server
I0717 00:41:29.254576   22183 main.go:141] libmachine: (functional-023523) Calling .Close
I0717 00:41:29.254864   22183 main.go:141] libmachine: Successfully made call to close driver server
I0717 00:41:29.254880   22183 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 00:41:29.254892   22183 main.go:141] libmachine: Making call to close driver server
I0717 00:41:29.254899   22183 main.go:141] libmachine: (functional-023523) Calling .Close
I0717 00:41:29.255113   22183 main.go:141] libmachine: Successfully made call to close driver server
I0717 00:41:29.255128   22183 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)
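Note: a minimal sketch of reproducing the listings above by hand, assuming the functional-023523 profile is still running; all flags are the ones exercised in this run.

    out/minikube-linux-amd64 -p functional-023523 image ls                                  # short listing
    out/minikube-linux-amd64 -p functional-023523 image ls --format json --alsologtostderr  # same data as JSON
    out/minikube-linux-amd64 -p functional-023523 image ls --format yaml --alsologtostderr  # same data as YAML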

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-023523 ssh pgrep buildkitd: exit status 1 (200.530824ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 image build -t localhost/my-image:functional-023523 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-023523 image build -t localhost/my-image:functional-023523 testdata/build --alsologtostderr: (3.513569228s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-023523 image build -t localhost/my-image:functional-023523 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 953288f8bd8
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-023523
--> 493686b8b69
Successfully tagged localhost/my-image:functional-023523
493686b8b69256c300218c7a0b03622c1825114fa9e71516a41a2da11b81c840
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-023523 image build -t localhost/my-image:functional-023523 testdata/build --alsologtostderr:
I0717 00:41:29.499591   22237 out.go:291] Setting OutFile to fd 1 ...
I0717 00:41:29.499881   22237 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:41:29.499891   22237 out.go:304] Setting ErrFile to fd 2...
I0717 00:41:29.499898   22237 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:41:29.500090   22237 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
I0717 00:41:29.500599   22237 config.go:182] Loaded profile config "functional-023523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:41:29.501091   22237 config.go:182] Loaded profile config "functional-023523": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:41:29.501439   22237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 00:41:29.501498   22237 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 00:41:29.515968   22237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43375
I0717 00:41:29.516366   22237 main.go:141] libmachine: () Calling .GetVersion
I0717 00:41:29.516792   22237 main.go:141] libmachine: Using API Version  1
I0717 00:41:29.516815   22237 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 00:41:29.517160   22237 main.go:141] libmachine: () Calling .GetMachineName
I0717 00:41:29.517349   22237 main.go:141] libmachine: (functional-023523) Calling .GetState
I0717 00:41:29.519082   22237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 00:41:29.519117   22237 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 00:41:29.532838   22237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33657
I0717 00:41:29.533269   22237 main.go:141] libmachine: () Calling .GetVersion
I0717 00:41:29.533703   22237 main.go:141] libmachine: Using API Version  1
I0717 00:41:29.533725   22237 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 00:41:29.534109   22237 main.go:141] libmachine: () Calling .GetMachineName
I0717 00:41:29.534285   22237 main.go:141] libmachine: (functional-023523) Calling .DriverName
I0717 00:41:29.534499   22237 ssh_runner.go:195] Run: systemctl --version
I0717 00:41:29.534523   22237 main.go:141] libmachine: (functional-023523) Calling .GetSSHHostname
I0717 00:41:29.537047   22237 main.go:141] libmachine: (functional-023523) DBG | domain functional-023523 has defined MAC address 52:54:00:c6:46:06 in network mk-functional-023523
I0717 00:41:29.537484   22237 main.go:141] libmachine: (functional-023523) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:06", ip: ""} in network mk-functional-023523: {Iface:virbr1 ExpiryTime:2024-07-17 01:37:44 +0000 UTC Type:0 Mac:52:54:00:c6:46:06 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:functional-023523 Clientid:01:52:54:00:c6:46:06}
I0717 00:41:29.537514   22237 main.go:141] libmachine: (functional-023523) DBG | domain functional-023523 has defined IP address 192.168.39.2 and MAC address 52:54:00:c6:46:06 in network mk-functional-023523
I0717 00:41:29.537623   22237 main.go:141] libmachine: (functional-023523) Calling .GetSSHPort
I0717 00:41:29.537798   22237 main.go:141] libmachine: (functional-023523) Calling .GetSSHKeyPath
I0717 00:41:29.537961   22237 main.go:141] libmachine: (functional-023523) Calling .GetSSHUsername
I0717 00:41:29.538118   22237 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/functional-023523/id_rsa Username:docker}
I0717 00:41:29.621564   22237 build_images.go:161] Building image from path: /tmp/build.1416970014.tar
I0717 00:41:29.621635   22237 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0717 00:41:29.631719   22237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1416970014.tar
I0717 00:41:29.636083   22237 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1416970014.tar: stat -c "%s %y" /var/lib/minikube/build/build.1416970014.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1416970014.tar': No such file or directory
I0717 00:41:29.636115   22237 ssh_runner.go:362] scp /tmp/build.1416970014.tar --> /var/lib/minikube/build/build.1416970014.tar (3072 bytes)
I0717 00:41:29.660475   22237 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1416970014
I0717 00:41:29.670275   22237 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1416970014 -xf /var/lib/minikube/build/build.1416970014.tar
I0717 00:41:29.679116   22237 crio.go:315] Building image: /var/lib/minikube/build/build.1416970014
I0717 00:41:29.679162   22237 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-023523 /var/lib/minikube/build/build.1416970014 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0717 00:41:32.945016   22237 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-023523 /var/lib/minikube/build/build.1416970014 --cgroup-manager=cgroupfs: (3.26581727s)
I0717 00:41:32.945093   22237 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1416970014
I0717 00:41:32.960278   22237 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1416970014.tar
I0717 00:41:32.970061   22237 build_images.go:217] Built localhost/my-image:functional-023523 from /tmp/build.1416970014.tar
I0717 00:41:32.970089   22237 build_images.go:133] succeeded building to: functional-023523
I0717 00:41:32.970093   22237 build_images.go:134] failed building to: 
I0717 00:41:32.970112   22237 main.go:141] libmachine: Making call to close driver server
I0717 00:41:32.970120   22237 main.go:141] libmachine: (functional-023523) Calling .Close
I0717 00:41:32.970366   22237 main.go:141] libmachine: Successfully made call to close driver server
I0717 00:41:32.970385   22237 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 00:41:32.970393   22237 main.go:141] libmachine: Making call to close driver server
I0717 00:41:32.970400   22237 main.go:141] libmachine: (functional-023523) Calling .Close
I0717 00:41:32.970410   22237 main.go:141] libmachine: (functional-023523) DBG | Closing plugin on server side
I0717 00:41:32.970655   22237 main.go:141] libmachine: Successfully made call to close driver server
I0717 00:41:32.970703   22237 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 00:41:32.970731   22237 main.go:141] libmachine: (functional-023523) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.92s)
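Note: a minimal sketch of the build flow this test exercises, assuming the functional-023523 profile is running and testdata/build contains a Dockerfile; with the crio runtime the log above shows the CLI copying the build context to the guest and building it there with podman.

    out/minikube-linux-amd64 -p functional-023523 ssh pgrep buildkitd || true    # no buildkitd on a crio guest, as the non-zero exit above shows
    out/minikube-linux-amd64 -p functional-023523 image build -t localhost/my-image:functional-023523 testdata/build --alsologtostderr
    out/minikube-linux-amd64 -p functional-023523 image ls                       # the new localhost/my-image tag should now appear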

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (2.862514298s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-023523
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.88s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "208.309905ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "42.278646ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.25s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "203.117784ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "41.984806ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 image load --daemon docker.io/kicbase/echo-server:functional-023523 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-023523 image load --daemon docker.io/kicbase/echo-server:functional-023523 --alsologtostderr: (1.01840954s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 image load --daemon docker.io/kicbase/echo-server:functional-023523 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:234: (dbg) Done: docker pull docker.io/kicbase/echo-server:latest: (1.193396538s)
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-023523
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 image load --daemon docker.io/kicbase/echo-server:functional-023523 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.05s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 image save docker.io/kicbase/echo-server:functional-023523 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 image rm docker.io/kicbase/echo-server:functional-023523 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-023523
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 image save --daemon docker.io/kicbase/echo-server:functional-023523 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-023523
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)
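Note: the ImageCommands tests above amount to a save/load round trip between the host docker daemon and the profile's crio runtime; a minimal sketch using only commands from the log, with ./echo-server-save.tar standing in for the workspace path used by this run.

    docker pull docker.io/kicbase/echo-server:1.0
    docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-023523
    out/minikube-linux-amd64 -p functional-023523 image load --daemon docker.io/kicbase/echo-server:functional-023523 --alsologtostderr   # host daemon -> crio
    out/minikube-linux-amd64 -p functional-023523 image save docker.io/kicbase/echo-server:functional-023523 ./echo-server-save.tar --alsologtostderr
    out/minikube-linux-amd64 -p functional-023523 image rm docker.io/kicbase/echo-server:functional-023523 --alsologtostderr
    out/minikube-linux-amd64 -p functional-023523 image load ./echo-server-save.tar --alsologtostderr                                     # restore from the tarball
    out/minikube-linux-amd64 -p functional-023523 image save --daemon docker.io/kicbase/echo-server:functional-023523 --alsologtostderr   # crio -> host daemon
    out/minikube-linux-amd64 -p functional-023523 image ls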

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (60.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-023523 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-023523 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-cff5v" [82159e3d-9613-465f-936c-7b8d0ab66aad] Pending
helpers_test.go:344: "hello-node-6d85cfcfd8-cff5v" [82159e3d-9613-465f-936c-7b8d0ab66aad] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-cff5v" [82159e3d-9613-465f-936c-7b8d0ab66aad] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 1m0.004224642s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (60.16s)
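Note: a minimal sketch of the deployment the ServiceCmd tests rely on; the kubectl wait line is a hand substitute for the test's own readiness polling, not something this run executed.

    kubectl --context functional-023523 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-023523 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-023523 wait --for=condition=ready pod -l app=hello-node --timeout=10m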

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-023523 /tmp/TestFunctionalparallelMountCmdany-port4244047740/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721176885613431385" to /tmp/TestFunctionalparallelMountCmdany-port4244047740/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721176885613431385" to /tmp/TestFunctionalparallelMountCmdany-port4244047740/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721176885613431385" to /tmp/TestFunctionalparallelMountCmdany-port4244047740/001/test-1721176885613431385
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-023523 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (200.333386ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 17 00:41 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 17 00:41 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 17 00:41 test-1721176885613431385
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh cat /mount-9p/test-1721176885613431385
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-023523 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e2305f04-cb6a-472a-828d-4da88721a51c] Pending
helpers_test.go:344: "busybox-mount" [e2305f04-cb6a-472a-828d-4da88721a51c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e2305f04-cb6a-472a-828d-4da88721a51c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e2305f04-cb6a-472a-828d-4da88721a51c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.004350205s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-023523 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-023523 /tmp/TestFunctionalparallelMountCmdany-port4244047740/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.46s)
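Note: a minimal sketch of the 9p mount flow exercised above, with /tmp/hostdir standing in for the per-test temp directory; minikube mount stays in the foreground, so it is backgrounded here the way the test runs it as a daemon.

    out/minikube-linux-amd64 mount -p functional-023523 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-amd64 -p functional-023523 ssh "findmnt -T /mount-9p | grep 9p"   # confirm the 9p filesystem is mounted in the guest
    out/minikube-linux-amd64 -p functional-023523 ssh -- ls -la /mount-9p                # host files are visible from the guest
    out/minikube-linux-amd64 -p functional-023523 ssh "sudo umount -f /mount-9p"         # teardown, as the test cleanup does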

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 service list -o json
functional_test.go:1490: Took "443.203672ms" to run "out/minikube-linux-amd64 -p functional-023523 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.2:32543
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.2:32543
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.32s)
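Note: the ServiceCmd tests above resolve the hello-node NodePort in several ways; a minimal sketch using the same commands (the https://192.168.39.2:32543 endpoint is specific to this run).

    out/minikube-linux-amd64 -p functional-023523 service list
    out/minikube-linux-amd64 -p functional-023523 service list -o json
    out/minikube-linux-amd64 -p functional-023523 service --namespace=default --https --url hello-node   # full https endpoint
    out/minikube-linux-amd64 -p functional-023523 service hello-node --url --format={{.IP}}              # node IP only
    out/minikube-linux-amd64 -p functional-023523 service hello-node --url                               # plain http endpoint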

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-023523 /tmp/TestFunctionalparallelMountCmdspecific-port3318578296/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-023523 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (224.91543ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-023523 /tmp/TestFunctionalparallelMountCmdspecific-port3318578296/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-023523 ssh "sudo umount -f /mount-9p": exit status 1 (183.988247ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-023523 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-023523 /tmp/TestFunctionalparallelMountCmdspecific-port3318578296/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.80s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-023523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2353018226/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-023523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2353018226/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-023523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2353018226/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-023523 ssh "findmnt -T" /mount1: exit status 1 (199.490764ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-023523 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-023523 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-023523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2353018226/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-023523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2353018226/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-023523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2353018226/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
2024/07/17 00:41:49 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.11s)
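Note: a minimal sketch of the fixed-port variant and the cleanup path shown in the last two mount tests; /tmp/hostdir is again a stand-in for the test's temp directory.

    out/minikube-linux-amd64 mount -p functional-023523 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 --port 46464 &   # pin the 9p server to port 46464
    out/minikube-linux-amd64 mount -p functional-023523 --kill=true                                                    # kill every mount helper for the profile, as VerifyCleanup does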

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-023523
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-023523
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-023523
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (293.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-029113 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0717 00:45:17.183350   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
E0717 00:45:17.188729   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
E0717 00:45:17.199012   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
E0717 00:45:17.219322   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
E0717 00:45:17.259646   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
E0717 00:45:17.339977   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
E0717 00:45:17.500389   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
E0717 00:45:17.820962   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
E0717 00:45:18.462033   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
E0717 00:45:19.742486   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
E0717 00:45:22.302677   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
E0717 00:45:27.423644   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
E0717 00:45:37.664265   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
E0717 00:45:58.145393   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
E0717 00:46:39.105970   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
E0717 00:47:58.379249   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
E0717 00:48:01.026740   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-029113 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m52.408562595s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (293.06s)
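Note: a minimal sketch of starting and checking the HA cluster with the exact invocation logged above, assuming KVM and the crio runtime are available on the host.

    out/minikube-linux-amd64 start -p ha-029113 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p ha-029113 status -v=7 --alsologtostderr   # every control-plane node should report Running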

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-029113 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-029113 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-029113 -- rollout status deployment/busybox: (5.358937069s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-029113 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-029113 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-029113 -- exec busybox-fc5497c4f-l4ctd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-029113 -- exec busybox-fc5497c4f-pf5xn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-029113 -- exec busybox-fc5497c4f-w8w7k -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-029113 -- exec busybox-fc5497c4f-l4ctd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-029113 -- exec busybox-fc5497c4f-pf5xn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-029113 -- exec busybox-fc5497c4f-w8w7k -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-029113 -- exec busybox-fc5497c4f-l4ctd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-029113 -- exec busybox-fc5497c4f-pf5xn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-029113 -- exec busybox-fc5497c4f-w8w7k -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.50s)
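The DeployApp entry above applies a busybox deployment, waits for the rollout, and then execs nslookup inside every pod to confirm in-cluster DNS. The following is a minimal Go sketch of the same check; it assumes kubectl is on PATH and pointed at the ha-029113 context, whereas the test itself routes every command through out/minikube-linux-amd64 kubectl -p ha-029113 --.

	// dnscheck.go: a sketch of the in-cluster DNS check performed by DeployApp.
	// Assumes kubectl is on PATH and that the ha-029113 context exists.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// List pod names in the default namespace, as the test's jsonpath query does.
		out, err := exec.Command("kubectl", "--context", "ha-029113", "get", "pods",
			"-o", "jsonpath={.items[*].metadata.name}").Output()
		if err != nil {
			log.Fatalf("listing pods: %v", err)
		}

		hosts := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
		for _, pod := range strings.Fields(string(out)) {
			for _, host := range hosts {
				// Any nslookup failure inside a pod means cluster DNS is broken.
				if err := exec.Command("kubectl", "--context", "ha-029113",
					"exec", pod, "--", "nslookup", host).Run(); err != nil {
					log.Fatalf("pod %s could not resolve %s: %v", pod, host, err)
				}
				fmt.Printf("%s resolved %s\n", pod, host)
			}
		}
	}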

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-029113 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-029113 -- exec busybox-fc5497c4f-l4ctd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-029113 -- exec busybox-fc5497c4f-l4ctd -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-029113 -- exec busybox-fc5497c4f-pf5xn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-029113 -- exec busybox-fc5497c4f-pf5xn -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-029113 -- exec busybox-fc5497c4f-w8w7k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-029113 -- exec busybox-fc5497c4f-w8w7k -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.18s)
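The pipeline used above, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, takes the fifth line of busybox's nslookup output and its third field, which is the address the name resolved to; the test then pings that address from inside the pod to prove the host is reachable. A rough Go equivalent of the extraction step, assuming the same context and one of the pod names from the log:

	// hostip.go: a sketch of the awk 'NR==5' | cut -d' ' -f3 extraction above.
	// Pod name and context are the ones from the log; strings.Fields collapses
	// runs of spaces, which is close enough to cut -d' ' -f3 for this output.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "ha-029113", "exec",
			"busybox-fc5497c4f-l4ctd", "--", "nslookup", "host.minikube.internal").Output()
		if err != nil {
			log.Fatalf("nslookup in pod failed: %v", err)
		}
		lines := strings.Split(string(out), "\n")
		if len(lines) < 5 {
			log.Fatalf("unexpected nslookup output: %q", out)
		}
		fields := strings.Fields(lines[4]) // NR==5 selects the fifth line
		if len(fields) < 3 {
			log.Fatalf("unexpected line format: %q", lines[4])
		}
		fmt.Println("host.minikube.internal resolves to", fields[2]) // -f3
	}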

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (61.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-029113 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-029113 -v=7 --alsologtostderr: (1m0.217987183s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (61.04s)
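AddWorkerNode attaches a fourth machine to the cluster and then asks for overall status. A small sketch of the same flow, assuming the out/minikube-linux-amd64 binary and the ha-029113 profile used throughout this report; since minikube status exits non-zero when any node is down (the StopNode entry later in this report shows exit status 7 for a stopped worker), a clean exit doubles as the verification.

	// addnode.go: a sketch of the node-add-then-verify flow above.
	// Binary path and profile name are assumptions taken from this report.
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func run(args ...string) error {
		cmd := exec.Command("out/minikube-linux-amd64", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		if err := run("node", "add", "-p", "ha-029113"); err != nil {
			log.Fatalf("node add failed: %v", err)
		}
		// A nil error means every node reported Running/Configured.
		if err := run("-p", "ha-029113", "status"); err != nil {
			log.Fatalf("status reported a problem: %v", err)
		}
	}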

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-029113 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 cp testdata/cp-test.txt ha-029113:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 cp ha-029113:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile695400083/001/cp-test_ha-029113.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 cp ha-029113:/home/docker/cp-test.txt ha-029113-m02:/home/docker/cp-test_ha-029113_ha-029113-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113-m02 "sudo cat /home/docker/cp-test_ha-029113_ha-029113-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 cp ha-029113:/home/docker/cp-test.txt ha-029113-m03:/home/docker/cp-test_ha-029113_ha-029113-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113-m03 "sudo cat /home/docker/cp-test_ha-029113_ha-029113-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 cp ha-029113:/home/docker/cp-test.txt ha-029113-m04:/home/docker/cp-test_ha-029113_ha-029113-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113-m04 "sudo cat /home/docker/cp-test_ha-029113_ha-029113-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 cp testdata/cp-test.txt ha-029113-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 cp ha-029113-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile695400083/001/cp-test_ha-029113-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 cp ha-029113-m02:/home/docker/cp-test.txt ha-029113:/home/docker/cp-test_ha-029113-m02_ha-029113.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113 "sudo cat /home/docker/cp-test_ha-029113-m02_ha-029113.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 cp ha-029113-m02:/home/docker/cp-test.txt ha-029113-m03:/home/docker/cp-test_ha-029113-m02_ha-029113-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113-m03 "sudo cat /home/docker/cp-test_ha-029113-m02_ha-029113-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 cp ha-029113-m02:/home/docker/cp-test.txt ha-029113-m04:/home/docker/cp-test_ha-029113-m02_ha-029113-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113-m04 "sudo cat /home/docker/cp-test_ha-029113-m02_ha-029113-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 cp testdata/cp-test.txt ha-029113-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 cp ha-029113-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile695400083/001/cp-test_ha-029113-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 cp ha-029113-m03:/home/docker/cp-test.txt ha-029113:/home/docker/cp-test_ha-029113-m03_ha-029113.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113 "sudo cat /home/docker/cp-test_ha-029113-m03_ha-029113.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 cp ha-029113-m03:/home/docker/cp-test.txt ha-029113-m02:/home/docker/cp-test_ha-029113-m03_ha-029113-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113-m02 "sudo cat /home/docker/cp-test_ha-029113-m03_ha-029113-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 cp ha-029113-m03:/home/docker/cp-test.txt ha-029113-m04:/home/docker/cp-test_ha-029113-m03_ha-029113-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113-m04 "sudo cat /home/docker/cp-test_ha-029113-m03_ha-029113-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 cp testdata/cp-test.txt ha-029113-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 cp ha-029113-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile695400083/001/cp-test_ha-029113-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 cp ha-029113-m04:/home/docker/cp-test.txt ha-029113:/home/docker/cp-test_ha-029113-m04_ha-029113.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113 "sudo cat /home/docker/cp-test_ha-029113-m04_ha-029113.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 cp ha-029113-m04:/home/docker/cp-test.txt ha-029113-m02:/home/docker/cp-test_ha-029113-m04_ha-029113-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113-m02 "sudo cat /home/docker/cp-test_ha-029113-m04_ha-029113-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 cp ha-029113-m04:/home/docker/cp-test.txt ha-029113-m03:/home/docker/cp-test_ha-029113-m04_ha-029113-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 ssh -n ha-029113-m03 "sudo cat /home/docker/cp-test_ha-029113-m04_ha-029113-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.39s)
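CopyFile pushes testdata/cp-test.txt onto every node with minikube cp and reads it back over minikube ssh to prove the copy arrived intact. One leg of that round trip as a Go sketch; the binary, profile, node name, and remote path are copied from the commands above.

	// cproundtrip.go: one leg of the CopyFile verification shown above.
	package main

	import (
		"bytes"
		"log"
		"os"
		"os/exec"
	)

	func main() {
		const (
			bin     = "out/minikube-linux-amd64"
			profile = "ha-029113"
			node    = "ha-029113-m02"
			remote  = "/home/docker/cp-test.txt"
		)

		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			log.Fatalf("reading local test file: %v", err)
		}

		// Copy the file onto the node, then read it back over ssh and compare.
		if err := exec.Command(bin, "-p", profile, "cp", "testdata/cp-test.txt", node+":"+remote).Run(); err != nil {
			log.Fatalf("cp failed: %v", err)
		}
		got, err := exec.Command(bin, "-p", profile, "ssh", "-n", node, "sudo cat "+remote).Output()
		if err != nil {
			log.Fatalf("ssh cat failed: %v", err)
		}
		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			log.Fatalf("content mismatch: got %q, want %q", got, want)
		}
		log.Printf("%s round-tripped intact through %s", remote, node)
	}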

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.453636726s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.45s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.38s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-029113 node delete m03 -v=7 --alsologtostderr: (16.220908338s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.95s)
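After deleting m03, the test confirms the surviving nodes still report Ready with the go-template shown above. The same check as a Go sketch, assuming kubectl is on PATH and the ha-029113 context is current:

	// readycheck.go: the Ready-condition check run after node deletion,
	// using the same go-template that appears in the log.
	package main

	import (
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// One status value per node: the .status of its Ready condition.
		tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
		if err != nil {
			log.Fatalf("kubectl get nodes: %v", err)
		}
		for _, status := range strings.Fields(string(out)) {
			if status != "True" {
				log.Fatalf("a node is not Ready: %q", out)
			}
		}
		log.Print("all remaining nodes report Ready")
	}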

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (352.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-029113 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0717 01:02:58.379279   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
E0717 01:05:17.179654   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-029113 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m51.299534358s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (352.03s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (81.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-029113 --control-plane -v=7 --alsologtostderr
E0717 01:07:58.379905   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-029113 --control-plane -v=7 --alsologtostderr: (1m20.690126698s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-029113 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (81.50s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

                                                
                                    
TestJSONOutput/start/Command (54.54s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-101269 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-101269 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (54.542747496s)
--- PASS: TestJSONOutput/start/Command (54.54s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-101269 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-101269 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.33s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-101269 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-101269 --output=json --user=testUser: (7.327598566s)
--- PASS: TestJSONOutput/stop/Command (7.33s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-790017 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-790017 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (62.840676ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"db184b4f-e835-44e3-bdc3-f9b000d99dd6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-790017] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"53f3d2b5-dee6-4157-ba62-a2efd9f17b41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19264"}}
	{"specversion":"1.0","id":"92d8e52c-b252-48dd-88dd-ff77edb8a052","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"003320af-f67c-4426-8452-c93f415341fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig"}}
	{"specversion":"1.0","id":"ad32062f-abc8-448c-b5b0-e9526af6f413","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube"}}
	{"specversion":"1.0","id":"5ad00152-5605-4fc2-a447-3160094576a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"77b15e31-ac64-45cd-a110-4cbd5ada758b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3d5c3aaf-995e-4ef5-9f98-e3f7466d758f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-790017" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-790017
--- PASS: TestErrorJSONOutput (0.19s)
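Every line minikube emits under --output=json is a CloudEvents envelope like those in the stdout block above; the failure case carries type io.k8s.sigs.minikube.error with message and exitcode entries in data. Below is a sketch that scans such a stream from stdin and surfaces error events; the field names are copied from the events printed in this report. Pipe a JSON run into it, e.g. out/minikube-linux-amd64 start ... --output=json | go run jsonevents.go.

	// jsonevents.go: scan minikube --output=json lines and report error events.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"log"
		"os"
		"strings"
	)

	// event models just the envelope fields this sketch needs.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if !strings.HasPrefix(line, "{") {
				continue // ignore anything that is not a JSON event line
			}
			var ev event
			if err := json.Unmarshal([]byte(line), &ev); err != nil {
				log.Printf("unparseable line: %v", err)
				continue
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error event: %s (exitcode %s)\n", ev.Data["message"], ev.Data["exitcode"])
			}
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
	}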

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (84.75s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-361243 --driver=kvm2  --container-runtime=crio
E0717 01:10:17.182588   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-361243 --driver=kvm2  --container-runtime=crio: (41.572471854s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-363700 --driver=kvm2  --container-runtime=crio
E0717 01:11:01.426250   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-363700 --driver=kvm2  --container-runtime=crio: (40.803571129s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-361243
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-363700
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-363700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-363700
helpers_test.go:175: Cleaning up "first-361243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-361243
--- PASS: TestMinikubeProfile (84.75s)
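TestMinikubeProfile starts two clusters, makes each the active profile in turn, and lists profiles as JSON. A sketch of that switch-and-list flow; it decodes the JSON into a generic value on purpose, since this report does not show the profile list payload and its schema should not be assumed. The binary and profile name are the ones from the log.

	// profiles.go: switch the active profile, then dump `profile list -ojson`.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		const bin = "out/minikube-linux-amd64"

		// Make first-361243 the active profile, as the test does.
		if err := exec.Command(bin, "profile", "first-361243").Run(); err != nil {
			log.Fatalf("selecting profile: %v", err)
		}

		// List all profiles as JSON and pretty-print whatever structure comes back.
		out, err := exec.Command(bin, "profile", "list", "-ojson").Output()
		if err != nil {
			log.Fatalf("profile list: %v", err)
		}
		var v interface{}
		if err := json.Unmarshal(out, &v); err != nil {
			log.Fatalf("decoding profile list output: %v", err)
		}
		pretty, _ := json.MarshalIndent(v, "", "  ")
		fmt.Println(string(pretty))
	}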

                                                
                                    
TestMountStart/serial/StartWithMountFirst (31.06s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-709329 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-709329 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.058080169s)
--- PASS: TestMountStart/serial/StartWithMountFirst (31.06s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-709329 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-709329 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.42s)
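The VerifyMount* entries check that the host directory is visible inside the guest and that it is served over the 9p protocol (minikube ssh -- mount | grep 9p). A Go sketch of the same verification, with the binary, profile, and /minikube-host mount point taken from the commands above:

	// verifymount.go: confirm the 9p mount of /minikube-host inside the guest.
	package main

	import (
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "mount-start-1-709329",
			"ssh", "--", "mount").Output()
		if err != nil {
			log.Fatalf("running mount in the guest: %v", err)
		}
		// The test greps for "9p"; this also requires the expected mount point.
		for _, line := range strings.Split(string(out), "\n") {
			if strings.Contains(line, "9p") && strings.Contains(line, "/minikube-host") {
				log.Printf("host mount present: %s", strings.TrimSpace(line))
				return
			}
		}
		log.Fatal("no 9p mount of /minikube-host found")
	}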

                                                
                                    
TestMountStart/serial/StartWithMountSecond (24.68s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-722519 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-722519 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.68464236s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.68s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-722519 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-722519 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-709329 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-722519 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-722519 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-722519
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-722519: (1.278138866s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.59s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-722519
E0717 01:12:58.379222   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-722519: (21.593047739s)
--- PASS: TestMountStart/serial/RestartStopped (22.59s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-722519 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-722519 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (124.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-025900 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-025900 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m4.243976335s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (124.65s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025900 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025900 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-025900 -- rollout status deployment/busybox: (5.322222966s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025900 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025900 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025900 -- exec busybox-fc5497c4f-mn98f -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025900 -- exec busybox-fc5497c4f-srx86 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025900 -- exec busybox-fc5497c4f-mn98f -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025900 -- exec busybox-fc5497c4f-srx86 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025900 -- exec busybox-fc5497c4f-mn98f -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025900 -- exec busybox-fc5497c4f-srx86 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.82s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025900 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025900 -- exec busybox-fc5497c4f-mn98f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025900 -- exec busybox-fc5497c4f-mn98f -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025900 -- exec busybox-fc5497c4f-srx86 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-025900 -- exec busybox-fc5497c4f-srx86 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                    
TestMultiNode/serial/AddNode (49.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-025900 -v 3 --alsologtostderr
E0717 01:15:17.179218   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-025900 -v 3 --alsologtostderr: (49.383535202s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (49.95s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-025900 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 cp testdata/cp-test.txt multinode-025900:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 ssh -n multinode-025900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 cp multinode-025900:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3937052499/001/cp-test_multinode-025900.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 ssh -n multinode-025900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 cp multinode-025900:/home/docker/cp-test.txt multinode-025900-m02:/home/docker/cp-test_multinode-025900_multinode-025900-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 ssh -n multinode-025900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 ssh -n multinode-025900-m02 "sudo cat /home/docker/cp-test_multinode-025900_multinode-025900-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 cp multinode-025900:/home/docker/cp-test.txt multinode-025900-m03:/home/docker/cp-test_multinode-025900_multinode-025900-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 ssh -n multinode-025900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 ssh -n multinode-025900-m03 "sudo cat /home/docker/cp-test_multinode-025900_multinode-025900-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 cp testdata/cp-test.txt multinode-025900-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 ssh -n multinode-025900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 cp multinode-025900-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3937052499/001/cp-test_multinode-025900-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 ssh -n multinode-025900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 cp multinode-025900-m02:/home/docker/cp-test.txt multinode-025900:/home/docker/cp-test_multinode-025900-m02_multinode-025900.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 ssh -n multinode-025900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 ssh -n multinode-025900 "sudo cat /home/docker/cp-test_multinode-025900-m02_multinode-025900.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 cp multinode-025900-m02:/home/docker/cp-test.txt multinode-025900-m03:/home/docker/cp-test_multinode-025900-m02_multinode-025900-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 ssh -n multinode-025900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 ssh -n multinode-025900-m03 "sudo cat /home/docker/cp-test_multinode-025900-m02_multinode-025900-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 cp testdata/cp-test.txt multinode-025900-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 ssh -n multinode-025900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 cp multinode-025900-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3937052499/001/cp-test_multinode-025900-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 ssh -n multinode-025900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 cp multinode-025900-m03:/home/docker/cp-test.txt multinode-025900:/home/docker/cp-test_multinode-025900-m03_multinode-025900.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 ssh -n multinode-025900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 ssh -n multinode-025900 "sudo cat /home/docker/cp-test_multinode-025900-m03_multinode-025900.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 cp multinode-025900-m03:/home/docker/cp-test.txt multinode-025900-m02:/home/docker/cp-test_multinode-025900-m03_multinode-025900-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 ssh -n multinode-025900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 ssh -n multinode-025900-m02 "sudo cat /home/docker/cp-test_multinode-025900-m03_multinode-025900-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.11s)

                                                
                                    
TestMultiNode/serial/StopNode (2.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-025900 node stop m03: (1.502754138s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-025900 status: exit status 7 (423.570978ms)

                                                
                                                
-- stdout --
	multinode-025900
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-025900-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-025900-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-025900 status --alsologtostderr: exit status 7 (416.945665ms)

                                                
                                                
-- stdout --
	multinode-025900
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-025900-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-025900-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 01:16:12.196052   40871 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:16:12.196146   40871 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:16:12.196154   40871 out.go:304] Setting ErrFile to fd 2...
	I0717 01:16:12.196158   40871 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:16:12.196412   40871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 01:16:12.196575   40871 out.go:298] Setting JSON to false
	I0717 01:16:12.196601   40871 mustload.go:65] Loading cluster: multinode-025900
	I0717 01:16:12.196718   40871 notify.go:220] Checking for updates...
	I0717 01:16:12.197000   40871 config.go:182] Loaded profile config "multinode-025900": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:16:12.197015   40871 status.go:255] checking status of multinode-025900 ...
	I0717 01:16:12.197451   40871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:16:12.197494   40871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:16:12.213085   40871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44147
	I0717 01:16:12.213562   40871 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:16:12.214224   40871 main.go:141] libmachine: Using API Version  1
	I0717 01:16:12.214245   40871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:16:12.214663   40871 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:16:12.214891   40871 main.go:141] libmachine: (multinode-025900) Calling .GetState
	I0717 01:16:12.216619   40871 status.go:330] multinode-025900 host status = "Running" (err=<nil>)
	I0717 01:16:12.216637   40871 host.go:66] Checking if "multinode-025900" exists ...
	I0717 01:16:12.216952   40871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:16:12.216998   40871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:16:12.232166   40871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40415
	I0717 01:16:12.232564   40871 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:16:12.233011   40871 main.go:141] libmachine: Using API Version  1
	I0717 01:16:12.233033   40871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:16:12.233428   40871 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:16:12.233619   40871 main.go:141] libmachine: (multinode-025900) Calling .GetIP
	I0717 01:16:12.236377   40871 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:16:12.236764   40871 main.go:141] libmachine: (multinode-025900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d8:11", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:14 +0000 UTC Type:0 Mac:52:54:00:20:d8:11 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-025900 Clientid:01:52:54:00:20:d8:11}
	I0717 01:16:12.236801   40871 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined IP address 192.168.39.81 and MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:16:12.236873   40871 host.go:66] Checking if "multinode-025900" exists ...
	I0717 01:16:12.237154   40871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:16:12.237185   40871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:16:12.253351   40871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33395
	I0717 01:16:12.253768   40871 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:16:12.254330   40871 main.go:141] libmachine: Using API Version  1
	I0717 01:16:12.254349   40871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:16:12.254681   40871 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:16:12.254855   40871 main.go:141] libmachine: (multinode-025900) Calling .DriverName
	I0717 01:16:12.255052   40871 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 01:16:12.255084   40871 main.go:141] libmachine: (multinode-025900) Calling .GetSSHHostname
	I0717 01:16:12.257705   40871 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:16:12.258098   40871 main.go:141] libmachine: (multinode-025900) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:d8:11", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:13:14 +0000 UTC Type:0 Mac:52:54:00:20:d8:11 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-025900 Clientid:01:52:54:00:20:d8:11}
	I0717 01:16:12.258136   40871 main.go:141] libmachine: (multinode-025900) DBG | domain multinode-025900 has defined IP address 192.168.39.81 and MAC address 52:54:00:20:d8:11 in network mk-multinode-025900
	I0717 01:16:12.258272   40871 main.go:141] libmachine: (multinode-025900) Calling .GetSSHPort
	I0717 01:16:12.258434   40871 main.go:141] libmachine: (multinode-025900) Calling .GetSSHKeyPath
	I0717 01:16:12.258593   40871 main.go:141] libmachine: (multinode-025900) Calling .GetSSHUsername
	I0717 01:16:12.258712   40871 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/multinode-025900/id_rsa Username:docker}
	I0717 01:16:12.342097   40871 ssh_runner.go:195] Run: systemctl --version
	I0717 01:16:12.348353   40871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:16:12.362245   40871 kubeconfig.go:125] found "multinode-025900" server: "https://192.168.39.81:8443"
	I0717 01:16:12.362270   40871 api_server.go:166] Checking apiserver status ...
	I0717 01:16:12.362302   40871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:16:12.376564   40871 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup
	W0717 01:16:12.387417   40871 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 01:16:12.387491   40871 ssh_runner.go:195] Run: ls
	I0717 01:16:12.393224   40871 api_server.go:253] Checking apiserver healthz at https://192.168.39.81:8443/healthz ...
	I0717 01:16:12.397530   40871 api_server.go:279] https://192.168.39.81:8443/healthz returned 200:
	ok
	I0717 01:16:12.397555   40871 status.go:422] multinode-025900 apiserver status = Running (err=<nil>)
	I0717 01:16:12.397566   40871 status.go:257] multinode-025900 status: &{Name:multinode-025900 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 01:16:12.397587   40871 status.go:255] checking status of multinode-025900-m02 ...
	I0717 01:16:12.397970   40871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:16:12.398012   40871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:16:12.414366   40871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39525
	I0717 01:16:12.414743   40871 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:16:12.415265   40871 main.go:141] libmachine: Using API Version  1
	I0717 01:16:12.415295   40871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:16:12.415572   40871 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:16:12.415760   40871 main.go:141] libmachine: (multinode-025900-m02) Calling .GetState
	I0717 01:16:12.417109   40871 status.go:330] multinode-025900-m02 host status = "Running" (err=<nil>)
	I0717 01:16:12.417122   40871 host.go:66] Checking if "multinode-025900-m02" exists ...
	I0717 01:16:12.417424   40871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:16:12.417481   40871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:16:12.432123   40871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33183
	I0717 01:16:12.432446   40871 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:16:12.432911   40871 main.go:141] libmachine: Using API Version  1
	I0717 01:16:12.432938   40871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:16:12.433257   40871 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:16:12.433438   40871 main.go:141] libmachine: (multinode-025900-m02) Calling .GetIP
	I0717 01:16:12.436020   40871 main.go:141] libmachine: (multinode-025900-m02) DBG | domain multinode-025900-m02 has defined MAC address 52:54:00:d3:9f:56 in network mk-multinode-025900
	I0717 01:16:12.436328   40871 main.go:141] libmachine: (multinode-025900-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:9f:56", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:14:27 +0000 UTC Type:0 Mac:52:54:00:d3:9f:56 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-025900-m02 Clientid:01:52:54:00:d3:9f:56}
	I0717 01:16:12.436363   40871 main.go:141] libmachine: (multinode-025900-m02) DBG | domain multinode-025900-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:d3:9f:56 in network mk-multinode-025900
	I0717 01:16:12.436452   40871 host.go:66] Checking if "multinode-025900-m02" exists ...
	I0717 01:16:12.436740   40871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:16:12.436775   40871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:16:12.451249   40871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37277
	I0717 01:16:12.451648   40871 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:16:12.452096   40871 main.go:141] libmachine: Using API Version  1
	I0717 01:16:12.452118   40871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:16:12.452372   40871 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:16:12.452533   40871 main.go:141] libmachine: (multinode-025900-m02) Calling .DriverName
	I0717 01:16:12.452706   40871 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 01:16:12.452723   40871 main.go:141] libmachine: (multinode-025900-m02) Calling .GetSSHHostname
	I0717 01:16:12.455107   40871 main.go:141] libmachine: (multinode-025900-m02) DBG | domain multinode-025900-m02 has defined MAC address 52:54:00:d3:9f:56 in network mk-multinode-025900
	I0717 01:16:12.455526   40871 main.go:141] libmachine: (multinode-025900-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:9f:56", ip: ""} in network mk-multinode-025900: {Iface:virbr1 ExpiryTime:2024-07-17 02:14:27 +0000 UTC Type:0 Mac:52:54:00:d3:9f:56 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:multinode-025900-m02 Clientid:01:52:54:00:d3:9f:56}
	I0717 01:16:12.455562   40871 main.go:141] libmachine: (multinode-025900-m02) DBG | domain multinode-025900-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:d3:9f:56 in network mk-multinode-025900
	I0717 01:16:12.455693   40871 main.go:141] libmachine: (multinode-025900-m02) Calling .GetSSHPort
	I0717 01:16:12.455862   40871 main.go:141] libmachine: (multinode-025900-m02) Calling .GetSSHKeyPath
	I0717 01:16:12.455998   40871 main.go:141] libmachine: (multinode-025900-m02) Calling .GetSSHUsername
	I0717 01:16:12.456124   40871 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19264-3908/.minikube/machines/multinode-025900-m02/id_rsa Username:docker}
	I0717 01:16:12.537948   40871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:16:12.552967   40871 status.go:257] multinode-025900-m02 status: &{Name:multinode-025900-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0717 01:16:12.553002   40871 status.go:255] checking status of multinode-025900-m03 ...
	I0717 01:16:12.553384   40871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 01:16:12.553439   40871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 01:16:12.569512   40871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45843
	I0717 01:16:12.569920   40871 main.go:141] libmachine: () Calling .GetVersion
	I0717 01:16:12.570384   40871 main.go:141] libmachine: Using API Version  1
	I0717 01:16:12.570412   40871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 01:16:12.570725   40871 main.go:141] libmachine: () Calling .GetMachineName
	I0717 01:16:12.570933   40871 main.go:141] libmachine: (multinode-025900-m03) Calling .GetState
	I0717 01:16:12.572601   40871 status.go:330] multinode-025900-m03 host status = "Stopped" (err=<nil>)
	I0717 01:16:12.572618   40871 status.go:343] host is not running, skipping remaining checks
	I0717 01:16:12.572626   40871 status.go:257] multinode-025900-m03 status: &{Name:multinode-025900-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.34s)
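Note: the stderr above walks through minikube's per-node status probe: SSH into the node, check the kubelet unit, locate the apiserver process, then query the apiserver's /healthz endpoint (the freezer-cgroup lookup is tolerated as a warning, as seen here). A minimal sketch of the same probes run by hand, assuming the profile name and the 192.168.39.81 lease from this run:

	out/minikube-linux-amd64 ssh -p multinode-025900 "sudo systemctl is-active --quiet service kubelet" && echo "kubelet: Running"
	out/minikube-linux-amd64 ssh -p multinode-025900 "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"
	curl -k https://192.168.39.81:8443/healthz   # should print "ok", matching the 200 logged above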

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (39.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-025900 node start m03 -v=7 --alsologtostderr: (39.203161283s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.82s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-025900 node delete m03: (1.702922122s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.22s)
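Note: the go-template query above prints one Ready condition status per remaining node. For reference, an equivalent readiness check can be written with jsonpath; this is a sketch, not part of the test:

	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'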

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (181.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-025900 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0717 01:25:17.183136   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
E0717 01:27:41.426672   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-025900 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m0.970379752s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-025900 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (181.50s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (41.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-025900
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-025900-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-025900-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (59.02069ms)

                                                
                                                
-- stdout --
	* [multinode-025900-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19264
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-025900-m02' is duplicated with machine name 'multinode-025900-m02' in profile 'multinode-025900'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-025900-m03 --driver=kvm2  --container-runtime=crio
E0717 01:27:58.380098   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-025900-m03 --driver=kvm2  --container-runtime=crio: (40.226937613s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-025900
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-025900: exit status 80 (206.53394ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-025900 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-025900-m03 already exists in multinode-025900-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-025900-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.26s)
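Note: both failures above are the expected outcome. 'multinode-025900-m02' is already the machine name of a worker inside the 'multinode-025900' profile, so a new profile may not reuse it (exit 14), and 'node add' then fails because the next worker would be named multinode-025900-m03, which collides with the standalone profile of that name created just before (exit 80). A hedged sketch of avoiding the collision outside the test, with a placeholder profile name:

	out/minikube-linux-amd64 profile list                                      # inspect existing profiles first
	out/minikube-linux-amd64 start -p some-unique-profile --driver=kvm2 --container-runtime=crio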

                                                
                                    
x
+
TestScheduledStopUnix (110.41s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-699840 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-699840 --memory=2048 --driver=kvm2  --container-runtime=crio: (38.882972178s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-699840 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-699840 -n scheduled-stop-699840
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-699840 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-699840 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-699840 -n scheduled-stop-699840
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-699840
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-699840 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0717 01:35:00.230687   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
E0717 01:35:17.179178   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-699840
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-699840: exit status 7 (63.534392ms)

                                                
                                                
-- stdout --
	scheduled-stop-699840
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-699840 -n scheduled-stop-699840
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-699840 -n scheduled-stop-699840: exit status 7 (63.195784ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-699840" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-699840
--- PASS: TestScheduledStopUnix (110.41s)
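Note: the flags exercised above drive minikube's scheduled-stop feature. A minimal sketch of the flow outside the test harness, with a placeholder profile name:

	minikube stop -p demo --schedule 5m                        # arm a stop five minutes from now
	minikube status -p demo --format={{.TimeToStop}}           # remaining time while the stop is armed
	minikube stop -p demo --cancel-scheduled                   # disarm it again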

                                                
                                    
x
+
TestRunningBinaryUpgrade (217.68s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2018199825 start -p running-upgrade-777345 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0717 01:37:58.379249   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2018199825 start -p running-upgrade-777345 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m56.775186255s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-777345 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-777345 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m35.988783801s)
helpers_test.go:175: Cleaning up "running-upgrade-777345" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-777345
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-777345: (1.133814611s)
--- PASS: TestRunningBinaryUpgrade (217.68s)
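Note: the pattern exercised here is an in-place upgrade: a cluster created by an older release binary (v1.26.0 above) is started again, without stopping, by the binary under test. Sketched with placeholder binary paths:

	/path/to/minikube-old start -p upgrade-demo --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	/path/to/minikube-new start -p upgrade-demo --memory=2200 --driver=kvm2 --container-runtime=crio   # same profile, newer binary, cluster left running
	/path/to/minikube-new delete -p upgrade-demo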

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-130517 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-130517 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (78.346561ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-130517] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19264
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
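Note: as the error states, --no-kubernetes and --kubernetes-version are mutually exclusive; if a version has been pinned in the global config it has to be unset first. A sketch, with an arbitrary profile name:

	minikube config unset kubernetes-version
	minikube start -p nok8s-demo --no-kubernetes --driver=kvm2 --container-runtime=crio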

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (89.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-130517 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-130517 --driver=kvm2  --container-runtime=crio: (1m29.216377141s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-130517 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (89.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (2.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-894370 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-894370 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (105.685411ms)

                                                
                                                
-- stdout --
	* [false-894370] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19264
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 01:35:22.273529   49345 out.go:291] Setting OutFile to fd 1 ...
	I0717 01:35:22.273796   49345 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:35:22.273807   49345 out.go:304] Setting ErrFile to fd 2...
	I0717 01:35:22.273811   49345 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:35:22.274009   49345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19264-3908/.minikube/bin
	I0717 01:35:22.274607   49345 out.go:298] Setting JSON to false
	I0717 01:35:22.275477   49345 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4664,"bootTime":1721175458,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 01:35:22.275528   49345 start.go:139] virtualization: kvm guest
	I0717 01:35:22.277696   49345 out.go:177] * [false-894370] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 01:35:22.279083   49345 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 01:35:22.279151   49345 notify.go:220] Checking for updates...
	I0717 01:35:22.281806   49345 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 01:35:22.283028   49345 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19264-3908/kubeconfig
	I0717 01:35:22.284294   49345 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19264-3908/.minikube
	I0717 01:35:22.285465   49345 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 01:35:22.286682   49345 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 01:35:22.288311   49345 config.go:182] Loaded profile config "NoKubernetes-130517": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:35:22.288421   49345 config.go:182] Loaded profile config "force-systemd-env-195512": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:35:22.288519   49345 config.go:182] Loaded profile config "offline-crio-089839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 01:35:22.288640   49345 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 01:35:22.325524   49345 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 01:35:22.327104   49345 start.go:297] selected driver: kvm2
	I0717 01:35:22.327122   49345 start.go:901] validating driver "kvm2" against <nil>
	I0717 01:35:22.327138   49345 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 01:35:22.329327   49345 out.go:177] 
	W0717 01:35:22.330509   49345 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0717 01:35:22.331709   49345 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-894370 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-894370

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-894370

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-894370

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-894370

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-894370

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-894370

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-894370

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-894370

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-894370

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-894370

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-894370

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-894370" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-894370" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-894370

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-894370"

                                                
                                                
----------------------- debugLogs end: false-894370 [took: 2.533578475s] --------------------------------
helpers_test.go:175: Cleaning up "false-894370" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-894370
--- PASS: TestNetworkPlugins/group/false (2.77s)
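Note: the exit-14 failure is the expected behaviour here: with the crio runtime a CNI is mandatory, so --cni=false is rejected before any VM is created, which is also why every debugLogs probe above reports a missing profile or context. A sketch of one valid alternative, using a placeholder profile name:

	out/minikube-linux-amd64 start -p cni-demo --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio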

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (41.81s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-130517 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-130517 --no-kubernetes --driver=kvm2  --container-runtime=crio: (40.563394795s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-130517 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-130517 status -o json: exit status 2 (222.594517ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-130517","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-130517
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-130517: (1.021680212s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (41.81s)
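Note: the exit status 2 above is expected: with --no-kubernetes the kubelet and apiserver are reported as Stopped while the host keeps running, and status propagates that as a non-zero exit. A sketch of pulling the same fields out of the JSON (jq is an assumption, not something the test uses):

	out/minikube-linux-amd64 -p NoKubernetes-130517 status -o json | jq -r '.Host, .Kubelet, .APIServer'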

                                                
                                    
x
+
TestNoKubernetes/serial/Start (46.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-130517 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-130517 --no-kubernetes --driver=kvm2  --container-runtime=crio: (46.587123991s)
--- PASS: TestNoKubernetes/serial/Start (46.59s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-130517 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-130517 "sudo systemctl is-active --quiet service kubelet": exit status 1 (200.41356ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
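Note: "Process exited with status 3" is what the test wants to see: systemctl is-active exits 0 only when the unit is active, and 3 is conventionally its "inactive" code. Dropping --quiet makes the state readable; a sketch:

	out/minikube-linux-amd64 ssh -p NoKubernetes-130517 "sudo systemctl is-active kubelet"   # prints "inactive" and exits non-zero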

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.79s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.79s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-130517
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-130517: (1.326685439s)
--- PASS: TestNoKubernetes/serial/Stop (1.33s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (62.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-130517 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-130517 --driver=kvm2  --container-runtime=crio: (1m2.627214406s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (62.63s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-130517 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-130517 "sudo systemctl is-active --quiet service kubelet": exit status 1 (182.658996ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (3.25s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.25s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (97.89s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4104360677 start -p stopped-upgrade-156268 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4104360677 start -p stopped-upgrade-156268 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (52.301869981s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4104360677 -p stopped-upgrade-156268 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4104360677 -p stopped-upgrade-156268 stop: (2.133953382s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-156268 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-156268 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.45654931s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (97.89s)
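Note: this is the stopped-upgrade counterpart to TestRunningBinaryUpgrade above: create with the old binary, stop the cluster, then start the same profile with the binary under test. Sketched with placeholder paths:

	/path/to/minikube-old start -p stopped-demo --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	/path/to/minikube-old -p stopped-demo stop
	/path/to/minikube-new start -p stopped-demo --memory=2200 --driver=kvm2 --container-runtime=crio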

                                                
                                    
x
+
TestPause/serial/Start (96.26s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-056024 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-056024 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m36.260276318s)
--- PASS: TestPause/serial/Start (96.26s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-156268
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (112.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-894370 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-894370 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m52.373472118s)
--- PASS: TestNetworkPlugins/group/auto/Start (112.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (103.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-894370 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-894370 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m43.288996718s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (103.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-894370 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-894370 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-vt7zx" [5d9ec2a7-d971-450c-9f27-a9796ceb55fc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-vt7zx" [5d9ec2a7-d971-450c-9f27-a9796ceb55fc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003463043s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-tjrjz" [5175b71b-f875-4cd6-b743-a3b9059ac1d5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004383296s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-894370 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-894370 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-lh6bn" [9e65d35a-25cf-497d-b769-df7724276216] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-lh6bn" [9e65d35a-25cf-497d-b769-df7724276216] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004007898s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-894370 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-894370 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-894370 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.21s)
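
The DNS, Localhost, and HairPin checks above (and their counterparts in the other plugin groups below) all run against the same netcat deployment. A minimal manual reproduction sketch, assuming the deployment from testdata/netcat-deployment.yaml is still present and reusing the context name from this log:

# resolve the cluster DNS name from inside the pod (DNS check)
kubectl --context auto-894370 exec deployment/netcat -- nslookup kubernetes.default
# probe the pod's own localhost port (Localhost check)
kubectl --context auto-894370 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# connect back to the pod via the netcat service name (HairPin check)
kubectl --context auto-894370 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"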

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (87.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-894370 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-894370 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m27.956956026s)
--- PASS: TestNetworkPlugins/group/calico/Start (87.96s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-894370 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-894370 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-894370 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-894370 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-894370 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m35.998397532s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (96.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (100.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-894370 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-894370 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m40.2962848s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (100.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (122.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-894370 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0717 01:44:21.427078   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/addons-384227/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-894370 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m2.870189708s)
--- PASS: TestNetworkPlugins/group/flannel/Start (122.87s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-mnjtb" [2ab6136a-cd08-4e79-bfe9-ef2582daa352] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.087380734s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-894370 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.54s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-894370 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-5rstl" [d1ecf372-52fc-4c4a-97cf-cf4b70f583e1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-5rstl" [d1ecf372-52fc-4c4a-97cf-cf4b70f583e1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003682776s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-894370 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-894370 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-894370 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-894370 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-894370 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-6qsnj" [96455565-29f3-4dee-8cea-fbe6217adf93] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-6qsnj" [96455565-29f3-4dee-8cea-fbe6217adf93] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004189791s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-894370 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-894370 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-qxrcw" [d71fc4c3-2e1c-46be-85c7-16f68321b407] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-qxrcw" [d71fc4c3-2e1c-46be-85c7-16f68321b407] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005070652s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-894370 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-894370 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-894370 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (102.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-894370 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-894370 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m42.265891289s)
--- PASS: TestNetworkPlugins/group/bridge/Start (102.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-894370 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-894370 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-894370 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (164.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-391501 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-391501 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (2m44.777592809s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (164.78s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-srh77" [53517671-84fe-4411-ac69-e2bb7dabdf21] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00398135s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-894370 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-894370 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-gtfbx" [a96f610d-cdb9-492d-829e-d667c7c0a9ef] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-gtfbx" [a96f610d-cdb9-492d-829e-d667c7c0a9ef] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.037930963s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-894370 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-894370 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-894370 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (62.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-940222 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-940222 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (1m2.948149531s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (62.95s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-894370 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-894370 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-2ssxm" [1c768038-565f-4e7f-a89c-0cd99984370d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-2ssxm" [1c768038-565f-4e7f-a89c-0cd99984370d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003851607s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-894370 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-894370 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-894370 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)
E0717 02:16:03.458917   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/flannel-894370/client.crt: no such file or directory

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-738184 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-738184 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (55.393495441s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.39s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-940222 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [44f768a2-54fc-4549-a808-df47ce510fc9] Pending
helpers_test.go:344: "busybox" [44f768a2-54fc-4549-a808-df47ce510fc9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [44f768a2-54fc-4549-a808-df47ce510fc9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.005080981s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-940222 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.28s)
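
The DeployApp steps in this group follow the same pattern: create the busybox pod from testdata/busybox.yaml, wait for it to become healthy, then read the container's open-file limit. A rough manual equivalent, assuming the profile name from this log and using kubectl wait as a stand-in for the test helper's own readiness polling:

kubectl --context embed-certs-940222 create -f testdata/busybox.yaml
# the test polls for pods matching "integration-test=busybox"; kubectl wait approximates that step
kubectl --context embed-certs-940222 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
kubectl --context embed-certs-940222 exec busybox -- /bin/sh -c "ulimit -n"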

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-940222 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-940222 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-738184 create -f testdata/busybox.yaml
E0717 01:48:20.869866   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/kindnet-894370/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [593e2c6d-7dfd-4341-8cd6-a6555c12c9bb] Pending
helpers_test.go:344: "busybox" [593e2c6d-7dfd-4341-8cd6-a6555c12c9bb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [593e2c6d-7dfd-4341-8cd6-a6555c12c9bb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 12.004025351s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-738184 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.29s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (12.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-391501 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3b947b99-0082-478b-a2d5-79b1909fa513] Pending
helpers_test.go:344: "busybox" [3b947b99-0082-478b-a2d5-79b1909fa513] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3b947b99-0082-478b-a2d5-79b1909fa513] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 12.003932571s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-391501 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (12.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-738184 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-738184 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.91s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-391501 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-391501 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.91s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (636.58s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-940222 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
E0717 01:50:23.745029   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/custom-flannel-894370/client.crt: no such file or directory
E0717 01:50:25.140840   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/enable-default-cni-894370/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-940222 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (10m36.334562804s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-940222 -n embed-certs-940222
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (636.58s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (530.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-738184 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
E0717 01:51:06.017830   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/flannel-894370/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-738184 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (8m49.952430392s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-738184 -n default-k8s-diff-port-738184
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (530.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (664.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-391501 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0717 01:51:13.699382   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/flannel-894370/client.crt: no such file or directory
E0717 01:51:23.940358   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/flannel-894370/client.crt: no such file or directory
E0717 01:51:25.185965   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/custom-flannel-894370/client.crt: no such file or directory
E0717 01:51:36.822825   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/enable-default-cni-894370/client.crt: no such file or directory
E0717 01:51:40.231740   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/functional-023523/client.crt: no such file or directory
E0717 01:51:44.420914   11259 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19264-3908/.minikube/profiles/flannel-894370/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-391501 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (11m4.721768257s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-391501 -n no-preload-391501
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (664.97s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-901761 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-901761 --alsologtostderr -v=3: (1.338111215s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.34s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-901761 -n old-k8s-version-901761
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-901761 -n old-k8s-version-901761: exit status 7 (61.521819ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-901761 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)
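
EnableAddonAfterStop verifies that an addon can still be enabled against a stopped profile: status exits non-zero (7 here, which the test records as "may be ok" for a stopped host) and the dashboard addon is then enabled anyway. A small shell sketch of that flow, reusing the profile name and commands from this log:

out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-901761 -n old-k8s-version-901761
# exit status 7 with "Stopped" on stdout is expected after the Stop step; continue regardless
out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-901761 --images=MetricsScraper=registry.k8s.io/echoserver:1.4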

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (48.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-386113 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-386113 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (48.360303385s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.36s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-386113 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-386113 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.405396785s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.41s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-386113 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-386113 --alsologtostderr -v=3: (7.404765736s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.40s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-386113 -n newest-cni-386113
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-386113 -n newest-cni-386113: exit status 7 (64.981186ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-386113 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    

Test skip (40/320)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.2/cached-images 0
15 TestDownloadOnly/v1.30.2/binaries 0
16 TestDownloadOnly/v1.30.2/kubectl 0
23 TestDownloadOnly/v1.31.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.31.0-beta.0/binaries 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
50 TestAddons/parallel/Volcano 0
57 TestDockerFlags 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
131 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
132 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
133 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
135 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
136 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
137 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
163 TestGvisorAddon 0
185 TestImageBuild 0
212 TestKicCustomNetwork 0
213 TestKicExistingNetwork 0
214 TestKicCustomSubnet 0
215 TestKicStaticIP 0
247 TestChangeNoneUser 0
250 TestScheduledStopWindows 0
252 TestSkaffold 0
254 TestInsufficientStorage 0
258 TestMissingContainerUpgrade 0
263 TestNetworkPlugins/group/kubenet 2.63
272 TestNetworkPlugins/group/cilium 3.25
285 TestStartStop/group/disable-driver-mounts 0.15
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Volcano
addons_test.go:871: skipping: crio not supported
--- SKIP: TestAddons/parallel/Volcano (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
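Note: all seven TunnelCmd skips above share one cause — on this host, running 'route' through sudo would prompt for a password. As a rough illustration only (not minikube's actual check in functional_test_tunnel_test.go; the helper name and probe command are assumptions), a non-interactive sudo probe is one common way to decide to skip instead of hanging on a password prompt:

package main

import (
	"fmt"
	"os/exec"
)

// canSudoRouteWithoutPassword reports whether `route` can run through sudo
// without prompting. `sudo -n` (non-interactive) fails immediately rather than
// asking for a password, so a failure here corresponds to the
// "password required to execute 'route'" skip seen in the log above.
func canSudoRouteWithoutPassword() bool {
	return exec.Command("sudo", "-n", "route", "-n").Run() == nil
}

func main() {
	if !canSudoRouteWithoutPassword() {
		fmt.Println("password required to execute 'route', skipping testTunnel")
		return
	}
	fmt.Println("route is usable without a password; tunnel tests could run")
}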

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-894370 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-894370

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-894370

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-894370

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-894370

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-894370

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-894370

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-894370

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-894370

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-894370

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-894370

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-894370

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-894370" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-894370" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-894370

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-894370"

                                                
                                                
----------------------- debugLogs end: kubenet-894370 [took: 2.494692207s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-894370" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-894370
--- SKIP: TestNetworkPlugins/group/kubenet (2.63s)
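Note: net_test.go:93 skips the kubenet variant because kubenet bypasses CNI, which the crio runtime requires; every probe in the debugLogs dump above fails with "context was not found" simply because no kubenet-894370 profile was ever started. A minimal sketch of such a runtime-conditional skip with the standard testing package (the runtimeRequiresCNI predicate is hypothetical, not the test's real code):

package example

import "testing"

// runtimeRequiresCNI is a hypothetical predicate: crio and containerd clusters
// need a CNI plugin, which the legacy kubenet network plugin does not provide.
func runtimeRequiresCNI(runtime string) bool {
	return runtime == "crio" || runtime == "containerd"
}

// maybeSkipKubenet skips the calling test when the configured runtime needs CNI.
func maybeSkipKubenet(t *testing.T, runtime string) {
	if runtimeRequiresCNI(runtime) {
		t.Skipf("Skipping the test as %s container runtime requires CNI", runtime)
	}
}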

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-894370 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-894370

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-894370

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-894370

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-894370

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-894370

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-894370

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-894370

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-894370

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-894370

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-894370

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-894370

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-894370" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-894370

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-894370

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-894370

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-894370

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-894370" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-894370" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-894370

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-894370" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-894370"

                                                
                                                
----------------------- debugLogs end: cilium-894370 [took: 3.111312125s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-894370" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-894370
--- SKIP: TestNetworkPlugins/group/cilium (3.25s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-255698" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-255698
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)
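Note: this skip is driver-gated — the test only applies to the virtualbox driver, so on KVM it is skipped after the throwaway profile is deleted. A minimal sketch of that kind of gate using the standard testing package (the helper and its driver argument are assumptions, not minikube's actual API):

package example

import "testing"

// skipUnlessVirtualBox skips the calling test when the active minikube driver
// is anything other than virtualbox, mirroring the reason logged by
// start_stop_delete_test.go:103.
func skipUnlessVirtualBox(t *testing.T, driver string) {
	if driver != "virtualbox" {
		t.Skipf("only runs on virtualbox, current driver is %s", driver)
	}
}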

                                                
                                    